[FOM] Embedding intuitionistic logic in classical Joao Marcos botocudo at gmail.com Fri Sep 23 10:43:17 EDT 2011 > this may be a somewhat trivial question, but I have not found the answer anywhere yet, > and I have not managed to prove or disprove it on my own. Classical propositional logic > can be embedded in intuitionistic propositional logic via Glivenko's theorem, and > classical predicate logic can be embedded in intuitionistic predicate logic using the > negative translation. Furthermore, intuitionistic propositional logic can be embedded > in classical S4 logic. But is any of the following possible? > (i) Embedding intuitionistic propositional logic in classical propositional logic. > (ii) Embedding intuitionistic predicate logic in classical predicate logic. > (iii) Embedding intuitionistic S4 propositional logic in classical S4 propositional logic. I guess the answer pretty much depends on what you mean by "embedding a logic into another". One possibility is to *(conservatively) translate* a logic L1 into a logic L2 (as in the theorems by Glivenko, Godel and Kuroda), by providing an inference-preserving mapping from the formulas of L1 into the formulas of L2. In that sense, however, a result announced by Jerabek just a few weeks ago (http://arxiv.org/abs/1108.6263) claims that there are conservative translations between "(almost) any two reasonable deductive systems". It follows from those results that the answer to your questions (i) and (iii) is yes. Moreover, the paper also shows the translations to be computable, in both cases (i) and (ii). More information about the FOM mailing list
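For concreteness, the "negative translation" referred to above can be taken to be the Gödel–Gentzen translation; one common formulation (clause details vary slightly between authors) sends each formula A to a formula A^N as follows:

$$
\begin{aligned}
P^{N} &= \neg\neg P \ \ (P \text{ atomic}), & (\neg A)^{N} &= \neg A^{N},\\
(A \wedge B)^{N} &= A^{N} \wedge B^{N}, & (A \to B)^{N} &= A^{N} \to B^{N},\\
(A \vee B)^{N} &= \neg(\neg A^{N} \wedge \neg B^{N}), & (\forall x\, A)^{N} &= \forall x\, A^{N},\\
(\exists x\, A)^{N} &= \neg \forall x\, \neg A^{N},
\end{aligned}
$$

and it has the property that A is provable classically if and only if A^N is provable intuitionistically.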
{"url":"http://www.cs.nyu.edu/pipermail/fom/2011-September/015803.html","timestamp":"2014-04-18T10:38:49Z","content_type":null,"content_length":"4496","record_id":"<urn:uuid:8979614b-fdc1-4e6f-9f27-da5b0fc73ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Pseudo-rational input/output maps and their realizations: a fractional representation approach to infinite-dimensional systems - IEEE Transactions on Automatic Control , 1994 "... This paper presents a new framework for hybrid sampled-data control systems. ..." - in Essays on Control, eds. H.L.Trentelman and J.C. Willems, Birkhauser , 1993 "... The basic features of a special type of learning control scheme, currently known as repetitive control are reviewed. It is seen that this control scheme also induces varied interesting theoretical problems--- particularly those related to infinite-dimensional systems. They include such problems ..." Cited by 6 (1 self) Add to MetaCart The basic features of a special type of learning control scheme, currently known as repetitive control are reviewed. It is seen that this control scheme also induces varied interesting theoretical problems--- particularly those related to infinite-dimensional systems. They include such problems as the internal model principle, minimal representation of transfer functions, fractional representations, stability characterizations, correspondence of internal and external stability, etc. This article intends to give a comprehensive overview of the repetitive control scheme as well as the discussion of these related theoretical problems for infinite-dimensional systems. , 1990 "... It is well known that for infinite-dimensional systems, exponential stability is not necessarily determined by the location of spectrum. ..." "... In these notes, we give a short introduction to the fractional representation approach to analysis and synthesis problems [12], [14], [17], [28], [29], [50], [71], [77], [78]. In particular, using algebraic analysis (commutative algebra, module theory, homological algebra, Banach algebras), we shall ..." Cited by 3 (1 self) Add to MetaCart In these notes, we give a short introduction to the fractional representation approach to analysis and synthesis problems [12], [14], [17], [28], [29], [50], [71], [77], [78]. In particular, using algebraic analysis (commutative algebra, module theory, homological algebra, Banach algebras), we shall give necessary and sufficient conditions for a plant to be internally stabilizable or to admit (weakly) left/right/doubly coprime factorizations. Moreover, we shall explicitely characterize all the rings A of SISO stable plants such that every plant defined by means of a transfer matrix with entries in the quotient field K = Q(A) of A satisfies one of the previous properties (e.g. internal stabilization, (weakly) doubly coprime factorizations). Using the previous results, we shall show how to parametrize all stabilizing controllers of an internally stabilizable plants which does not necessarily admits a doubly coprime factorization. Finally, we shall give some necessary and sufficient conditions so that a plant is strongly stabilizable (i.e. stabilizable by a stable controller) and prove that every internally stabilizable MIMO plant over A = H# (C+ ) is strongly - Automatica , 1991 "... In the current study of robust stability of infinite-dimensional systems, internal exponential stability is not necessarily guaranteed. This paper introduces a new class of impulse responses called in which the usual notion of L -input/output stability guarantees not only external but also inter ..." Cited by 2 (2 self) Add to MetaCart In the current study of robust stability of infinite-dimensional systems, internal exponential stability is not necessarily guaranteed. 
This paper introduces a new class of impulse responses called in which the usual notion of L -input/output stability guarantees not only external but also internal exponential stability. The result is applied to derive a closed-loop stability condition, and a version of the small gain theorem with internal exponential stability; this leads to a robust stability condition that also assures internal stability. An application to repetitive control systems is shown to illustrate the results. "... establishes a result linking algebraically coprime factorizations of transfer matrices of delay systems to approximately coprime factorizations in the sense of distribu-tions. The latter have been employed by the second author in the study of function-space controllability for such systems. ..." Cited by 2 (0 self) Add to MetaCart establishes a result linking algebraically coprime factorizations of transfer matrices of delay systems to approximately coprime factorizations in the sense of distribu-tions. The latter have been employed by the second author in the study of function-space controllability for such systems. "... Coprimeness conditions play important roles in various aspects of system/control theory: realization, controllability, stabilization, just to name a few. While the issue is now well understood for finite-dimensional systems, it is far from being settled for infinite-dimensional systems. This is due ..." Add to MetaCart Coprimeness conditions play important roles in various aspects of system/control theory: realization, controllability, stabilization, just to name a few. While the issue is now well understood for finite-dimensional systems, it is far from being settled for infinite-dimensional systems. This is due to a wide variety of situations in which this issue occurs, and several variants of coprimeness notions, which are equivalent in the finite-dimensional context, turn out to be non-equivalent. This paper studies the notions of spectral, approximate and exact coprimeness for pseudorational transfer functions. A condition is given under which these notions coincide. 1 - Systems Control Lett "... This paper gives some equivalent characterizations for invariant subspaces of H , when the underlying structure is specified by the so-called pseudorational transfer functions. This plays a fundamental role in computing the optimal sensitivity for a certain important class of infinite-dimensional ..." Add to MetaCart This paper gives some equivalent characterizations for invariant subspaces of H , when the underlying structure is specified by the so-called pseudorational transfer functions. This plays a fundamental role in computing the optimal sensitivity for a certain important class of infinite-dimensional systems, including delay systems. A closed formula, easier to compute than the well-known Zhou-Khargonekar formula, is given for optimal sensitivity for such systems. An example is given to illustrate the result. 1 "... Abstract: There are many, nonequivalent notions of minimality in state space representations for delay systems. In this class, one can express the transfer function as a ratio of two exponential polynomials. Then one can introduce various notions of coprimeness in such a representation. For example, ..." Add to MetaCart Abstract: There are many, nonequivalent notions of minimality in state space representations for delay systems. In this class, one can express the transfer function as a ratio of two exponential polynomials. 
Then one can introduce various notions of coprimeness in such a representation. For example, if there is no common zeros between the numerator and denominator, it corresponds to a spectrally minimal realization, i.e., all eigenspaces are reachable. Another fact is that if the numerator and denominator are approximately coprime in some sense, then it corresponds to approximate reachability. All these are nicely embraced in the class of pseudorational transfer functions introduced by the author. The central question here is to characterize the Bézout identity in this class. This is shown to correspond to a non-cancellation property in the extended complex plane, including infinity. This leads to a unified understanding of coprimeness conditions for commensurate and non-commensurable delay cases. Various examples are examined in the light of the general theorem obtained here. 1.
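For readers unfamiliar with the terminology in these abstracts, a coprime factorization writes a possibly unstable transfer function as a ratio of stable factors, with coprimeness certified by a Bézout identity. Schematically, in the simplest single-input single-output setting (this generic form over a ring of stable transfer functions is only meant as orientation, not as the precise pseudorational setting of the papers above):

$$P = \frac{N}{D}, \qquad X\,N + Y\,D = 1,$$

with N, D, X, Y all stable (for instance in H-infinity). Characterizing when such a Bézout identity can be solved for pseudorational (e.g. delay-system) transfer functions is the question raised in the last abstract.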
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1402339","timestamp":"2014-04-21T00:59:04Z","content_type":null,"content_length":"32569","record_id":"<urn:uuid:42d1a0d6-42dc-4816-981c-2ce561d091ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Curves which do not dominate other curves

Let $g>1$ be an integer. Does there exist a (smooth projective) genus $g$ curve $X$ which doesn't dominate a curve of positive genus smaller than $g$? Surely such curves exist. Just take a curve with simple jacobian. I'm wondering whether there are any other interesting examples, or is the above property equivalent to having simple jacobian?

ag.algebraic-geometry algebraic-curves jacobians

I bet there are curves with non-simple jacobians but such that the factors of the jacobian are not themselves isogenous to jacobians, so these curves don't dominate other curves. Maybe one can show these exist by dimension count. – Felipe Voloch Jul 29 '13 at 18:21

This is not quite what you are looking for, but: I have come across a genus 3 curve C that maps to an elliptic curve, but provably without a map to a genus 2 curve. (I.e., the Jacobian of C is, up to isogeny, E x J_H for some hyperelliptic curve H, but C does not map to H.) – David Zureick-Brown♦ Jul 29 '13 at 19:05

Is there some restriction on the polarization too? – Ian Agol Jul 30 '13 at 22:56

I agree with Felipe. Take a genus 2 curve and ask whether it covers a genus one curve. It would seem to have 2 ramification points, hence exhaust only 2 dimensions (?) in the 3 dimensional moduli space of genus 2 curves. As g goes up it seems only to get worse. – roy smith Aug 5 '13 at 2:23

1 Answer

I elaborate a bit on Roy Smith's comment. One can show that the general curve of genus $g>1$ does not map onto a curve of genus $h>0$ by a dimension count, at least for complex curves. Let $f\colon C\to D$ be a map of degree $d$ from a curve of genus $g$ to a curve of genus $h>0$ and let $B$ be the branch divisor of $f$ (a point $P\in D$ appears in $B$ with multiplicity equal to $\sum_{Q\mapsto P}(m_Q-1)$, where $m_Q$ is the order of ramification of $f$ at $Q$). The Hurwitz formula gives: $$2g-2=d(2h-2)+\deg B.$$ Let $S$ be the support of $B$ (i.e., $S$ is the set of critical values of $f$) and consider the restricted cover $f_0\colon C\setminus f^{-1}(S)\to D\setminus S$: this is a topological cover of degree $d$ and it determines $f$. There are finitely many such covers, hence the maps $f\colon C\to D$ as above depend on $3h-3+s$ parameters, where $s$ is the cardinality of $S$. Since $s\le \deg B$, the Hurwitz formula gives: $$(3g-3)-(3h-3+s)\ge 3(d-1)(h-1)+s/2>0.$$ So the general curve of genus $g>1$ does not have a map of degree $d$ onto a curve of genus $h>0$. Now it is enough to observe that, again by the Hurwitz formula, there are finitely many possibilities for $h$ and $d$.

I don't think that a curve without maps onto curves of positive genus must have a simple Jacobian: if one takes a curve $C$ inside an irregular surface $S$ such that $C$ is an ample divisor of $S$, then there is an injection $Pic^0(S)\to J(C)$, and I do not see why in general $C$ should have a map onto a curve of positive genus. To get an actual counterexample, one could try to look at an abelian surface $S$ with a polarization $L$ of type $(1,3)$ and a curve $C\in |L|$, but I don't know how to show that a general such $C$ does not map onto a curve of positive genus.
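A concrete instance of the dimension count in the answer above: take $g=2$ and $h=1$. The Hurwitz formula gives $$2g-2=d(2h-2)+\deg B \quad\Longrightarrow\quad 2 = 0 + \deg B,$$ so $\deg B = 2$ and $s\le 2$; such covers therefore move in at most $3h-3+s\le 2$ parameters, while $\dim M_2 = 3g-3 = 3$. This is exactly the count in Roy Smith's comment: the general genus 2 curve does not cover an elliptic curve.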
{"url":"http://mathoverflow.net/questions/138102/curves-which-do-not-dominate-other-curves","timestamp":"2014-04-16T11:13:54Z","content_type":null,"content_length":"55921","record_id":"<urn:uuid:132ecece-d414-4a4c-8230-3380671b4949>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Katabi, Indyk speed up the Fast Fourier Transform Dina Katabi and Piotr Indyk with graduate students Eric Price and Haitham Hassanieh will present a new algorithm this week at the Association for Computing Machinery’s Symposium on Discrete Algorithms (SODA) that, for a large range of practically important cases, improves on the fast Fourier transform -- in some cases yielding a tenfold increase in speed. A very attractive potential of the new algorithm will be in the use of image compression--allowing cell phones to transmit large video files wirelessly without draining batteries or consuming monthly bandwidth allotments. Read more about this new work in the MIT News Office Jan. 18, 2012 article by Larry Hardesty titled "The faster-than-fast Fourier transform. For a large range of practically useful cases, MIT researchers find a way to increase the speed of one of the most important algorithms in the information sciences", also posted in its entirety below. See the group's project website and the paper "Nearly Optimal Sparse Fourier Transform." The Fourier transform is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things. The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found. At the Association for Computing Machinery’s Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments. Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies. "Weighted" means that some of those frequencies count more toward the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That’s why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality. Heavyweight division Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called "sparse." The new algorithm determines the weights of a signal’s most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. 
Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.

"In nature, most of the normal signals are sparse," says Dina Katabi, one of the developers of the new algorithm. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn’t be sparse — but neither would it be a signal that anyone cares about.

The new algorithm — which associate professor Katabi and professor Piotr Indyk, both of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh — relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight. In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies farther outside that range will be attenuated still more; and so on, until you reach the frequencies that are filtered out almost perfectly. If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can’t be identified. So the researchers’ first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.

Zeroing in

Once they’ve isolated a slice of spectrum, however, the researchers still have to identify the most heavily weighted frequency in that slice. In the SODA paper, they do this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power is concentrated. But in an as-yet-unpublished paper, they describe a much more efficient technique, which borrows a signal-processing strategy from 4G cellular networks. Frequencies are generally represented as up-and-down squiggles, but they can also be thought of as oscillations; by sampling the same slice of bandwidth at different times, the researchers can determine where the dominant frequency is in its oscillatory cycle.

Two University of Michigan researchers — Anna Gilbert, a professor of mathematics, and Martin Strauss, an associate professor of mathematics and of electrical engineering and computer science — had previously proposed an algorithm that improved on the FFT for very sparse signals. "Some of the previous work, including my own with Anna Gilbert and so on, would improve upon the fast Fourier transform algorithm, but only if the sparsity k" — the number of heavily weighted frequencies — "was considerably smaller than the input size n," Strauss says. The MIT researchers’ algorithm, however, "greatly expands the number of circumstances where one can beat the traditional FFT," Strauss says. "Even if that number k is starting to get close to n — to all of them being important — this algorithm still gives some improvement over FFT."
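The sparsity idea the article describes can be illustrated with an ordinary FFT in NumPy: build a signal dominated by a handful of frequencies, keep only the heaviest spectral coefficients, and see how little is lost. This is only a toy demonstration of sparsity, not an implementation of the MIT algorithm; all names and parameters below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 1024, 8                       # signal length, number of "heavy" frequencies
freqs = rng.choice(n, size=k, replace=False)
amps = rng.uniform(1.0, 2.0, size=k)

t = np.arange(n)
signal = sum(a * np.cos(2 * np.pi * f * t / n) for a, f in zip(amps, freqs))
signal += 0.01 * rng.standard_normal(n)          # a little noise

spectrum = np.fft.fft(signal)

# Keep only the largest coefficients (2k bins, to cover conjugate pairs), zero the rest.
heavy = np.argsort(np.abs(spectrum))[-2 * k:]
sparse_spectrum = np.zeros_like(spectrum)
sparse_spectrum[heavy] = spectrum[heavy]

reconstruction = np.fft.ifft(sparse_spectrum).real
rel_error = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
print(f"kept {2 * k} of {n} bins, relative error {rel_error:.3f}")
```

Almost all of the signal energy survives even though only a tiny fraction of the spectrum is kept, which is exactly the structure the sparse FFT exploits.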
{"url":"https://www.eecs.mit.edu/news-events/media/katabi-indyk-speed-fast-fourier-transform","timestamp":"2014-04-17T12:43:24Z","content_type":null,"content_length":"40858","record_id":"<urn:uuid:db4770ce-eb67-42cc-b650-ebb4a1c1e908>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
How to solve ODE for independent function

I need to solve numerically an equation of the form v(t) = k1*z(t)*w(t) - k2*i(t) - k3*di(t)/dt. The issue is that Runge-Kutta methods are useful for solving di(t)/dt = 1/k3 * [ k1*z(t)*w(t) - k2*i(t) - v(t) ], but I need to solve for v(t). What I did was: v(t) = k1*z(t)*w(t) - k2*i(t) - k3*[i(t) - i(t-h)]/h. But it is not a good approximation because the step size h cannot be small enough. I need a more sophisticated method than directly applying the difference quotient as I did. Thanks a lot!

Re: How to solve ODE for independent function
What functions are known, and what functions are unknown? Presumably i and v are unknown. If so, you need another equation in order to solve for v numerically. Is this equation from a circuit? If so, please post the circuit.

Re: How to solve ODE for independent function
Thanks Ackbeet! Well, the only unknown is v(t). The equations are for an electric generator in stand-alone operation; the actual equation is one where Lq, Ld, and Rs are constant parameters and vd, iq, wr, and id are functions of time. My first approach of course was did/dt = (id(t) - id(t-h))/h, then I improved it with a higher-order approximation of the form did/dt = (3*id(t) - 4*id(t-h) + id(t-2h))/(2*h). But I still have the same problem that did/dt oscillates too much and gets unstable with h less than 0.002, which is too big for me. The variables are declared as double in the C code, to have better precision.

Re: How to solve ODE for independent function
It doesn't look like you're actually solving an ODE. What you have, correct me if I'm wrong, are the numerical values for all the functions/values on the RHS of your equation, and from that you want to construct the function on the LHS. Is that correct? If so, I would investigate multi-point approximations for derivatives a bit more even than you have. You might want to check out the derivative on both sides of t. So far, all I see are t, t-h, and t-2h. What about t+h and t+2h? Those points might be useful. Perhaps a symmetric approximation of the derivative?

Re: How to solve ODE for independent function
You are right in your statement of what I'm trying to do. But I can't use future values because I'm using this equation to produce values of vd in a real-time application. So, I obtain say vd(t1) and with that value I calculate id(t2), iq(t2), and w(t2). With the new values I calculate vd(t2). So I can save the history of the system and use it to calculate the derivative, but I don't have future values.
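A minimal sketch (in Python rather than the C mentioned in the thread) of the purely causal scheme under discussion: v(t) is reconstructed at each step from the latest current samples using the second-order backward difference for di/dt, so no future values are needed. The constants and the stand-in signals z, w and i_meas are placeholders of my own, not values from the thread.

```python
import math

# Placeholder constants and input signals (replace with the real ones).
k1, k2, k3 = 1.0, 0.5, 0.01
h = 1e-4                      # time step

def z(t): return 1.0
def w(t): return 100.0
def i_meas(t): return math.sin(2 * math.pi * 50 * t)   # measured current, stand-in

def v_of_t(t, h=h):
    """v(t) = k1*z*w - k2*i - k3*di/dt, with di/dt from a 2nd-order
    backward difference that uses only current and past samples."""
    didt = (3 * i_meas(t) - 4 * i_meas(t - h) + i_meas(t - 2 * h)) / (2 * h)
    return k1 * z(t) * w(t) - k2 * i_meas(t) - k3 * didt

for n in range(5):
    t = 0.01 + n * h
    print(f"t={t:.5f}  v={v_of_t(t):.4f}")
```

In a real-time loop one would feed v_of_t the stored history of measured samples instead of calling analytic functions; measurement noise on i is what makes the differentiated term oscillate, which is the instability the poster reports.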
{"url":"http://mathhelpforum.com/advanced-applied-math/185932-how-solve-ode-independent-function.html","timestamp":"2014-04-18T12:49:33Z","content_type":null,"content_length":"43424","record_id":"<urn:uuid:816b93a0-5ee9-4196-bc6e-f522494562eb>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
How To Split Black Holes

Ask a physicist how to split a black hole, and you will receive the reply "That's impossible". Ask for further clarification, and you will get a lecture on black hole thermodynamics. The argument against the splitting of a black hole is elegant in its simplicity. It is based on a geometrical interpretation of black hole thermodynamic properties such as black hole energy and black hole entropy. The reasoning is most straightforward for Schwarzschild black holes. From the perspective of a stationary external observer these are nothing more than spherically shaped glowing gravitational horizons. Whatever is behind these horizons is no part of his/her observable universe. As is well known, all thermodynamic properties of the black hole can be expressed in terms of horizon parameters. The total energy content of a black hole can be identified with the horizon circumference, and the total entropy with the horizon's area. This means that in terms of black hole horizons, the first law of thermodynamics (energy conservation) manifests itself as conservation of circumference, and the second law (entropy non-decrease) as non-decrease of surface area. And the thing is: you can create multiple smaller spheres such that their total circumference is equal to that of a given sphere, but you can do so only by generating a total surface area that is smaller than that of the original sphere. Mathematically this translates into the statement that the set of equations C = A1 + A2 + .. + An, C² ≤ A1² + A2² + .. + An² does not allow for solutions (with n ≥ 2 and all Ai positive). Geometrically this means that starting with three edges with one being the sum of the two others, you can not build a triangle with none of the angles exceeding 90°. This is immediately obvious, as out of the edges provided you can build only a degenerate triangle with one angle equal to 180°. The conclusion is that the splitting of a black hole is thermodynamically forbidden. No matter what tools you bring, you can not cut, crush or crumble a black hole.

It Takes Two To Tango

Unless you throw at the black hole... another black hole. You can't split a single black hole, but it is easy to see that no law of thermodynamics forbids the splitting of a pair of black holes into three or more components. To explore the threshold at which splitting becomes thermodynamically feasible, we focus on reversible collisions. This means we try to find black hole collisions that don't lose or produce entropy and that can be played back and forth in time without violating the laws of thermodynamics. To make this a nice recreational math exercise, we define Schwarzschildean multiplets. First, recall that Pythagorean multiplets (C; A1, A2, .., An) are defined as sets of integers satisfying the equation C² = A1² + A2² + .. + An². An example is the triplet (5; 4, 3). Analogously, we define Schwarzschildean multiplets (C1, C2; A1, A2, .., An) as whole-number solutions to the set of equations C1 + C2 = A1 + A2 + .. + An and C1² + C2² = A1² + A2² + .. + An². Such a Schwarzschildean multiplet describes a reversible collision between black holes with masses C1 and C2 into n black holes with masses A1, .., An. The Schwarzschildean multiplets (1, 1; 1, 1) and (3, 3; 4, 1, 1) can be extended to more complex cases: (13, 13; 16, 9, 1), (21, 21; 25, 16, 1), and so on (do you see the pattern?). All of these describe the splitting of two equal mass black holes into three.

Spray Collisions

Is the splitting of two black holes into more than three black holes thermodynamically allowed? The answer is 'yes'. In fact, the laws of thermodynamics allow two black holes to split into arbitrarily many black holes.
This can be inferred from the fact that the Schwarzschildean multiplet (3, 3; 4, 1, 1) can also be extended to multiplets (6, 4; 7, 1, 1, 1), (10, 5; 11, 1, 1, 1, 1), etc. (again there is a pattern...). Such multiplets represent 'spray reactions' between two black holes resulting in one large black hole and a shower of small black holes. It becomes clear the possibilities are almost endless. Effectively, a black hole colliding with a much larger black hole is allowed to split and fragment into any number of small black holes as long as the large black hole grows sufficiently larger in the process.

One final question. The above spray reactions are between a pair of black holes of unequal mass. Is it also possible to have two black holes of equal mass produce unlimited numbers of black holes? Also here the answer is 'yes'. This follows directly from a simple fact about Schwarzschildean multiplets describing the collision of two equal mass black holes: you can turn any such multiplet describing the production of k black holes into a multiplet describing the production of one more black hole. More specifically, if (M, M; m1, m2, .. , mk) is a valid Schwarzschildean multiplet, then so is (3M, 3M; 4M, m1, m2, .. , mk). Starting from the trivial multiplet (1, 1; 1, 1) and using this recursion relation, one can generate multiplets describing the collision of an equal mass pair into two, three, four, and ever more black holes: (1, 1; 1, 1), (3, 3; 4, 1, 1), (9, 9; 12, 4, 1, 1), etc.

True spray reactions between two equal mass black holes can also occur. We define these as collisions described by a multiplet of the form (m, m; M, 1, 1, .. , 1). The multiplets (1, 1; 1, 1) and (3, 3; 4, 1, 1) can be viewed as the i=1 and i=2 cases of a chain of multiplets (m_i, m_i; M_i, 1, 1, .. , 1) created by starting from these two cases and using the simple recursive equations m_(i+1) = 6 m_i - m_(i-1) - 2 and M_(i+1) = 6 M_i - M_(i-1) - 2. This generates multiplets (15, 15; 21, 1, 1, 1, 1, 1, 1, 1, 1, 1) and so on. The iteration leads to exponentially growing shower reactions, with deep iterations describing 29.3% (a fraction 1 - sqrt(1/2)) of the total mass being converted into a spray of arbitrarily small black holes.

Milky Way-Andromeda Cataclysm

Our Milky Way galaxy is on a head-on collision course with Andromeda. Both galaxies will merge, and both galaxies harbor a supermassive black hole at their centers. The black hole in our Milky Way weighs in at 4.2 million solar masses, and the black hole in Andromeda has grown into a true giant containing 100 million solar masses. What will happen to these black holes during and following the galaxies' merger? A similar but much smaller merger likely happened a few million years ago. The satellite galaxy that merged with our galactic core carried a much smaller (probably several thousand solar masses) black hole. Both black holes almost certainly have found each other, and all signs indicate the fireworks that resulted must have been spectacular. When Andromeda merges with our galaxy, it brings a 10,000 times heavier black hole. It is commonly assumed that ultimately this black hole and our own central black hole will meet. Applying the above 'multiplet math' to the collision of both supermassive black holes tells us that in theory the vast majority of the 4.2 million solar masses contained in the supermassive black hole in our Milky Way could transform into a shower of many millions or billions of black holes. Not a pleasant perspective. ... do you believe all of this?
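Whatever the physics, the arithmetic of the multiplets listed above is easy to verify. A minimal Python sketch (the function and variable names are mine) checks the two reversibility conditions used in this post, conservation of total mass and of total horizon area (sum of squared masses):

```python
def is_schwarzschildean(parents, children):
    """Check the two reversibility conditions: total mass (horizon
    circumference) is conserved and total horizon area (sum of squared
    masses) is unchanged."""
    mass_ok = sum(parents) == sum(children)
    area_ok = sum(m * m for m in parents) == sum(m * m for m in children)
    return mass_ok and area_ok

examples = [
    ((1, 1), (1, 1)),
    ((3, 3), (4, 1, 1)),
    ((13, 13), (16, 9, 1)),
    ((21, 21), (25, 16, 1)),
    ((6, 4), (7, 1, 1, 1)),
    ((10, 5), (11, 1, 1, 1, 1)),
    ((9, 9), (12, 4, 1, 1)),
    ((15, 15), (21,) + (1,) * 9),
]

for parents, children in examples:
    print(parents, "->", children, is_schwarzschildean(parents, children))
```

Every multiplet quoted in the post passes both checks, so the splittings described are at least consistent with the first and second laws as stated here.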
Surely, the laws of thermodynamics do not forbid these black hole showers. But are these physical? Does all of the Schwarzschildean multiplet stuff represents true physics or is it nothing more than 'frolicking with numbers'? I hope one of you will put the correct answer in a comment below. Failing that, my next post will be dedicated to the physical reality of black hole collisions producing showers of mini black holes. Follow-up blog post: More Hammock Physicist articles on black holes: How to count a black hole Cosmic flash memory Black hole in our backyard? Quantum galaxy: NGC 1277 I guess some other principle must be at work preventing these black hole showers. If these showers would occur, would the universe not be flooded with mini black holes? Or wait.., would that be the solution to the dark matter riddle? I need t think a bit more about this. Jain (not verified) | 12/08/12 | 13:44 PM The universe is full of micro black holes. They are energetic and orbit through and around regular matter and create our weather, seismic and volcanic events through beta decay. We get pummeled by them on Earth during CMEs and solar flares. Hurricane Sandy was an orbiting micro black hole. Stewart (not verified) | 12/11/12 | 21:42 PM Michael Martinez | 12/09/12 | 03:45 AM Very interesting, thanks. Is there any observational evidence that small black holes are not in fact floating around? I'm reminded how black holes do not "suck", any more than a same-sized star would. How rare are "normal" inter-stellar collisions (between two random stars not born as a binary)? Peter Davis (not verified) | 12/09/12 | 22:03 PM Nice analysis. We only need to sort out how the swarm of smaller black holes detaches itself during the collision. ;-) When the horizons of two black holes touch, a new horizon is formed that encompasses both previous black holes (semi static case). So the creation of the swarm cannot in any way involve a direct interaction between space/material within the two horizons (no surprise). This semi-static case leaves an enticing option. If there is a horizon surrounding two black holes that touch at their horizons, is there a horizon when they almost touch? And where is it? And if there is such a "combined" horizon encompassing both black holes incompletely, what happens when the one of the black holes moves with the escape velocity at that point? What happens when the black holes scamp at close the speed of light is beyond my imagination and command of GR. RobvS (not verified) | 12/10/12 | 08:01 AM If we take the case of two black holes, one that is infinitesmally larger than the other, by your logic the larger one must grow a little, and the smaller must burst into an infinitesmally large number of black holes. Unless the small ones shoot out at vastly inflated velocities, it seems that gravity would cause a lot of coelescence. Similarly if a massively larger black hole met a tiny one, the spray off the tiny one would certainly coalesce into the larger. bob goodwin (not verified) | 12/11/12 | 01:27 AM
{"url":"http://www.science20.com/hammock_physicist/how_split_black_holes-98465","timestamp":"2014-04-19T22:55:44Z","content_type":null,"content_length":"53576","record_id":"<urn:uuid:300da8c9-5207-4870-8c3f-c27192969d67>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Relative computability for curves An Isaac Newton Institute Programme Model Theory and Applications to Algebra and Analysis Relative computability for curves 18th May 2005 Author: Kim, Minhyong (University of Arizona) We will discuss the relationship between a number of computability/decidability problems for equations in two variables.
{"url":"http://www.newton.ac.uk/programmes/MAA/kim.html","timestamp":"2014-04-19T22:51:30Z","content_type":null,"content_length":"2006","record_id":"<urn:uuid:008db8e1-8db6-47cb-99a6-2f059dbfb58c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Milton Village Algebra 1 Tutor ...I love teaching, but tutoring really allows that one-on-one time that helps students build the desire for lifelong learning. It is a joy to be part of that process.I am a full time High School Chemistry Teacher. We exceed the MA state standards and usually delve into organic Chemistry at the end of the year. 14 Subjects: including algebra 1, chemistry, Spanish, English ...I am well aware of study methods to improve standardized test scores, and am able to communicate these methods effectively. I have over ten years of formal musical training (instrumental), so I have a solid foundation in music theory. I have been informally singing for as long as I can remember, and formally singing for over four years. 38 Subjects: including algebra 1, English, chemistry, reading ...I found my passion to be a teacher or tutor again when I moved to Boston In 2011. I have a teacher certificate as a physics teacher for teaching at University in China. I used to lecture over 100 students. 5 Subjects: including algebra 1, physics, algebra 2, precalculus ...So many kids have told me after having me in class or as a tutor, "this is the first time I actually like math and feel good at it". It would be my joy to help you feel the same way.I have taught Algebra 1 for more than 25 years. I have also served as a tutor of Algebra 1, Algebra 2, and Trigonometry for the same time period. 6 Subjects: including algebra 1, geometry, algebra 2, prealgebra I have a Master's degree in Mechanical Engineering and a Bachelor's in Material Science Engineering. I can cover any engineering topic and Math, Physics and Chemistry for all levels. During my education I have the experience of teacher assisting for more than 4 years both in college and grad school. 23 Subjects: including algebra 1, chemistry, physics, calculus
{"url":"http://www.purplemath.com/milton_village_ma_algebra_1_tutors.php","timestamp":"2014-04-19T14:57:24Z","content_type":null,"content_length":"24300","record_id":"<urn:uuid:153cfc34-65e1-41b5-8379-d1405602ae18>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Cuthbert Nyack Page Index For Control These pages are mainly intended as supplementary for students pursuing electronics or related subjects. It is not the intention to repeat the explanations that can readily be found in books, but to supplement them with visual interactive illustrations of some of the concepts that students find difficult at first encounter. Most of these illustrations are done with java applets and pages are best viewed with a java enabled browser. The applets were originally written to be projected onto a screen during lecture/demonstration and have been tested with a 3GHz PC with Windows XP. With early versions of WinXP the Java virtual machine must be downloaded from Sun Microsystems. Later releases of WinXP block Java applets(This is the continuing saga of Microsoft trying to bring an end to Java). The following message appears: "To help protect your security, Internet Explorer has restricted this file from showing active content that could access your computer. Click here for options..." . To see the applets here active content must be unblocked. The applets only interact with the screen. Some sample GIF files of the applets are shown. For the Laplace transform used in the analysis of linear systems see:- ASP Introduction to Control For Digital Signal Processing see:- Dspcan For Analog Signal Processing see:- cnyack For Circuits see:- circuits-can Return to main page Copyright © 2006 Cuthbert A. Nyack. Send Feedback on improvements or any questions to:- Email
{"url":"http://controlcan.homestead.com/files/idxpages.htm","timestamp":"2014-04-20T11:01:54Z","content_type":null,"content_length":"39372","record_id":"<urn:uuid:97ab1c29-9f14-46a0-95f1-29de650c59f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Frugal Path Mechanisms
Aaron Archer and Éva Tardos

We consider the problem of selecting a low-cost s-t path in a graph, where the edge costs are a secret known only to the various economic agents who own them. To solve this problem, Nisan and Ronen applied the celebrated Vickrey-Clarke-Groves (VCG) mechanism, which pays a premium to induce the edges to reveal their costs truthfully. We observe that this premium can be unacceptably high. There are simple instances where the mechanism pays Θ(n) times the actual cost of the path, even if there is an alternate path available that costs only (1 + ε) times as much. This inspires the frugal path problem, which is to design a mechanism that selects a path and induces truthful cost revelation without paying such a high premium. This paper contributes negative results on the frugal path problem. On two large classes of graphs, including ones having three node-disjoint s-t paths, we prove that no reasonable mechanism can always avoid paying a high premium to induce truthtelling. In particular, we introduce a general class of min function mechanisms, and show that all min function mechanisms can be forced to overpay just as badly as VCG. On the other hand, we prove that (on two large classes of graphs) every truthful mechanism satisfying some reasonable properties is a min function mechanism.
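The overpayment described in the abstract is easy to reproduce on the standard two-path example: a cheap path of n unit-cost edges competing with a single alternate edge that costs only (1 + ε) times as much. The sketch below is my own illustration (the function names and graph encoding are mine, not code from the paper); it computes the VCG payment to each winning edge as the cost of the best path avoiding that edge, minus what the rest of the winning path costs.

```python
import heapq

def shortest_path(edges, s, t, skip=None):
    """Dijkstra over an undirected graph given as {(u, v): cost}.
    Returns (cost, path_edges); 'skip' is an edge to pretend is absent."""
    adj = {}
    for (u, v), c in edges.items():
        if (u, v) == skip:
            continue
        adj.setdefault(u, []).append((v, c, (u, v)))
        adj.setdefault(v, []).append((u, c, (u, v)))
    best = {s: (0, [])}
    heap = [(0, s, [])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == t:
            return d, path
        if d > best.get(node, (float("inf"),))[0]:
            continue
        for nxt, c, e in adj.get(node, []):
            nd = d + c
            if nd < best.get(nxt, (float("inf"),))[0]:
                best[nxt] = (nd, path + [e])
                heapq.heappush(heap, (nd, nxt, path + [e]))
    return float("inf"), []

# The classic bad instance: a cheap path of n unit-cost edges vs. a single
# alternate edge costing (1 + eps) times as much.
n, eps = 100, 0.1
edges = {(i, i + 1): 1.0 for i in range(n)}     # chain 0-1-...-n, total cost n
edges[(0, n)] = n * (1 + eps)                    # direct edge, cost n*(1+eps)

cost, winner = shortest_path(edges, 0, n)
payments = {}
for e in winner:
    cost_without_e, _ = shortest_path(edges, 0, n, skip=e)
    # VCG payment to edge e: the harm its absence would cause, plus its own cost.
    payments[e] = cost_without_e - (cost - edges[e])

print("path cost:", cost, " total VCG payment:", round(sum(payments.values()), 2))
```

With n = 100 and ε = 0.1 the selected path costs 100 and the losing path 110, yet the mechanism pays out 1100, roughly (1 + n·ε) times the path cost, which is the Θ(n) overpayment the abstract refers to.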
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/324/3859999.html","timestamp":"2014-04-19T14:38:08Z","content_type":null,"content_length":"8612","record_id":"<urn:uuid:9fa61468-05de-4c3d-b131-bbf65aa27d83>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Tough Antiderivative

$\int \frac{10x\cos x-10\sin x}{x^{2}}dx$ How would you go about finding this antiderivative? Thanks, Keith

So the quotient rule says ${\left(\frac{u}{v}\right)}'=\frac{v{u}'-u{v}'}{v^{2}}$. So do I just integrate both sides of that with respect to x?

Oh, I see now. So $u=\sin x$, ${u}'=\cos x$ and $v=x$, ${v}'=1$. So it'll end up being $\int \frac{10x\cos x-10\sin x}{x^2}dx=\frac{10\sin x}{x}+C$?

No. The function which you want to integrate is just the derivative of a function. Find this function. Use the quotient rule to find it. When you want to integrate a function and you know that it is the derivative of some other function, then the integral is easy: $\int f'(x)\, dx = \int \frac{d}{dx}\left(f(x)\right)dx=f(x)+C$. As an example: $\int x\, dx = \int \frac{d}{dx} \left(\frac{1}{2}x^2\right)dx=\frac{1}{2}x^2+C$. The function which you want to integrate is just the derivative of another function. Try to find it. I hope you understood what I want to say.

I understand. haha it just came to me. I get it. I edited in the answer. But also what I was trying to get at is trying to get an "inverse quotient rule" that can apply to the quotient rule much as integration by parts applies to the product rule. So ${\left(\frac{u}{v}\right)}'=\frac{v{u}'-u{v}'}{v^2}$, i.e. $\frac{u}{v}=\int\frac{du}{v}-\int\frac{u\,dv}{v^2}$. Or would it be better just to try and recognize the quotient rule? Is there a rule like this? I have not learned it but I have learned substitution and integration by parts.

We just integrate by parts: $\begin{aligned} \int{\frac{x\cos x-\sin x}{x^{2}}\,dx}&=\int{\left( -\frac{1}{x} \right)'(x\cos x-\sin x)\,dx} \\ & =\frac{\sin x-x\cos x}{x}+\cos (x)+k \\ & =\frac{\sin x}{x}+k. \end{aligned}$ There are lots of problems involving nasty integrands, but the great key here is to know when to use a good integration by parts. This is not the most important case, of course.
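A quick way to confirm the closed form found in the thread is to differentiate it symbolically. A small check with SymPy (assuming it is installed; the variable names are mine):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (10 * x * sp.cos(x) - 10 * sp.sin(x)) / x**2
candidate = 10 * sp.sin(x) / x          # the antiderivative proposed in the thread

# The derivative of the candidate should reproduce the integrand exactly.
print(sp.simplify(sp.diff(candidate, x) - integrand))   # expect 0

# SymPy's own integrator can also be tried directly.
print(sp.integrate(integrand, x))   # should equal 10*sin(x)/x, possibly after simplification
```

The first check is the decisive one: if the difference simplifies to 0, then 10·sin(x)/x + C is indeed the antiderivative, matching the integration-by-parts calculation above.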
{"url":"http://mathhelpforum.com/calculus/128185-tough-antiderivative.html","timestamp":"2014-04-21T13:11:26Z","content_type":null,"content_length":"73417","record_id":"<urn:uuid:7fa79235-fdaf-400b-8944-d528fa934c8c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Essential Mathematics for Middle School Teachers

Essential Mathematics for Middle School Teachers (EMMST) is a project in the Department of Mathematics and Statistics at James Madison University supported by a grant from the National Science Foundation Course, Curriculum, and Laboratory Improvement (CCLI) program. EMMST supports the new Interdisciplinary Liberal Studies (IDLS) major at JMU. The IDLS major is recommended for students interested in early or middle grades education. The IDLS major requires a collection of lower-division courses along with two upper-level concentrations. These courses are also a portion of a program at JMU leading to an Algebra I add-on endorsement. For more information, contact David Carothers at JMU.

EMMST Development Teams:
MATH 207 Mathematical Problem Solving (initial offering: Spring 2002): Jeanne Fitzgerald and Judy Kidd, JMU; Ginger Carrico, Montevideo Middle School
MATH 304 Principles of Algebra (initial offering: Spring 2001): Carter Lyons, JMU; Kathy Buracker, Pence Middle School
MATH 305 Principles of Geometry & Measurement (initial offering: Fall 2001): Robert Hanson, JMU; Bruce Hemp, Fort Defiance HS (formerly of S. Gordon Stewart Middle School)
MATH 306 Principles of Analysis (initial offering: Fall 2002): David Carothers, JMU; Bruce Hemp, Fort Defiance HS (formerly of S. Gordon Stewart Middle School)
MATH 307 Principles of Probability and Statistics (initial offering: Spring 2002): Steve Garren and Mike Deaton, JMU; Virginia Healy, Thomas Harrison Middle School
Consultant for all teams: Lou Ann Lovin, JMU School of Education

Middle School Mathematics Specialist Program, Illinois State University: The ISU Middle School Program is a comprehensive major program for prospective middle school teachers in a department with a large mathematics-education emphasis. JMU seeks to provide a model for adapting to somewhat smaller programs in which mathematics credits are more limited.

Sample of Other Resources:
Mathematics for Middle School Mathematics Teachers, Portland State University
Middle School Mathematics Concentration, East Carolina University
Teacher Preparation Archives, Case Studies of NSF-Funded Middle School Science and Mathematics Teacher Preparation Projects, CIRCE, College of Education, University of Illinois, 1993
Mathematics Teaching in the Middle School, a journal of the National Council of Teachers of Mathematics
Conference Board of the Mathematical Sciences report on the Mathematical Education of Teachers

EMMST seeks to provide a program meeting teacher preparation content standards of the National Council of Teachers of Mathematics, MAA recommendations, and recommendations of the Conference Board of the Mathematical Sciences Mathematical Education of Teachers report for grades 5-8 in a context emphasizing effective teaching. Data compiled by the Virginia Mathematics and Science Coalition indicate that over the next decade the state will annually require 150 new mathematics/science teachers in the middle schools, but that as of 1998 Virginia colleges and universities were annually preparing only 15 new mathematics/science teachers per year.

This material is based upon work supported by the National Science Foundation under Grant No. 9952799. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
{"url":"http://educ.jmu.edu/~carothdc/middle/emmst.html","timestamp":"2014-04-17T13:32:49Z","content_type":null,"content_length":"15634","record_id":"<urn:uuid:14fa309b-0d81-43da-b0ec-8a9420d28972>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
We built a patio to the one side of the house that allows us to watch the birds feed, and some of these pictures I have shown on this site. We found that quite a variety were coming as long. While the birds still do come, the squirrel population is doing well, as they come to these feeders. It seems to attract our friend Smokey too. This is a black bear, and you wouldn't know it. My wife shot this photo, as I was talking to her from work. The cub was apparently interested on what was inside the house. Good thing the patio doors were not left open like we usually do. The interesting thing was the bear was actually coming to the patio window to have a look inside, and at one point actually stood on the house with it's front paws while leaning against it. This is the little bears second season. Not fifteen minutes before this little one showed up, my wife seen a Mother bear and her cub. It was obvious that Smoky was causing some distraction for the mother as the cub was chased up a tree. While string theory does not, at this point, predict our world, it can at the very least plausibly encompass it. No other theory has been shown to do that. Aaron Bergman's book review of Peter I think there is always an upper limit with which we can assign "our beliefs" and when given a set of tools with which to assess our current situations in science, we learn that what was once "inconceivable" can now be believable. So having assumed "this set of tools and the analysis's of the beauty" and it's allure, one can move forward, as I have, based on these premises. To take it into the world we know and operate in. Indeed, how fragile a "house of Glass." While "this view" of myself is inclined to a metaphysical point of view, and less then adequate to the valuations of science, I can be thought of, "as less then," and sent to the exclusions of the evaluation that science demands. This does not change "my philosophical point of view." We know what science thinks of this too.:) The Inconceivable This enlightenment experience is a realization about the nature of the mind which entails recognizing it (in a direct, experiential way) as liminocentrically organized. The overall structure is paradoxical, and so the articulation of this realization will 'transcend' logic - insofar as logic itself is based on the presumption that nested sets are not permitted to loop back on themselves in a non-heirarchical manner. 11 I have over my time researching the process here in science, learn to see the scientists in one form or another, equate themself according to the "peak realization and beyond" as something either God like, or, the allure of the "not believable." To me such a "systemic behaviour" can cause outward afflictions to the associates in science. These are less then wanting in regards to characterization. A religiosity's appeal to the beyond, whether atheistic as a position or not. This must be perceived as either being realistic, or felt wanting for, as an adherence to the ethics and morality of science. The Believable Has become something more then the topic of string theory itself. While we think about the issue in regard to science's pursuance, this has been deterred by other issues, as I relate them in regards too, "the Conceivable and the not Believable." 
The Inconceivable being Believable Anomaly and the Emergence of Scientific Discoveries[/b] Kuhn now moves past his initial topic of paradigm to scientific discovery saying that in order for there to be a discovery, an anomaly must be detected within the field of study. He discusses several different studies and points out the anomaly that invoked the scientific discovery. Later in the chapter he begins to discuss how the anomaly can be incorporated into the discovery to satisfy the scientific community. There are three different characteristics of all discoveries from which new sorts of phenomena emerge. These three characteristics are proven through an experiment dealing with a deck of cards. The deck consisted of anomalous cards (e.g. the red six of spades shown on the previous page) mixed in with regular cards. These cards were held up in front of students who were asked to call out the card they saw, and in most cases the anomaly was not detected. I of course weight the relations and counterpoints held by Steven Weinberg in this case. I would rather think about the "essence of observation here" rather then the foundational ideas exemplified by the whole issue of paradigm change. The Revolution that Didn't Happen by Steven Weinberg I first read Thomas Kuhn's famous book The Structure of Scientific Revolutions1 a quarter-century ago, soon after the publication of the second edition. I had known Kuhn only slightly when we had been together on the faculty at Berkeley in the early 1960s, but I came to like and admire him later, when he came to MIT. His book I found exciting. Evidently others felt the same. Structure has had a wider influence than any other book on the history of science. Soon after Kuhn's death in 1996, the sociologist Clifford Geertz remarked that Kuhn's book had "opened the door to the eruption of the sociology of knowledge" into the study of the sciences. Kuhn's ideas have been invoked again and again in the recent conflict over the relation of science and culture known as the science wars. So we come to the real topic here. How one opens the door to what is considered "beyond." I will only point to the previous persons who have allowed themself the freedoms to move from a position of the inconceivable, who have worked the process in science, and come up with an idealization of what they have discovered. What it means to them now, as they assume this new "paradigm change," to the way the work had always seemed to them. I will point to the "airs with which such transitions" take place that the environment is conducive to such journeys, that the place selected, could be the most idealistic in terms of where one may feel that their creativity is most aptly felt to themselves. Of course such issues as to the temperances of such creativity is always on my mind too, yet it is of essence that life be taken care of, and that such nurturing understand that the best of society is always the luxuries with which we can assign happiness? Then these in society become the grandeur of art and culture to become, the freedoms of expression, while there is always this struggle to survive. What good is a universe without somebody around to look at it? Robert Dicke This summer, CERN gave the starting signal for the long-distance neutrino race to Italy. The CNGS facility (CERN Neutrinos to Gran Sasso), embedded in the laboratory's accelerator complex, produced its first neutrino beam. 
For the first time, billions of neutrinos were sent through the Earth's crust to the Gran Sasso laboratory, 732 kilometres away in Italy, a journey at almost the speed of light which they completed in less than 2.5 milliseconds. The OPERA experiment at the Gran Sasso laboratory was then commissioned, recording the first neutrino tracks. Now of course most of you know the namesake with which I use to explain, is an aspect of the development of what are "shadows" to many of us, also, reveal a direction with which we know is "illuminated." We are streaming with the "decay path" all the while there is a sun behind us that shines. Now it is always an interesting thing for me to know that secret rooms can be illuminated, given the right piece of equipment to do the job. Somethings that will stop the process, and others, that go on to give indications of which these "massless particles" can travel. But no where is the penetration of the pyramidal model more apparent to me, is when it is used to explain the "rise of the colour theory" used on this site, to explain the nature of emotive sufferings, and it's ascensions, with which we can place the "colour of gravity" to it's rightful place. While one can discern the patterns in an ancient philosophical game of chance, what use to explain the underlying structure of abstraction, as we peer into the materiality of the object of this post? Do you know it's inherent geometrical nature, as an expression? Maybe, this is the Plato in me? Not a criminal "who hides" having perpetrated crimes against humanity, spouting a philosophy that some would pretend hides behind "the garb" of some "quantum Yes, no where is this measurable in nature at this time, other then to know that a philosophical position is being adopted. It may allow one to understand the brain's workings, alongside of the fluids that emotively run through our bodies. The "eventual" brain development toward it's evolutionary discourse, with the matter distinctions becoming apparent in the brain's structure, may be greatly enhanced in our futures? This is what is progressive to me about the work of Kip Thorne and Archibald Wheeler, as we look at the experimental processes of gravitational waves and the like, in LIGO. Is this proof of the gravitational waves? Is this proof of the Geon denoted by Wheeler to express, or the bulk, teaming with the gravitons? An event in the cosmos, allows us, while standing in the decay path of the expression, and as we turn with it, to know that a source can initiate, and allows us to see it's disintegration. WE are concerned with all the matter distinctions, while beyond this, is the expression of these schematically drawn rooms of energy, as we particularize them into neat boxes(things) for our entangled views, and loss of sight? To me, such a sun exists at our centres and such analogies, as I have drawn them here is to recognize that such a "heliocentric view" is not the idea behind our observations of the ego distinctions about self in the world, but a recognition of our connection to what pervades all of us, and connects us. Now this path streams onwards, no different then in the way we move into the materiality of the world we live in. While of course you see the bodies of our expression. You see the "emotive functionings" on our faces, primitive as it can be, as well as, the intellectual abstraction that is part of the inherent pattern of that expression into materiality. The "sun still shines" from that deeper place inside. 
Secrets of the PyramidsIn a boon for archaeology, particle physicists plan to probe ancient structures for tombs and other hidden chambers. The key to the technology is the muon, a cousin of the electron that rains harmlessly from the sky. I am Lost/Not Lost While the descent into the matters, one tends to loose sight of what is happening around them. Such a thing is the human part of us, as we think we are in the moment. While one may think they are in this "way station" it is ever the spot that we assign ourselves with or selections and happenings that we are connected too, in ways that are never understood, or looked for, as we progress these views about the reality we live in? How much farther is our eyesight granted into the materiality of things as we progress ever deeper into nature's structure, to think, this will bring us ever closer to that sun that shines inside? Lost souls were given directions in the manuals of the ancients to decipher this relationship with the world we live in, so that the understanding about perplexing paradigms that ensue the mind, may be set, "to live life" not to experience it's death. But to prepare that life beyond the limitations with which we assign our perception according to these material things. This is not to bring "the doom and gloom of micro blackhole creation" into the picture although I do see that the QGP arrived at can bring other perspectives forward, that would relegate questions to my mind. For instance. So to be clear then, the QGP is relativistic. This I understood already. This to me was an indication of string theories work to bring a GUT to the process. Of course I speculate. I am also speculating on the "loss of energy" in the collider process. MIT physicists create new form of matter by Lori Valigra, Special to MIT News Office June 22, 2005 "In superfluids, as well as in superconductors, particles move in lockstep. They form one big quantum-mechanical wave," explained Ketterle. Such a movement allows superconductors to carry electrical currents without resistance. To cool it, brings the "same process," as to the condition extended to the QGP? This is the point I am trying to make. If they are aligned? Now the quote above was addressed for clarification, and was caught by a spam filter. So the answer may or may not be forth coming. As a common folk, I am asking the question from one of ignorance, and would of course like an answer . It is not my wish to "propagate the untruthfulness" that any good scientist would wish to find deteriorates the quality of our current scientific endeavours as a society. String Theorists, for a million bucks, do you think you can answer "the question" and it's applicability? Now it should be clear here that while I speak of extra dimensions I am referring to that energy that is not accountable, "after the collision process and particle identifications have been For the first time the LHC reaches temperatures colder than outer space Geneva, 10 April 2007. The first sector of CERN1's Large Hadron Collider (LHC) to be cooled down has reached a temperature of 1.9 K (–271°C), colder than deep outer space! Although just one-eighth of the LHC ring, this sector is the world’s largest superconducting installation. The entire 27–kilometre LHC ring needs to be cooled down to this temperature in order for the superconducting magnets that guide and focus the proton beams to remain in a superconductive state. 
Such a state allows the current to flow without resistance, creating a dense, powerful magnetic field in relatively small magnets. Guiding the two proton beams as they travel nearly the speed of light, curving around the accelerator ring and focusing them at the collision points is no easy task. A total of 1650 main magnets need to be operated in a superconductive state, which presents a huge technical challenge. "This is the first major step in the technical validation of a full-scale portion of the LHC," explained LHC project leader Lyndon Evans. There are three parts to the cool down process, with many tests and intense checking in between. During the first phase, the sector is cooled down to 80 K, slightly above the temperature of liquid nitrogen. At this temperature the material will have seen 90% of the final thermal contraction, a 3 millimetre per metre shrinkage of steel structures. Each of the eight sectors is about 3.3 kilometres long, which means shrinkage of 9.9 metres! To deal with this amount of shrinkage, specific places have been designed to compensate for it, including expansion bellows for piping elements and cabling with some slack. Tests are done to make sure no hardware breaks as the machinery is cooled. The second phase brings the sector to 4.5 K using enormous refrigerators. Each sector has its own refrigerator and each of the main magnets is filled with liquid helium, the coolant of choice for the LHC because it is the only element to be in a liquid state at such a low temperature. The final phase requires a sophisticated pumping system to help bring the pressure down on the boiling Helium and cool the magnets to 1.9 K. To achieve a pressure of 15 millibars, the system uses both hydrodynamic centrifugal compressors operating at low temperature and positive-displacement compressors operating at room temperature. Cooling down to 1.9 K provides greater efficiency for the superconducting material and helium's cooling capacity. At this low temperature helium becomes superfluid, flowing with virtually no viscosity and allowing greater heat transfer capacity. “It's exciting because for more than ten years people have been designing, building and testing separately each part of this sector and now we have a chance to test it all together for the first time,” said Serge Claudet, head of the Cryogenic Operation Team. For more information and to see regular updates, see http://lhc.web.cern.ch/lhc/. The conditions are now established to allow testing of all magnets in this sector to their ultimate performance. I am not going to go into the relevance here but to describe how "I speculate" the "extra energy is lost" while delivering the expected results of the LHC microscope in it's efforts. This is based on the Navier–Stokes existence and smoothness that "may be" responsible for this loss. The understanding as I have come to see it is that the QGP by it's very nature is conclusively reached it total state, and that by reaching it, it brought in line, with the Superconductors relations. The principal here that a relativistic conditon is arrived at in the super fluid condition that I perceive is, in relation to the aspect of the Helium used to cool the LHC Navier-Stokes Equation Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. 
Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations. Although these equations were written down in the 19th Century, our understanding of them remains minimal. The challenge is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations. Take the Test here. * Your type formula according to Carl Jung and Isabel Myers-Briggs typology along with the strengths of the preferences * The description of your personality type * The list of occupations and educational institutions where you can get relevant degree or training, most suitable for your personality type - Jung Career Indicator™ About 4 Temperaments So you acquiescence to systemic methods in which to discern your "personality type." You wonder what basis this system sought to demonstrate, by showing the value of these types? So why not look? Which temperament do you belong too? Idealist Portrait of the Counselor (INFJ) Counselors have an exceptionally strong desire to contribute to the welfare of others, and find great personal fulfillment interacting with people, nurturing their personal development, guiding them to realize their human potential. Although they are happy working at jobs (such as writing) that require solitude and close attention, Counselors do quite well with individuals or groups of people, provided that the personal interactions are not superficial, and that they find some quiet, private time every now and then to recharge their batteries. Counselors are both kind and positive in their handling of others; they are great listeners and seem naturally interested in helping people with their personal problems. Not usually visible leaders, Counselors prefer to work intensely with those close to them, especially on a one-to-one basis, quietly exerting their influence behind the scenes. ounselors are scarce, little more than one percent of the population, and can be hard to get to know, since they tend not to share their innermost thoughts or their powerful emotional reactions except with their loved ones. They are highly private people, with an unusually rich, complicated inner life. Friends or colleagues who have known them for years may find sides emerging which come as a surprise. Not that Counselors are flighty or scattered; they value their integrity a great deal, but they have mysterious, intricately woven personalities which sometimes puzzle even Counselors tend to work effectively in organizations. They value staff harmony and make every effort to help an organization run smoothly and pleasantly. They understand and use human systems creatively, and are good at consulting and cooperating with others. As employees or employers, Counselors are concerned with people's feelings and are able to act as a barometer of the feelings within the organization. Blessed with vivid imaginations, Counselors are often seen as the most poetical of all the types, and in fact they use a lot of poetic imagery in their everyday language. Their great talent for language-both written and spoken-is usually directed toward communicating with people in a personalized way. Counselors are highly intuitive and can recognize another's emotions or intentions - good or evil - even before that person is aware of them. Counselors themselves can seldom tell how they came to read others' feelings so keenly. 
This extreme sensitivity to others could very well be the basis of the Counselor's remarkable ability to experience a whole array of psychic phenomena. When you "discover a symbol" as indicated in the wholeness definition presented below, you get to understand how far back we can go in our discoveries. While I talk of Mandalas, I do for a reason. While I talk of the inherent nature of "this pattern" at the very essence of one's being, this then lead me to consider the mathematical relations and geometries that become descriptive of what we may find in nature with regards to the geometric inclinations to a beginning to our universe? How nice? Wholeness. A state in which consciousness and the unconscious work together in harmony. (See also self.) Although "wholeness" seems at first sight to be nothing but an abstract idea (like anima and animus), it is nevertheless empirical in so far as it is anticipated by the psyche in the form of spontaneous or autonomous symbols. These are the quaternity or mandala symbols, which occur not only in the dreams of modern people who have never heard of them, but are widely disseminated in the historical records of many peoples and many epochs. Their significance as symbols of unity and totality is amply confirmed by history as well as by empirical psychology.[The Self," ibid., par. 59.] See:Expressions of Compartmentalization This of course was first introduced to me by the show called a Beautiful Mind. It is about the story of John Nash It is true such mathematics could seem cold and austere. Realizing the complexity of emotive and intellectual pursuances, on how such a gathering can be conducive to propelling society forward with the idealizations developed by looking within self. Something inherent, "as a pattern" within our nature? So self discovery and journaling become a useful tool, when all of life's events can be "different from day today." Emotive reactive mental changes, arising from some inherent understanding as a constituent of that group? Intellectual mathematical embracing to new societal futures? Knowledge. To become aware.I thought it better to remove from the comment lineup at Bee's. Backreaction: Openness in Science posting is linked here to show dynamical behaviour that has a basis with which to consider. The fundamental constituent of each individual by contribution can change the whole dynamics of society "by adding value" from the context of self, it's idealizations, which can become an operative function of that society as a whole. It was a "early recognition" for me as my pursuance to understand "mathematical relations" which can be drawn at the basis of society, our being, and it's commutative organizational faculties. These of course helped me to recognize that not only psychological models can be drawn, but that these dynamics could have been expanded upon by such diagrams, to illustrate, the patterns inherent in our natures and conduct toward other people. Now without understanding the evolution of the philosophy which I had developed along side of my everyday thinking, what use to mention emotive or abstraction nature of the mind if it cannot find it's relations to the physiological functions of the human body and brain? Game Theory Game theory is the study of the ways in which strategic interactions among rational players produce outcomes with respect to the preferences (or utilities) of those players, none of which might have been intended by any of them. 
The meaning of this statement will not be clear to the non-expert until each of the italicized words and phrases has been explained and featured in some examples. Doing this will be the main business of this article. First, however, we provide some historical and philosophical context in order to motivate the reader for all of this technical work 6. Evolutionary Game Theory has recently felt justified in stating baldly that "game theory is a universal language for the unification of the behavioral sciences." This may seem an extraordinary thing to say, but it is entirely plausible. Binmore (1998, 2005a) has modeled social history as a series of convergences on increasingly efficient equilibria in commonly encountered transaction games, interrupted by episodes in which some people try to shift to new equilibria by moving off stable equilibrium paths, resulting in periodic catastrophes. (Stalin, for example, tried to shift his society to a set of equilibria in which people cared more about the future industrial, military and political power of their state than they cared about their own lives. He was not successful; however, his efforts certainly created a situation in which, for a few decades, many Soviet people attached far less importance to other people's lives than usual.) Furthermore, applications of game theory to behavioral topics extend well beyond the political arena. While I have always pushed to indicate the very idea that "mathematical organization" exists at the very fundamental levels of our being, this would not mean much to person in society who goes about their lives living the mundane. Without considerations of a larger context at play in society while ever recognizing the diversity that such probabilities such actions can take when groups of individuals gather together in this communicative relationship of chance and change. This Nobel Prize award was of interest to me. The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2007 "for having laid the foundations of mechanism design theory" Leonid Hurwicz Eric S.Maskin Roger B. Myerson I first started to come to the conclusion in regards to the "social construct" and the relationship it had to the mathematical environmental when I saw the movie, "The Beautiful Mind." It was based on the story of John Nash. A Theory is Born This science is unusual in the breadth of its potential applications. Unlike physics or chemistry, which have a clearly defined and narrow scope, the precepts of game theory are useful in a whole range of activities, from everyday social interactions and sports to business and economics, politics, law, diplomacy and war. Biologists have recognized that the Darwinian struggle for survival involves strategic interactions, and modern evolutionary theory has close links with game theory. Game theory got its start with the work of John von Neumann in the 1920s, which culminated in his book with Oskar Morgenstern. They studied "zero-sum" games where the interests of two players were strictly opposed. John Nash treated the more general and realistic case of a mixture of common interests and rivalry and any number of players. Other theorists, most notably Reinhard Selten and John Harsanyi who shared the 1994 Nobel Memorial Prize with Nash, studied even more complex games with sequences of moves, and games where one player has more information than others. 
It is important to keep present the work in science that is ongoing so one sees the consistency with which this process has been unfolding and is part of what awareness does not take in with our everyday life. How a simple mathematic formula is starting to explain the bizarre prevalence of altruism in society Why do humans cooperate in things as diverse as environment conservation or the creation of fairer societies, even when they don’t receive anything in exchange or, worst, they might even be penalized? Inside the Mathematical Universe Discrete mathematics, also called finite mathematics or decision mathematics, is the study of mathematical structures that are fundamentally discrete in the sense of not supporting or requiring the notion of continuity. Objects studied in finite mathematics are largely countable sets such as integers, finite graphs, and formal languages. Discrete mathematics has become popular in recent decades because of its applications to computer science. Concepts and notations from discrete mathematics are useful to study or describe objects or problems in computer algorithms and programming languages. In some mathematics curricula, finite mathematics courses cover discrete mathematical concepts for business, while discrete mathematics courses emphasize concepts for computer science majors. For me this becomes the question that is highlighted in bold as to such a thing as discrete mathematics being suited to the nature of the Quark Gluon Plasma that we would say indeed that "all the discreteness is lost" when the energy becomes to great? Systemically the process while measured in "computerization techniques" this process is one that I see entrenched at the PI Institute, as to holding "this principal" as to the nature of the PI's research status. Derek B. Leinweber's Visual QCD* Three quarks indicated by red, green and blue spheres (lower left) are localized by the gluon field. * A quark-antiquark pair created from the gluon field is illustrated by the green-antigreen (magenta) quark pair on the right. These quark pairs give rise to a meson cloud around the proton. * The masses of the quarks illustrated in this diagram account for only 3% of the proton mass. The gluon field is responsible for the remaining 97% of the proton's mass and is the origin of mass in most everything around us. * Experimentalists probe the structure of the proton by scattering electrons (white line) off quarks which interact by exchanging a quantum of light (wavy line) known as a photon. Now indeed for me, thinking in relation to the 13th Sphere I would have to ask how and when we loose focus on that discreteness)a particle or a wave?). I now ask that what indeed is the fluidity of the Gluon plasma that we see we have lost the "discrete geometries" to the subject of "continuity?" So the question for me then is that if such a case presents itself in these new theoretical definitions, as pointed out in E8, how are we ever to know that such a kaleidescope will have lost it's distinctive lines? This would require a change in "math type" that we present such changes to consider the topologies in expression(this fluidity and continuity), in relation to how we see the QCD in developmental In a metric space, it is equivalent to consider the neighbourhood system of open balls centered at x and f(x) instead of all neighborhoods. 
This leads to the standard ε-δ definition of a continuous function from real analysis, which says roughly that a function is continuous if all points close to x map to points close to f(x). This only really makes sense in a metric space, however, which has a notion of distance. Note, however, that if the target space is Hausdorff, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is I suppose you are two fathoms deep in mathematics, and if you are, then God help you, for so am I, only with this difference, I stick fast in the mud at the bottom and there I shall remain. -Charles Darwin How nice that one would think that, "like Aristotle" Darwin held to what "nature holds around us," that we say that Darwin is indeed grounded. But, that is a whole lot of water to contend with, while the ascent to land becomes the species that can contend with it's emotive stability, and moves the intellect to the open air. One's evolution is hard to understand in this context, and maybe hard for those to understand the math constructs in dialect that arises from such mud. For me this journey has a blazon image on my mind. I would not say I am a extremely religious type, yet to see the image of a man who steps outside the boat of the troubled apostles, I think this lesson all to well for me in my continued journey on this earth to become better at what is ancient in it's descriptions, while looking at the schematics of our arrangements. How far back we trace the idea behind such a problem and Kepler Conjecture is speaking about cannon balls. Tom Hales writes,"Nearly four hundred years ago, Kepler asserted that no packing of congruent spheres can have a density greater than the density of the face-centered cubic packing." Kissing number problem In three dimensions the answer is not so clear. It is easy to arrange 12 spheres so that each touches a central sphere, but there is a lot of space left over, and it is not obvious that there is no way to pack in a 13th sphere. (In fact, there is so much extra space that any two of the 12 outer spheres can exchange places through a continuous movement without any of the outer spheres losing contact with the center one.) This was the subject of a famous disagreement between mathematicians Isaac Newton and David Gregory. Newton thought that the limit was 12, and Gregory that a 13th could fit. The question was not resolved until 1874; Newton was correct.[1] In four dimensions, it was known for some time that the answer is either 24 or 25. It is easy to produce a packing of 24 spheres around a central sphere (one can place the spheres at the vertices of a suitably scaled 24-cell centered at the origin). As in the three-dimensional case, there is a lot of space left over—even more, in fact, than for n = 3—so the situation was even less clear. Finally, in 2003, Oleg Musin proved the kissing number for n = 4 to be 24, using a subtle trick.[2] The kissing number in n dimensions is unknown for n > 4, except for n = 8 (240), and n = 24 (196,560).[3][4] The results in these dimensions stem from the existence of highly symmetrical lattices: the E8 lattice and the Leech lattice. In fact, the only way to arrange spheres in these dimensions with the above kissing numbers is to center them at the minimal vectors in these lattices. There is no space whatsoever for any additional balls. 
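For reference, the face-centred cubic density that the Kepler conjecture (and Hales's proof) concerns has the closed form pi/sqrt(18). The two-line check below reproduces the roughly 74% figure quoted for the grocer's stack of oranges further on; it is simply a numerical illustration.

```python
import math

# Density of the face-centred cubic (equivalently, hexagonal close) packing:
# pi / (3 * sqrt(2)) = pi / sqrt(18) ~ 0.7405, the "74% of the volume" figure.
fcc_density = math.pi / math.sqrt(18)
print(f"FCC packing density ~ {fcc_density:.4f}")
```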
So what is the glue that binds all these spheres in in the complexities that they are arrange in the dimensions and all that we shall have describe gravity along with the very nature of the particle that describe the reality and makeup that we have been dissecting with the collision process? As with good teachers, and "exceptional ideas" they are those who gather, as if an Einstein crosses the room, and for those well equipped, we like to know what this energy is. What is it that describes the nature of such arrangements, that we look to what energy and mass has to say about it's very makeup and relations. A crystal in it's molecular arrangement? Look's like grapefruit to me, and not oranges?:) Symmetry's physical dimension by Stephen Maxfield Each orange (sphere) in the first layer of such a stack is surrounded by six others to form a hexagonal, honeycomb lattice, while the second layer is built by placing the spheres above the “hollows” in the first layer. The third layer can be placed either directly above the first (producing a hexagonal close-packed lattice structure) or offset by one hollow (producing a face-centred cubic lattice). In both cases, 74% of the total volume of the stack is filled — and Hales showed that this density cannot be bettered..... In the optimal packing arrangement, each sphere is touched by 12 others positioned around it. Newton suspected that this “kissing number” of 12 is the maximum possible in 3D, yet it was not until 1874 that mathematicians proved him right. This is because such a proof must take into account all possible arrangements of spheres, not just regular ones, and for centuries people thought that the extra space or “slop” in the 3D arrangement might allow a 13th sphere to be squeezed in. For similar reasons, Hales’ proof of greengrocers’ everyday experience is so complex that even now the referees are only 99% sure that it is correct.... Each sphere in the E8 lattice is surrounded by 240 others in a tight, slop-free arrangement — solving both the optimal-packing and kissing-number problems in 8D. Moreover, the centres of the spheres mark the vertices of an 8D solid called the E8 or “Gosset” polytope, which is named after the British mathematician Thorold Gosset who discovered it in 1900. Coxeter–Dynkin diagram The following article is indeed abstract to me in it's visualizations, just as the kaleidescope is. The expression of anyone of those spheres(an idea is related) in how information is distributed and aligned. At some point in the generation of this new idea we have succeeded in in a desired result, and some would have "this element of nature" explained as some result in the LHC? A while ago I related Mendeleev's table of elements, as an association, and thought what better way to describe this new theory by implementing "new elements" never seen before, to an acceptance of the new 22 new particles to be described in a new process? There is an "inherent curve" that arises out of Riemann's primes, that might look like a "fingerprint" to some. Shall we relate "the sieves" to such spaces? At some point, "this information" becomes an example of a "higher form "realized by it's very constituents and acceptance, "as a result." Math Will Rock Your World by Neal Goldman By the time you're reading these words, this very article will exist as a line in Goldman's polytope. And that raises a fundamental question: If long articles full of twists and turns can be reduced to a mathematical essence, what's next? Our businesses -- and, yes, ourselves. 
Intuition and Logic in Mathematics by Henri Poincaré On the other hand, look at Professor Klein: he is studying one of the most abstract questions of the theory of functions to determine whether on a given Riemann surface there always exists a function admitting of given singularities. What does the celebrated German geometer do? He replaces his Riemann surface by a metallic surface whose electric conductivity varies according to certain laws. He connects two of its points with the two poles of a battery. The current, says he, must pass, and the distribution of this current on the surface will define a function whose singularities will be precisely those called for by the enunciation. It is necessary to see the stance Poincaré had in relation to Klein, and to see how this is being played out today. While I write here I see where such thinking moved from a fifth dimensional perspective, has been taken down "to two" as well as" the thinking about this metal sheet. So I expound on the virtues of what Poincaré saw, versus what Klein himself was extrapolating according to Poincaré's views. We have results today in our theoretics that can be describe in relation. This did not take ten years of equitation's, but a single picture in relation. Graduating the Sphere Imagine indeed a inductive/deductive stance to the "evolution of this space" around us. That some Klein bottle "may be" the turning of the "inside out" of our abstractness, to see nature is endowably attached to the inside, that we may fine it hard to differentiate.I give a example of this In a link below. While I demonstrate this division between the inner and outer it is with some hope that one will be able to deduce what value is place about the difficulties such a line may be drawn on this circle that we may say how difficult indeed to separate that division from what is inside is outside. What line is before what line. IN "liminocentric structure" such a topology change is pointed out. JohnG you may get the sense of this? All the time, a psychology is playing out, and what shall we assign these mental things when related to the sound? Related to the gravity of our situations? You see, if I demonstrate what exists within our mental framework, what value this if it cannot be seen on the very outskirts of our being. It "cannot be measured" in how the intellect is not only part of the "sphere of our influence" but intermingles with our reason and emotive conduct. It becomes part of the very functions of the abstractness of our world, is really set in the "analogies" that sound may of been proposed here. I attach colour "later on" in how Gravity links us to our world. I call it this for now, while we continued to "push for meaning" about the nature of space and time. Savas Dimopoulos Here’s an analogy to understand this: imagine that our universe is a two-dimensional pool table, which you look down on from the third spatial dimension. When the billiard balls collide on the table, they scatter into new trajectories across the surface. But we also hear the click of sound as they impact: that’s collision energy being radiated into a third dimension above and beyond the surface. In this picture, the billiard balls are like protons and neutrons, and the sound wave behaves like the graviton. While of course I highlighted an example of the geometer in their visual capabilities, it is with nature that such examples are highlighted. Such abstractness takes on "new meaning" and settles to home, the understanding of all that will ensue. 
The theorems of projective geometry are automatically valid theorems of Euclidean geometry. We say that topological geometry is more abstract than projective geometry which is turn is more abstract than Euclidean geometry. It is of course with this understanding that "all the geometries" following from one another, that we can say that such geometries are indeed progressive. This is how I see the move to "non-euclidean" that certain principals had to be endowed in mind to see that "curvatures" not only existed in the nature of space and time, that it also is revealed in the Gaussian abstracts of arcs and such. These are not just fixations untouched abstractors that we say they have no home in mind, yet are further expounded upon as we set to move your perceptions beyond just the paper and thought of the math alone in ones mind. Felix Klein on intuition It is my opinion that in teaching it is not only admissible, but absolutely necessary, to be less abstract at the start, to have constant regard to the applications, and to refer to the refinements only gradually as the student becomes able to understand them. This is, of course, nothing but a universal pedagogical principle to be observed in all mathematical instruction .... I am led to these remarks by the consciousness of growing danger in Germany of a separation between abstract mathematical science and its scientific and technical applications. Such separation can only be deplored, for it would necessarily be followed by shallowness on the side of the applied sciences, and by isolation on the part of pure mathematics .... Felix Christian Klein (April 25, 1849 – June 22, 1925) was a German mathematician, known for his work in group theory, function theory, non-Euclidean geometry, and on the connections between geometry and group theory. His 1872 Erlangen Program, classifying geometries by their underlying symmetry groups, was a hugely influential synthesis of much of the mathematics of the day. Inside Out No Royal Road to Geometry? In an ordinary 2-sphere, any loop can be continuously tightened to a point on the surface. Does this condition characterize the 2-sphere? The answer is yes, and it has been known for a long time. The Poincaré conjecture asks the same question for the 3-sphere, which is more difficult to visualize. On December 22, 2006, the journal Science honored Perelman's proof of the Poincaré conjecture as the scientific "Breakthrough of the Year," the first time this had been bestowed in the area of I have been following the Poincaré work under the heading of the Poincaré Conjecture. It would serve to point out any relation that would be mathematically inclined to deserve a philosophically jaunt into the "derivation of a mind in comparative views" that one might come to some conclusion about the nature of the world, that we would see it differences, and know that is arose from such philosophical debate. Poincaré, almost a hundred years ago, knew that a two dimensional sphere is essentially characterized by this property of simple connectivity, and asked the corresponding question for the three dimensional sphere (the set of points in four dimensional space at unit distance from the origin). This question turned out to be extraordinarily difficult, and mathematicians have been struggling with it ever since. 
Previous links in label index on right and relative associative posts point out the basis of the Poincaré Conjecture and it's consequent in developmental attempts to deduction about the nature of the world in an mathematical abstract sense? Jules Henri Poincare (1854-1912) The scientist does not study nature because it is useful. He studies it because he delights in it, and he delights in it because it is beautiful. Mathematics and Science:Last Essays 8 Last Essays But it is exactly because all things tend toward death that life is an exception which it is necessary to explain. Let rolling pebbles be left subject to chance on the side of a mountain, and they will all end by falling into the valley. If we find one of them at the foot, it will be a commonplace effect which will teach us nothing about the previous history of the pebble; we will not be able to know its original position on the mountain. But if, by accident, we find a stone near the summit, we can assert that it has always been there, since, if it had been on the slope, it would have rolled to the very bottom. And we will make this assertion with the greater certainty, the more exceptional the event is and the greater the chances were that the situation would not have occurred. How simple such a view that one would speak about the complexity of the world in it's relations. To know that any resting place on the mountain could have it's descendants resting in some place called such a valley? Stratification and Mind Maps Pascal's Triangle By which path, and left to some "Pascalian idea" about comparing some such mountains in abstraction to such a view, we are left to "numbered pathways" by such a design that we can call it "a resting" by nature selection of all probable pathways? Diagram 6. Khu Shijiei triangle, depth 8, 1303. The so called 'Pascal' triangle was known in China as early as 1261. In '1261 the triangle appears to a depth of six in Yang Hui and to a depth of eight in Zhu Shijiei (as in diagram 6) in 1303. Yang Hui attributes the triangle to Jia Xian, who lived in the eleventh century' (Stillwell, 1989, p136). They used it as we do, as a means of generating the binomial coefficients. It wasn't until the eleventh century that a method for solving quadratic and cubic equations was recorded, although they seemed to have existed since the first millennium. At this time Jia Xian 'generalised the square and cube root procedures to higher roots by using the array of numbers known today as the Pascal triangle and also extended and improved the method into one useable for solving polynomial equations of any degree' (Katz, 1993, p191.) Even the wisest of us does not realize what Boltzmann in his expressions would leave for us that such expression would leave to chance such pebbles in that valley for such considerations, that we might call this pebble, "some topological form," left to the preponderance for us in our descriptions to what nature shall reveal in those same valleys? The Topography of Energy Resting in the Valleys The theory of strings predicts that the universe might occupy one random "valley" out of a virtually infinite selection of valleys in a vast landscape of possibilities Most certainly it should be understood that the "valley and the pebble" are two separate things, and yet, can we not say that the pebble is an artifact of the energy in expression that eventually lies resting in one of the possible pathways to that energy at rest. The mountain, "as a stratification" exists. 
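Since the passage above leans on the Jia Xian/Pascal triangle as its picture of "numbered pathways" down the mountain, here is a minimal sketch of how the triangle generates the binomial coefficients it is credited with; the depth of 8 matches the Zhu Shijie diagram mentioned, and the code itself is only an illustration, not anything from the quoted sources.

```python
def pascal_rows(depth):
    """Yield successive rows of Pascal's triangle (the binomial coefficients)."""
    row = [1]
    for _ in range(depth):
        yield row
        # Each interior entry is the sum of the two entries above it.
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

for r in pascal_rows(8):  # depth 8, as in the Zhu Shijie triangle of 1303
    print(r)
```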
Here in mind then, such rooms are created. The ancients would have us believe in mind, that such "high mountain views do exist." Your "Olympus," or the "Fields of Elysium." Today, are these not to be considered in such a way? Such a view is part and parcel of our aspirate. The decomposable limits will be self evident in what shall rest in the valleys of our views? Such elevations are a closer to a decomposable limit of the energy in my views. The sun shall shine, and the matter will be describe in such a view. Here we have reverted to such a view that is closer to the understanding, that such particle disseminations are the pebbles, and that such expressions, have been pushed back our views on the nature of the cosmos. Regardless of what the LHC does not represent, or does, in minds with regards to the BIG Bang? The push back to micros perspective views, allow us to introduce examples of this analogy, as artifacts of our considerations, and these hold in my view, a description closer to the source of that energy in expression. To be bold here means to push on, in face of what the limitations imposed by such statements of Lee Smolin as a statement a book represents, and subsequent desires now taken by Hooft, in PI's Status of research and development. It means to continue in face of the Witten's tiring of abstraction of the landscape. It means to go past the "intellectual defeatism" expressed by a Woitian design held of that mathematical world. Research Interests My research concerns string theory. At present I am interested in finding an explicit expression for the n-loop superstring amplitude and proving that it is finite. My field of research is particle theory, more specifically string theory. I am also interested in the recent results of Seiberg and Witten in supersymmetric field theories. Current Projects My present research concerns the problem of topology changing in string theory. It is currently believed that one has to sum over all string backgrounds and all topologies in doing the functional integral. I suspect that certain singular string backgrounds may be equivalent to topology changes, and that it is consequently only necessary to sum over string backgrounds. As a start I am investigating topology changes in two-dimensional target spaces. I am also interested in Seiberg-Witten invariants. Although much has been learned, some basic questions remain, and I hope to be able at least to understand the simpler of these questions. Stanley Mandelstam (b. 1928, Johannesburg) is a South African-born theoretical physicist. He introduced the relativistically invariant Mandelstam variables into particle physics in 1958 as a convenient coordinate system for formulating his double dispersion relations. The double dispersion relations were a central tool in the bootstrap program which sought to formulate a consistent theory of infinitely many particle types of increasing spin. Mandelstam, along with Tullio Regge, was responsible for the Regge theory of strong interaction phenomenology. He reinterpreted the analytic growth rate of the scattering amplitude as a function of the cosine of the scattering angle as the power law for the falloff of scattering amplitudes at high energy. Along with the double dispersion relation, Regge theory allowed theorists to find sufficient analytic constraints on scattering amplitudes of bound states to formulate a theory in which there are infintely many particle types, none of which are fundamental. 
After Veneziano constructed the first tree-level scattering amplitude describing infinitely many particle types, what was recognized almost immediately as a string scattering amplitude, Mandelstam continued to make crucial contributions. He interpreted the Virasoro algebra discovered in consistency conditions as a geometrical symmetry of a world-sheet conformal field theory, formulating string theory in terms of two dimensional quantum field theory. He used the conformal invariance to calculate tree level string amplitudes on many worldsheet domains. Mandelstam was the first to explicitly construct the fermion scattering amplitudes in the Ramond and Neveu-Schwarz sectors of superstring theory, and later gave arguments for the finiteness of string perturbation theory. In quantum field theory, Mandelstam and independently Sidney Coleman extended work of Tony Skyrme to show that the two dimensional quantum Sine-Gordon model is equivalently described by a thirring model whose fermions are the kinks. He also demonstrated that the 4d N=4 supersymmetric gauge theory is power counting finite, proving that this theory is scale invariant to all orders of perturbation theory, the first example of a field theory where all the infinities in feynman diagrams cancel. Among his students at Berkeley are Joseph Polchinski and Charles Thorn. Education: Witwatersrand (BSc, 1952); Trinity College, Cambridge (BA, 1954); Birmingham University (PhD, 1956). Just wanted to say it has been quite busy here because of the work having come back from vacation and preparing for my daughter in law and son's twins, which are to arrive any day now. Intellectual defeatism This statement reminded me of the idea about what is left for some to ponder, while we rely on our instincts to peer into the unknown, and hopefully land in a place that is correlated somehow in our This again is being bold to me, because there are no rules here about what a schooling may provide for, what allows an individual the freedoms to explore great unknowns for them. For sure education then comes to check what these instincts have provided, and while being free to roam the world, sometimes it does find a "certain resonance" in what is out there. Is this then a sign of what intellectual defeatism is about? I want to give an example here about my perceptions about what sits in the valleys in terms of topological formations, that until now I had no way of knowing would become a suitable explanation for me, "about what is possible" even thought this represented a many possibility explanation in terms of outcomes. Scientists should be bold. They are expected to think out of the box, and to pursue their ideas until these either trickle down into a new stream, or dry out in the sand. Of course, not everybody can be a genuine “seer”: the progress of science requires few seers and many good soldiers who do the lower-level, dirty work. Even soldiers, however, are expected to put their own creativity in the process now and then -and that is why doing science is appealing even to us mortals. To Be Bold One possible way the Higgs boson might be produced at the Large Hadron Collider. "Observables of Quantum Gravity," is a strange title to me, since we are looking at perspectives that are, how would one say, limited? Where is such a focus located that we make talk of observables? 
Can such an abstraction be made then and used here, that we may call it, "mathematics of abstraction" and can arise from a "foundational basis" other then all the standard model distributed in particle attributes? Observables of Quantum Gravity at the LHC Sabine Hossenfelder Perimeter Institute, Ontario, Canada The search for a satisfying theory that unifies general relativity with quantum field theory is one of the major tasks for physicists in the 21st century. Within the last decade, the phenomenology of quantum gravity and string theory has been examined from various points of view, providing new perspectives and testable predictions. I will give a short introduction into these effective models which allow to extend the standard model and include the expected effects of the underlying fundamental theory. I will talk about models with extra dimensions, models with a minimal length scale and those with a deformation of Lorentz-invariance. The focus is on observable consequences, such as graviton and black hole production, black hole decays, and modifications of standard-model cross-sections. So while we have created the conditions for an experimental framework, is this what is happening in nature? We are simulating the cosmos in it's interactions, so how is it that we can bring the cosmos down to earth? How is it that we can bring the cosmos down to the level of mind in it's abstractions that we do not just call it a flight of fancy, but of one that arises in mind based on the very foundations on the formation of this universe? Robert Harris's painting of the Fathers of Confederation. The scene is an amalgamation of the Charlottetown and Quebec City conference sites and attendees. Colonial organization All the colonies which would become involved in Canadian Confederation in 1867 were initially part of New France and were ruled by France. The British Empire’s first acquisition in what would become Canada was Acadia, acquired by the 1713 Treaty of Utrecht (though the Acadian population retained loyalty to New France, and was eventually expelled by the British in the 1755 Great Upheaval). The British renamed Acadia Nova Scotia. The rest of New France was acquired by the British Empire by the Treaty of Paris (1763), which ended the Seven Years' War. Most of New France became the Province of Quebec, while present-day New Brunswick was annexed to Nova Scotia. In 1769, present-day Prince Edward Island, which had been a part of Acadia, was renamed “St John’s Island” and organized as a separate colony (it was renamed PEI in 1798 in honour of Prince Edward, Duke of Kent and Strathearn). In the wake of the American Revolution, approximately 50,000 United Empire Loyalists fled to British North America. The Loyalists were unwelcome in Nova Scotia, so the British created the separate colony of New Brunswick for them in 1784. Most of the Loyalists settled in the Province of Quebec, which in 1791 was separated into a predominantly-English Upper Canada and a predominantly-French Lower Canada by the Constitutional Act of 1791. Canadian Territory at Confederation. Following the Rebellions of 1837, Lord Durham in his famous Report on the Affairs of British North America, recommended that Upper Canada and Lower Canada should be joined to form the Province of Canada and that the new province should have responsible government. As a result of Durham’s report, the British Parliament passed the Act of Union 1840, and the Province of Canada was formed in 1841. 
The new province was divided into two parts: Canada West (the former Upper Canada) and Canada East (the former Lower Canada). Ministerial responsibility was finally granted by Governor General Lord Elgin in 1848, first to Nova Scotia and then to Canada. In the following years, the British would extend responsible government to Prince Edward Island (1851), New Brunswick (1854), and Newfoundland (1855). The remainder of modern-day Canada was made up of Rupert's Land and the North-Western Territory (both of which were controlled by the Hudson's Bay Company and ceded to Canada in 1870) and the Arctic Islands, which were under direct British control and became part of Canada in 1880. The area which constitutes modern-day British Columbia was the separate Colony of British Columbia (formed in 1858, in an area where the Crown had previously granted a monopoly to the Hudson's Bay Company), with the Colony of Vancouver Island (formed 1849) constituting a separate crown colony until its absorption by the Colony of British Columbia in 1866. John A. Macdonald became the first prime minister of Canada. The shear number of people in the United States at approx. 200 million,can be an reminder of what "we", in the approx. same land mass of Canada can be compared to the United States. Our paltry 36 million "being overshadowed" might be better understood from that perspective. Happy Canada Day
{"url":"http://www.eskesthai.com/2008_07_01_archive.html","timestamp":"2014-04-18T21:10:20Z","content_type":null,"content_length":"328248","record_id":"<urn:uuid:7370fa3c-1f18-4af5-a06b-aba010c7c90d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
CHAPTER 2  RESILIENT MODULUS TESTING OF ASPHALT CONCRETE

INTRODUCTION

This chapter evaluates resilient modulus testing methodology and test details for asphalt concrete. The measurement of the resilient properties of asphalt concrete has been the subject of considerable research. Different testing devices and techniques have been used in these studies. All of these efforts have led the American Society for Testing and Materials to standardize the resilient modulus testing method of asphalt concrete (ASTM D 4123-82). However, as demonstrated in the "Workshop on Resilient Modulus Testing" held at Oregon State University in March 1989, there was strong consensus among pavement engineers that the ASTM D 4123 procedure is unnecessarily time-consuming and that the test results are difficult to reproduce. Recognizing the importance and existing problems of resilient modulus testing of asphalt concrete, the Strategic Highway Research Program (SHRP) has developed a resilient modulus test procedure for asphalt concrete (SHRP Protocol P07) as a part of the Long Term Pavement Performance monitoring (LTPP) program. This testing procedure incorporates recent findings on resilient modulus testing into the existing ASTM D 4123-82. A comparison between ASTM D 4123 and the November 1992 version of SHRP Protocol P07 is summarized in Table 1. An important overall objective of this study is to develop laboratory resilient modulus testing procedures suitable for use by a state transportation agency. To help achieve this goal, the emphasis of the study was placed on evaluating the effects on resilient modulus of laboratory testing details such as, for example, equipment calibration and testing conditions. Detailed laboratory studies were therefore carried out using different devices to evaluate the effect of laboratory test apparatus and testing details on resilient modulus test results. Based on the findings from the present study, a number of revisions are suggested to SHRP Protocol P07 (November, 1992).

METHODS FOR DETERMINATION OF MODULUS OF ASPHALT CONCRETE

The resilient modulus of asphalt concrete has in the past been determined by two approaches: (1) predict the resilient modulus from physical and mechanical properties of the mixture using available correlations, and (2) measure the resilient modulus by laboratory testing.

Empirical Predictive Methods

The most well-known predictive methods are the Marshall stability-flow ratio, the Shell Nomograph, and the Asphalt Institute predictive model. Nijboer [8] suggested the use of the Marshall stability-flow ratio as follows:

    S(60°C) = 1.6 (stability/flow)

where S is the modulus given in kilograms per square centimeter, stability in kilograms, and flow in millimeters.

[Table 1, comparing ASTM D 4123-82 with SHRP Protocol P07 (November 1992), appears here in the original; its OCR text is not recoverable.]
This relationship was recommended for use in high temperature ranges by Heukelom and Klomp [9]. McLeod [10] modified this equation using the English units:

    Modulus = 40 (stability/flow)

where modulus is given in pounds per square inch, stability in pounds, and flow in inches.

Shell Nomograph. The Shell Nomograph was originally developed by Van der Poel [11]. He defined the stiffness as a modulus which is a function of temperature and loading time. Later Heukelom and Klomp [9] developed a relationship between the bitumen stiffness and the mixture stiffness based on the volume concentration of aggregates. After McLeod [10] modified the nomograph by changing the entry temperature criterion, Claessen et al. [12] finally produced a pair of nomographs used in the current Shell design manual.

Asphalt Institute Method. The Asphalt Institute resilient modulus method was originally developed by Kallas and Shook [13] using cyclic triaxial test results. Their equation was refined by Witczak [14] from an expanded data base which relates the dynamic modulus of asphalt concrete to the percentage passing the No. 200 sieve, loading frequency, volume of voids, viscosity of asphalt cement at 70°F, temperature, and percentage of asphalt cement by weight of mix. Since this data base was based on mixtures of crushed stone and gravel, Miller et al. [15] modified the equation for a broader range of material types. The final form of the equation by Miller et al. [15] is:

    log10 |E*| = C1 + C2 (Pac - Pac,opt + 4.0)^0.5

where
    |E*|        = dynamic modulus (10^5 psi)
    Pac         = percentage of asphalt cement by weight of mix
    Pac,opt     = optimum asphalt content
    C1          = 0.553833 + 0.028829 (P200/f^0.17033) - 0.03476 Vv + 0.070377 η(10^6, 70) + 0.931757/f^0.02774
    C2          = 0.000005 T^(1.3 + 0.49825 log10 f) - 0.00189 T^(1.3 + 0.49825 log10 f)/f^1.1
    P200        = percentage passing the No. 200 sieve
    f           = loading frequency (Hz)
    Vv          = volume of voids
    η(10^6, 70) = viscosity of asphalt cement at 70°F (megapoises)
    T           = temperature of pavement (°F)
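As a quick illustration of how the predictive equation is applied, the sketch below evaluates the Miller et al. form exactly as reconstructed above (the regression constants are copied from that listing, so any OCR uncertainty there carries through). The function name and the example mix values are illustrative assumptions, not part of the original chapter.

```python
import math

def dynamic_modulus_miller(p_ac, p_ac_opt, p200, f, v_v, eta_70f, temp_f):
    """Estimate |E*| (in units of 10^5 psi) from the Miller et al. form above.

    p_ac     : asphalt content, % by weight of mix
    p_ac_opt : optimum asphalt content, %
    p200     : % passing the No. 200 sieve
    f        : loading frequency, Hz
    v_v      : air-void content, %
    eta_70f  : asphalt viscosity at 70 F, megapoises
    temp_f   : pavement temperature, F
    """
    c1 = (0.553833
          + 0.028829 * p200 / f ** 0.17033
          - 0.03476 * v_v
          + 0.070377 * eta_70f
          + 0.931757 / f ** 0.02774)
    exponent = 1.3 + 0.49825 * math.log10(f)
    c2 = 0.000005 * temp_f ** exponent - 0.00189 * temp_f ** exponent / f ** 1.1
    log_e = c1 + c2 * (p_ac - p_ac_opt + 4.0) ** 0.5
    return 10 ** log_e  # |E*| in units of 10^5 psi


# Example: a dense-graded mix at 70 F and 1 Hz (illustrative numbers only).
if __name__ == "__main__":
    e_star = dynamic_modulus_miller(p_ac=5.0, p_ac_opt=5.0, p200=5.0,
                                    f=1.0, v_v=4.0, eta_70f=1.0, temp_f=70.0)
    print(f"|E*| ~ {e_star * 1e5:,.0f} psi")  # on the order of 400,000 psi
```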
The stiffness (modulus) characteristics of asphalt-bound materials can be considered not to be significantly influenced by stress state at moderate to low temperatures. However, at temperatures above 25°C the stress state, and therefore test configuration, have an influence on the stiffness characteristics of these materials. This influence becomes more pronounced as the binder becomes less stiff [16].

Anisotropic Behavior. The determination of the resilient modulus of asphalt concrete involves using various types of repeated load tests. The most commonly used tests are as follows:

1. Uniaxial tension test
2. Uniaxial compression test
3. Beam flexure (bending or rotating cantilever) test
4. Indirect diametral tension test
5. Triaxial compression test

A pavement layer has cross anisotropy in which radial properties are constant in all directions but are different from properties in the vertical direction. Wallace and Monismith [17] have claimed that, for an adequate description of the resilient characteristics of such a material, the following five parameters are required:

1. Vertical strain due to an increment in vertical stress
2. Radial strain due to an increment in vertical stress
3. Radial strain due to an increment in radial stress
4. Vertical strain due to an increase in radial stress
5. Radial strain due to an increment in radial stress in a direction perpendicular to the strain

They reported that the triaxial test measures the first and sometimes the second parameter whereas the diametral test measures a composite of the third and fourth parameters with roughly equal weight being given to each parameter. Due to anisotropy of asphalt concrete, the resultant discrepancy in resilient modulus between diametral testing and triaxial testing can be quite pronounced. Wallace and Monismith [17] carried out tests on an asphaltic concrete core taken from the San Diego test road [18]. They showed that as a result of placement and compaction efforts, the material was about twice as stiff in the radial direction as in the vertical direction. An asphalt layer of typical thickness is subjected to a bending action which is primarily resisted by the radial rather than the vertical stiffness of the asphalt layer. Therefore, for vertical cores taken from the pavement, the diametral test or flexural bending test should give a more relevant assessment of the stiffness of the asphalt layer than tests performed in the vertical direction. Diametral test results are hence particularly attractive for evaluating radial tensile strain for a fatigue analysis. The diametral test has additional advantages since thin cores can be tested, which permits more measurements over the depth of thick asphalt layers.

Flexural Test. Early work to evaluate the resilient modulus of asphalt concrete was conducted by testing beam specimens under a third-point loading configuration. The flexural stiffness of beam specimens can be determined from the following equation:

   Es = P a (3L^2 - 4a^2) / (48 I δ)

where

   Es = flexural stiffness (psi)
   P  = repetitive load applied on the specimen (lb)
   a  = (1/3) L (in.)
   L  = reaction span length (in.)
   I  = moment of inertia of beam cross section (in.^4)
   δ  = measured deflection at the center of the beam specimen (in.)
A number of different flexural test procedures have been developed to study the resilient and fatigue characteristics of asphalt concrete mixtures including: FIexure tests in which the loads are applied repeatedly or sinusoidally under center-point or third-point load Rotating cantilever beams subjected to sinusoidal loads Trapezoidal cantilever beams subjected to sinusoidal loads or deformations The advantages of the flexure test are [191: (~) it is well known, widespread in use, and readily understood; (2) The basic technique measures a fundamental property that can be used for both mixture evaluation and design; (3) Results of controlled-stress testing can be used for the design of thick asphalt pavements whereas results of controlled-strain testing can be used for the design of thin asphalt pavements. The method, however, is costly, time consuming, and requires specialized equipment [191. Also, the stress state within the pavement structure is biaxial, whereas the state of stress is essentially uniaxial in the flexure test. Triaxial Test. Numerous advantages are inherent in using the cyclic or repeated load biaxial test memos. The stress system that acts upon a specimen during the biaxial test approaches the system of stresses that are present in the upper portion of Be asphalt concrete layer of a pavement during loading. Furthermore, the strength of asphalt concrete can be determined when specimens are tested to failure under a single loading. The chief objections to the use of this method are its cost and the relative complexity of Me necessary testing equipment. In addition, the size of specimens required for testing of coarse aggregate mixtures and number of specimens needed for a test series discourage the adoption of the memos for routine testing. The analysis of biaxial data for bituminous mixtures is often complicated by a curved envelope of failure for which there is no well defined or proven application [201. One big advantage of 16 OCR for page 10 ~ - biaxial testing is Mat stress levels and strains are generally much larger than for diametral testing so that greater testing accuracy can be achieved for stiff asphaltic materials. The influence in the biaxial test of secondary factors such as poor contact of deformation sensors, minor sample disturbance, etc. are less important than for the diametral test. Indirect Tensile Test. The indirect tensile test was developed simultaneously but independently in Brazil and in Japan [211. The test has been used to determine the tensile strength of Marshall-s~ze asphalt . · . . . . concrete specimens. lne testing system includes indirect tensile loading apparatus, deformation measurement devices and data recording system. The indirect tensile loading apparatus consists of upper and lower loading plates and upper and lower 0.5 in. wide loading strips having the same curvature as the specimen. Load is vertically applied to the sides of the specimen and maximum tensile stress plane develops along the vertical diameter. The indirect tension test simulates the state of stress in the lower position of the asphalt layer which is a tension zone [221. Schmidt [23] proposed the use of a repeated load indirect tension test (which is called the diametral test) to determine the resilient moduli of asphalt concrete specimens. Figure 6 shows that the values of the resilient moduli obtained from this test compare favorably with those obtained from the direct tension, biaxial compression and beam flexure tests. 
Baladi and Harichandran [24] conducted a comparative study of the following test methods:

1. Triaxial test (constant and repeated cyclic loads)
2. Cyclic flexural test
3. Marshall test
4. Indirect tension test (constant and variable cyclic loads)
5. Creep test

The results of this study indicated that:

1. The repeatability of test results is poor.
2. The material properties obtained from the different tests are substantially different.
3. The results from the indirect tension test were the most promising although they were not consistent.

The advantages of the indirect tensile test are summarized as follows [17, 21, 22, 25]:

1. The test is relatively simple and expedient to conduct.
2. The type of specimen and the equipment can be used for other testing.
3. Failure is not seriously affected by surface conditions.
4. Failure is initiated in a region of relatively uniform tensile stress.
5. The variation of test results is low compared to other test methods (refer to Figure 7).
6. A specimen can be tested across various diameters, and the results can be used to determine whether the sample is homogeneous and undisturbed.
7. The test can provide information on the tensile strength, Poisson's ratio, fatigue characteristics, and permanent deformation characteristics of asphalt concrete.

The main disadvantage of the test is its failure to completely simulate the stress conditions encountered in practice. As previously discussed, the diametral test does reasonably well simulate the tensile stress condition existing in the bottom of the asphalt concrete layer. The American Society for Testing and Materials has adopted the repetitive indirect tensile test as a standardized method of measuring the resilient modulus of asphalt concrete (ASTM D 4123-82).

Figure 6. Comparison of resilient moduli of AC specimens using direct tension, compression, flexural, and diametral methods (after Ref. [22]): (a) direct tension, compression, and diametral methods; (b) flexural and diametral methods.

Figure 7. Comparison of test results between the unconfined compression and indirect tension tests (after Ref. [23]).

Control Mode. Two basic types of loading have been used in laboratory tests: controlled-strain and controlled-stress.
Repetitive load is applied to produce a constant amplitude of repeated deformation or strain. Asphalt concrete in thin pavements (surface layer thickness of less than 3 in.) is considered to be in a controlled-strain condition. In controlled-stress tests, a constant amplitude of load is applied. The controlled-stress test simulates Me asphalt concrete in thick pavements (surface layer thickness greater than 6 ink. A comparative evaluation of controlled-stress and controlled-strain tests is presented in Table 2 [191. If Me testing procedure measures a real material property, the resilient moduli from the controlled- stress mode and controll~-strain mode must be the same because the sample does not know whether it is under the controlled-stress mode or controlled-strain mode. Bow Me controlled-stress and controlled-strain modes have been used in flexural beam and uniaxial tests. Only the controlled-stress mode has been applied to the indirect tensile test. The reason why the controlled-strain mode has not been used in the indirect tensile test is because the mechanism of forcing the deformation (either horizontal or vertical) back to the original position is not available. One can glue the upper loading strip to the specimen in order to control the vertical strain. However, this mechanism will develop a plane of maximum tensile stress along Be horizontal diameter when the loading head moves upward, which violates the theory behind the indirect tensile test. DIAMETRAL RESILIENT MODULUS TESTING DEVICES AND MEASUREMENT SYSTEMS USED IN EXPERIMENT Loading Devices Based upon He previous discussion of testing methods, the diametral test was selected for use in developing a standard test procedure for resilient modulus testing of asphalt concrete. The diametral test can be performed on small field core specimens and hence is practical for routine use. Also, the indirect tension to which the specimen is subjected during loading simulates reasonably well the tensile stress condition in He bottom of the asphalt concrete layer. Because of the popularity of the diametral test method, a number of test apparatuses have been developed which have important fundamental differences in equipment design concepts. Practically no research, however, has been previously performed to evaluate these testing systems. An experiment was therefore designed to identify the most reliable and accurate diametral test apparatus available and then to develop appropriate test procedures to allow the device to be used for routine testing. A representative group of the most promising diametral testing devices were carefully selected for evaluation In He testing program. The testing devices chosen are as follows: Retsina device, MTS device, Baladi's device and the SHRP Load Guide device. A detailed description of these testing systems is given in Appendix A. The simplest comparisons between He test devices can be made with respect to Heir loading configurations and diametrical deformation measurement systems. The Retsina device has fully independently aligned upper and lower loading strips and EVDTs that are clamped on the specimen to measure deformation. The MTS system has a guide rod that semi-rigidly aligns He upper and lower loading platens and extensometers Hat clamp on to ache specimen for deformation measurements. Both the Baladi and SHRP devices have heavy guide posts Hat rigidly align the upper and lower loading strips. 
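To make the diametral calculation concrete, the short Python sketch below works through the classical ASTM D 4123-style indirect-tension relations for a specimen loaded across its vertical diameter. The coefficients 3.59 and 0.27 correspond to deformations measured across the full horizontal and vertical diameters and are not taken from this report; the "elastic analysis" referred to later generalizes them for other gauge lengths and loading-strip widths, and all numeric inputs here are hypothetical.

```python
# Indirect-tension (diametral) resilient modulus -- illustrative sketch only.
# Coefficients follow the classical ASTM D 4123 form for deformations measured
# across the full specimen diameters; different gauge geometry changes them.

def poissons_ratio(delta_h, delta_v):
    """Resilient Poisson's ratio from recoverable horizontal/vertical deformations."""
    return 3.59 * (delta_h / delta_v) - 0.27

def resilient_modulus(load, thickness, delta_h, nu):
    """Resilient modulus MR = P (nu + 0.27) / (t * delta_H), in psi for lb/in. inputs."""
    return load * (nu + 0.27) / (thickness * delta_h)

# Hypothetical test values (not from the report): 4 in. diameter, 2.5 in. thick specimen
P  = 200.0     # repeated load amplitude, lb
t  = 2.5       # specimen thickness, in.
dH = 1.0e-4    # recoverable horizontal deformation, in.
dV = 5.0e-4    # recoverable vertical deformation, in.

nu_meas = poissons_ratio(dH, dV)                  # measured Poisson's ratio (~0.45 here)
MR_meas = resilient_modulus(P, t, dH, nu_meas)    # MR with the measured nu
MR_assm = resilient_modulus(P, t, dH, 0.35)       # MR with an assumed nu of 0.35

print(f"nu = {nu_meas:.2f}")
print(f"MR with measured nu = {MR_meas:,.0f} psi")
print(f"MR with assumed nu  = {MR_assm:,.0f} psi")
```

The gap between the last two printed values illustrates why the report treats the choice between measured and assumed Poisson's ratio as one of the larger sources of difference in reported moduli.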
20 OCR for page 10 for the data collected for five consecutive load cycles, might indicate whether the deformations have become stable. At 41°F, no significant trend was observed. The second preconditioning level (100 cycles) appeared to be reasonable with regard to the variation in Poisson's ratio and resilient moduli and was chosen as Me preconditioning level for 41°F. As seen earlier, a significant trend was not evident in MR values so the choice of this level should not affect the resilient modulus value. At 77°F, the coefficient of variation of the five cycle resilient modulus data decreased as the number of preconditioning cycles increased from 25 to 75 or 150 repetitions. Also, the resilient moduli decreased suggesting more damage. For 77°F, the number of preconditioning cycles to use was selected to be between the 2nd and 3rd preconditioning levels. (i.e., between 75 and 100 preconditioning cycles). At 104°F, We 2nd and 3rd levels showed improved performance of resilient modulus values, both being almost comparable. The MR values kept decreasing with increasing number of preconditioning cycles. Also, keeping the number of preconditioning cycles small means less damage to the specimen. Thus, at 104°F a preconditioning level of 50 cycles was chosen for final testing. Calculation of Poisson's Ratio. In the first stage of testing, the values of resilient modulus and Poisson's ratio were computed in accordance with the SHRP P07(Nov. 1992) procedure as well as the elastic analysis. Poisson's ratios calculated from the SHRP P07 analysis (pr.sh.xv) were usually lower than those obtained using the elastic analysis (pr.el.xv). The difference was approximately 0.01 to 0.05 at 41°F, 0.05 to 0.15 at 77°F, and 0.05 to 0.20 at 104°F. Also, the SHRP P07 analysis with an assumed value of Poisson's ratio always gave significantly higher values of resilient moduli than those calculated from the elastic analysis with an assumed Poisson's ratio (i.e., mr.sh.x.a were always significantly higher than mr.el.x.a). Discussion of Results of Stage 2 Testing The average values of resilient modulus and Poisson's ratio from five consecutive load cycles and the associated coefficients of variation have been tabulated in Appendix ~ (rabies I-! ~ to I-14~. The test ID consists of four letters which indicate He following: (~) specimen type: MI, M2, M3, CI, C2, or C3; (2) load amplitude level: I, 2, or 3; (3) preconditioning level: I, 2, or 3. The elastic analysis was used since He over me~ods do not consider different gage lengths for measurement of vertical and horizontal deformations. Poisson's Ratio. Better Poisson's ratio results were obtained at the higher temperatures. For the first trial at 77°F (testing was done at 77°F before testing at 104°F), most of He Poisson's ratio values were between 0.2 and 0.5 with small five-cycle variances. The coefficient of variation for the Poisson's ratio values (pr.el.xm) improved (reduced) at higher load amplitudes. At the highest level of load, the coefficient of variation was less Can 5% in almost. all He cases. For the second trial at 77°F (testing done at 77°F after testing at 104°F), all He same specimens showed an increase in Poisson's ratio values, indicating that damage does increase Poisson's ratio (Figure 42~. Hence, Poisson's ratio serves as an indicator to the damage occurring in the specimen. However, for high load amplitudes Poisson's ratio values were not, comparatively, as different as Hey were for low amplitudes. 
This behavior could be a result of the load-history dependence of asphalt concrete. The specimen goes through different thermodynamic states and some of this process is irreversible. The thermodynamic state of the specimen changes after the largest load has been applied during the first trial. Hence, although the magnitude of loads for the smaller amplitude in the first and second trial is the same, the specimen behavior is different. However, when the largest load is applied again in the second trial, the specimen shows comparable behavior as it is very nearly at the same thermodynamic state as it was when the larger load was applied during the first trial. As before, the variation in Poisson's ratio for the second trial was small and it decreased with increasing load amplitude (Figure 43). At 77°F it appears that higher load amplitudes result in more reasonable Poisson's ratios and smaller variance in five-cycle data.

Figure 42. Comparison of Poisson's ratio (pr.el.xm) and resilient modulus values from the first and second trials at 77°F, Stage 2 tests.

At 104°F, most Poisson's ratio values were between 0.4 and 0.6 with the coefficient of variation usually less than 5%. A trend of smaller coefficient of variation with increasing load amplitude was not seen. At load amplitude levels 2 and 3 the coefficients of variation were similar. For field cores, the use of a measured Poisson's ratio may lead to a higher estimation of resilient moduli values. For example, a constructed surface course has a resilient modulus of 500,000 psi and a measured Poisson's ratio of 0.3. After 10 yrs., the pavement needs maintenance and resilient moduli of field cores are evaluated. Because of an increase in Poisson's ratio due to damage, the measured resilient moduli after 10 yrs. might not show any significant decrease, thus resulting in an overestimation of the pavement condition. In such cases, the resilient moduli should be based on horizontal deformation and the initially measured value of Poisson's ratio of 0.3. Then, the calculated resilient moduli would represent the deterioration of the pavement. As more data becomes available, consideration should be given to the use of Poisson's ratio as a direct indicator of deterioration in an asphalt concrete pavement.

Load Amplitude. Overall, the five-cycle coefficient of variation for Poisson's ratio became lower with increased load amplitude at 41°F and 77°F. However, at 104°F, the five-cycle variance was almost equivalent for load amplitude levels 2 and 3. At 41°F there was an overall increase in resilient moduli (mr.el.x.a) with increasing load amplitude. The coefficient of variation in the resilient moduli decreases at higher load amplitudes. Also, the highest load level did not seem to cause any significant damage (based on measured vertical deformation as the tests progressed). Higher load amplitude is required to generate adequate values of deformation for measurement purposes. Thus, at 41°F, testing at the SHRP P07 (Nov. 1992) level of 30% of the indirect tensile strength at 77°F is recommended. The resilient moduli (mr.el.x.a) from the first trial at 77°F exhibited a small decrease with increasing load amplitude. However, resilient moduli values from the EXSUM setup (mr.el.xm.c) usually increased with increasing load amplitude. At the higher loads the resilient moduli (mr.el.x.a and mr.el.xm.c) agree better than at other levels. As before, the coefficient of variation reduced with increasing load amplitude. The results from the second trial at 77°F showed no trend in the mr.el.x.a values, but the coefficient of variation reduced with increases in load amplitude. A comparison of the resilient moduli values (mr.el.xm.c) at the highest load amplitude with the first trial at 77°F revealed a good similarity except for the 4 in. diameter and 2.5 in. thick specimens (types M1 and C1). The reduction in resilient moduli resulting from damage to the specimens was counteracted by the increase in Poisson's ratio due to damage. Thus it seems that once we are able to measure Poisson's ratio with confidence, the resilient moduli values should be calculated based on that value. So, if slight damage is occurring to the specimen, the decrease in resilient modulus due to the damage can be compensated by an increase in Poisson's ratio values. A comparison of resilient modulus (mr.el.x.a) and Poisson's ratio (pr.el.xm) values between the two trials suggests that while there was a decrease in resilient modulus values (Figure 42) of approximately 5% to 20%, there was a corresponding 5% to 25% increase in pr.el.xm values (Figure 42).

Figure 43. Variation of Poisson's ratio (pr.el.xm) with load amplitude for the second trial at 77°F, Stage 2 tests.

At 104°F, the overall trends indicate an increase in resilient moduli with increased amplitude for both the mr.el.x.a data, from assumed Poisson's ratio, and the mr.el.xm.c data, from calculated Poisson's ratio. As with Poisson's ratio, the variance in the resilient moduli at the 2nd and 3rd load levels was reasonably similar although the scatter was relatively large. At 104°F, the specimen undergoes significant damage, as can be seen from a rapid increase in the permanent vertical deformation as the test progresses. Thus, to limit the damage to minimal values, it becomes important to keep the load levels as small as possible, but large enough to maintain adequate specimen deformations and load control. Significant deformations were obtained at 104°F, even with small loads in the resilient modulus test. Hence, it is recommended that a smaller load amplitude of 3.5 to 4% of the failure load should be used. This load should give essentially the same resilient moduli values (Appendix J, Table J-14).

Recommended Seating Loads. SHRP recommended seating loads are suitable for testing at 41°F and 77°F, but at 104°F, the seating load should be reduced. The 10% seating load recommended by the SHRP P07 protocol is not necessary. Instead, seating loads of 5%, 4%, and 4% of the total load to be applied at each cycle for resilient modulus testing are recommended at 41, 77 and 104°F, respectively. At 104°F, a minimum seating load of 5 lbs. must be maintained, and the seating load should not exceed 20 lbs.

Specimen Size and Type

Specimen Diameter. The effect of specimen diameter can be studied by comparing the resilient moduli and Poisson's ratios obtained for specimen types M1 and M3, and C1 and C3.
From a comparison of resilient moduli and Poisson's ratios and their coefficients of variation, an influence of specimen diameter on specimen response is not apparent at 41°F. Tables J-11 to J-14 (Appendix J) show that the effect of specimen diameter on resilient modulus and Poisson's ratio seems to increase with increasing temperature. At higher temperatures the coefficient of variation reduced for the 6 in. specimen diameter. At 104°F, there was a 24% (medium gradation specimens) to 50% (coarse gradation specimens) decrease in resilient modulus (mr.el.x.a) compared to the 4 in. specimens. An assumed Poisson's ratio was used for both gradations for the 6 in. diameter specimen (Figure 44). However, at 77°F, only the coarse gradation specimen showed a decrease in resilient modulus with increase in diameter. Thus, it seems that at lower temperatures testing specimens of different sizes is less likely to affect results than at higher temperatures. At higher temperatures specimens possess more non-homogeneity, and this would cause a change in MR value with a change in diameter based on the aggregate size to specimen size ratio.

Specimen Thickness. The effect of specimen thickness can be investigated by observing the difference in behavior between specimen types M1 and M2, and C1 and C2. Tables J-11 to J-14 (Appendix J) show that no significant trend was seen at any temperature in the five-cycle coefficients of variation. Also, there was no consistent trend in the resilient moduli values (mr.el.x.a). Since there is no difference in the five-cycle variance, increasing the sample thickness will not help attain more repeatable results in the resilient modulus test.

Specimen Gradation. The resilient moduli for coarse gradation specimens (C1 and C2) were as much as 75% higher than for the corresponding medium gradation specimens. This difference in MR increased with increasing temperature, which is of significance for pavements constructed in regions having warm summer temperatures. For the 4 in. diameter specimens, the effect of gradation was smaller for the 4 in. thick specimens than for the 2.5 in. thick specimens (Figure 45). Thus, as the specimen size, thickness, and diameter increases, the effect of gradation decreases. The coefficients of variation obtained for medium gradation specimens were less than for the coarse gradation specimens.

Figure 44. Effect of specimen diameter on MR (mr.el.x.a, obtained from horizontal extensometer deformation using the elastic analysis and an assumed Poisson's ratio) for Stage 2 tests at 104°F.

Figure 45. Effect of gradation on MR (mr.el.x.a, obtained from horizontal extensometer deformation using the elastic analysis and an assumed Poisson's ratio) at different temperatures and different specimen sizes for Stage 2 tests.

The differences in coefficient of variation for the two gradations were close to zero for the larger 6 in. diameter specimens that were 4.5 in. thick. Thus the medium gradation specimens can be tested using 4 in. diameter and 2.5 in. thick specimens, but the coarse gradation specimens should be tested using 6 in. diameter, 3.75 in.
thick specimens to obtain good values of resilient moduli. Use of a 4 in. diameter specimen 2.5 in. thick is acceptable for testing medium gradation asphalt minces have a maximum aggregate size of about 3/4 in. However, for aggregate sizes greater than 3/4 in., a 6 in. diameter specimen 4.5 in. thick should be used to test these coarse gradation mixes. MULTI-LAB VALIDATION STUDY The details of the limited, multi-lab validation diametral resilient modulus test can be found in Appendix H. The general purpose of this study was to determine, with statistical analysis, the effects of multiple operators, retesting specimens, and assumed versus calculated Poisson's ratio on the resilient moduli determined for identical specimens. The specimens were tested at three different laboratories, using similar equipment but different equipment operators. The specific conclusions of the multi-lab validation study are: 1. The recommended testing protocol yields better estimates of Poisson's ratio, but the use of an assumed Poisson's ratio yields more consistent moduli. 2. The experience of the operator has a significant effect on the resilient modulus values during testing. The more experienced the operator, the less variation in the resilient moduli values. The difference in the resilient moduli for calculated versus assumed Poisson's ratio is also much smaller with an experienced operator. The coefficient of variation of the resilient moduli determined from calculated and assumed Poisson's ratios does, as indicated by the primary test program, increase with increasing temperature. 4. The number of times a specimen has been tested also has an effect on the resilient modulus. Structural damage to the retested specimens has a larger effect on resilient modulus than He lab-to- lab variation. The statistical analysis was performed on a very limited number of samples, making the interpretation of the statistical results somewhat less reliable. An extensive validation study is recommended to obtain better evaluations on sample-to-sample and lab-to-lab variations. COMPARISON OF LABORATORY AND BACKCALCULATED RESILIENT MODULI Resilient moduli for use in design are presently determined by both direct laboratory measurement and by backcalculation from falling weight deflectometer (FOOD) tests. Both laboratory tests and backcalculation procedures from field data have important limitations and advantages. In laboratory tests, fabrication of test specimens Hat duplicate field conditions and simulating in-situ stress states and environmental factors are difficult. In backcalculation procedures, the theory used assumes very ideal behavior of the materials (i.e., homogeneous, linear elastic, isotopic). Such materials are not found in pavements. The purpose of this study was to compare He resilient modulus determined using the proposed 93 OCR for page 10 laboratory procedure with FWD backcalculated values. A field test section on U.S. 421 in Norm Carolina was used to obtain field data and laboratory test specimens. The details of this study are given in Appendix T. however the specific conclusions of this study are: The backcalculated and laboratory AC resilient modulus values are similar if the field data is obtained on asphalt concrete layers less than 4 in. Hick. 2. There is a significant difference between the laboratory and field AC resilient moduli if the field data is obtained on sections with total asphalt concrete thicknesses of 9 in. or more. 
Since the data used In this study is rawer Emoted, it is not clear how much different the laboratory resilient modulus is from the FWD backcalculated modulus. Once LTPP field test data is fully collected and analyzed, a better assessment can be made on this issue. SUMMARY AND CONCLUSIONS Existing methods were reviewed for empirically predicting the resilient modulus for asphalt concrete. The Asphalt Institute Method, corrected for locally used materials and testing devices appears to offer a beginning point for developing a practical alternative to performing resilient modulus test on a routine basis. Different laboratory test methods, such as the repeated load biaxial test and the diametral test, apply different stress conditions to a specimen. As a result, the resilient moduli obtained from these different methods do not always agree. The repeated load diametral test was concluded to be the most practical, realistic method for evaluating the resilient modulus of asphalt concrete. An extensive resilient modulus testing program was, therefore, carried out using the diametral test. All tests were conducted using a 0. ~ sec., haversine shaped loading pulse using a closed loop, electro-hydraulic testing system. Experiments were performed to identify the most accurate and reliable diametral testing device. Loading equipment evaluated in the study were as follows: (~) Retina device, (2) MTS device, (3) Baladi's device and (4) SHRP Load Guide ~G) device. The following four deformation measuring devices were also studied: (~) stand along EVDTs, (2) an extensometer mounted on the specimen, (3) a gage-point-mounted (GPM) setup and (4) a special combined measurement system using a surface mounted EVDT to measure vertical deformation and externally mounted transducers to measure horizontal deformation. Diametral resilient modulus tests were performed on laboratory prepared asphalt concrete specimens having a coarse and medium gradation as well as on field cores and synthetic specimens. Temperatures of 41°F, 77°F and 104°F were used in these tests. In performing these tests, equipment calibration, including the use of synthetic specimens, was found to be a critical aspect required to obtain reliable test results. Specimen rocking was also determined to be an important consideration in selecting an appropriate loading device. A square wave load pulse produces more damage and a significantly smaller resilient modulus compared to a haversine wave. Therefore the square wave load pulse should not be used for resilient modulus testing since it is not representative of He pulse developed in the field. 94 OCR for page 10 5. The loading pulse time significantly affects He resilient modulus. A loading time of 0.2 sec. reduces Me values. and produces more damage than for a 0. ~ sec. pulse. A shorter loading time ~ , . ~ , _ ~ rat ~- ~ ~ ~ ~ ~ ~ ~ ~ ~ · . · . . ~ ~ .~ . ~ a~ -, ~ .~ of ().()5 sec. is representative ot~ high vehicle speeds, but is not practical as the repeatability of the test is poor and accurate load control at higher temperatures is hard to achieve. A loading time of 0.] sec. is therefore proposed which is in agreement with the SHRP P07 (Nov. 1992) procedure. 6. 7 8. 9. The ratio of rest period to loading period of 4 and 24 used in this StU6Y 40 not have a significant _ , - , . . ~ . . . .. . . . . . . . . . #. .. . . enect on the resilient module values. Also past research has shown that a rest perlocl/loaulng time ratio greater Han ~ provides no extra benefit. 
A rest period/Ioading time ratio of 9, as presently used by SHRP, is a good choice. At 77 and 104°F the resilient moduli decreased wig increasing number of preconditioning cycles. Based on a study of trends in the coefficient of variation for MR , the following preconditioning levels were Axed at the three deferent test temperatures to make resilient modulus test results more repeatable: 41° F: 77° F: 104° F: 100 cycles 100 cycles 50 cycles A significant difference between resilient moduli and Poisson's ratio is obtained using the SHRP P07 (Nov. 1992) analysis and the elastic analysis. The elastic analysis is essentially the same as the ASTM analysis except it allows the use of different deformation measurement gage distances while the ASTM analysis does not. The SHRP equations give resilient moduli values as much as 45% higher when an assumed Poisson's ratio is used. The elastic analysis with the appropriate coefficients for measurement geometry used in testing is the recommended approach. 10. ~- 12. Poisson's ratios obtained using the EXSUM system at higher temperatures are reasonable. The system can be modified with further use to make it simpler. The 4 in. diameter and 2.5 in. thick specimens may be acceptable for testing medium gradation mixes, but 6 in. diameter and 4.5 in. thick specimens should be used to test coarse gradation mixes. Grain sue distributions for the medium and coarse gradation mixes are given in Appendix B. Table Be-. SHRP recommended loads are suitable for testing at 41°F and 77°F, but at 104°F, the load should be reduced. The 10% seating load recommended by the SHRP P07 protocol is not necessary. Instead seating loads of 5, 4, and 4% of the total load to be applied at each load cycle for resilient modulus testing are recommended at Ill, 77, and 104°F, respectively. At 104°F, a minimum load of 5 Ibs. must be maintained, and the seating load should not exceed 20 Ibs. An Improved diametral test was developed to evaluate the resilient modulus of asphalt concrete. The proposed test procedure is given in Appendix C. A closed loop, electro-hydraulic testing system and also a data acquisition system is used to apply a 0. ~ sec. haversine-shaped load pulse to a disk-shaped specimen. 13. Loading Device. The SHRP EG device minimizes rocking of the specimen. The good performance is apparently due to (~) use of two guide columns, (2) a counter-balance system, (3) 95 OCR for page 10 an innovative semi-rigid connection between the upper plate and the load actuator, and (4) its sturdiness. The disadvantages are its bulkiness, complication of use, possible inertia from the counter-balance system, friction in the guide columns, and limitation of the sue of the sample that can be used. 14. 15. 16. 17. Ad. 19. Mountable Extensometer. A mountable extensometer device, compared to the stand-alone EVDT measurement device, provides less variance and hence better repeatability within the five consecutive cycles used for resilient modulus determination. However, using the SHRP EG device EVDTs gave comparable performance to the mountable extensometer. Mountable deformation measurement devices are recommended for resilient modulus testing because of the smaller variability. Poisson's Ratio Importance. Poisson's ratio is one of the most important parameters influencing the resilient modulus. 
The variation In MR values due to the testing axis dependency and different lengths of rest periods are almost negligible compared to the magnitude of difference in the MR values from assumed and calculated Poisson's ratios. Poisson's ratio should be evaluated using the EXSUM deformation measurement system. EXSUM Deformation Measurement Device. The proposed EXSUM deformation measurement system provides a promising measurement method for determination of consistent and reasonable Poisson's ratios. At 41°F, however, increase In variability occurs due to misalignment and rocking which become more Important for the small deformations occurring at low temperatures. Use of the SHRP EG device, or its modification, together with the EXSUM setup ensures obtaining reasonable values of Poisson's ratio even at low temperatures. The use of the EXSUM setup requires an increase in testing time compared to conventional measurement systems because of the significant time required for mounting the EVDT on the specimen. Preconditioning. Specimens were subjected to 3 different numbers of preconditioning load cycles at each temperature. No significant difference was observed in the variation of resilient moduli and Poisson's ratio for the last 5 load cycles for He largest two preconditioning cycles. However, MR values did decrease with increasing number of preconditioning cycles. For 41 °F and 77°F, 1 00 p recond itio ning cycl es are reco mmend ed wh il. e th e us e of 50 cy c I es i s r e co mm en d ed at 1 04 ° F . Calculation of MR. A significant difference exists between resilient moduli and Poisson's ratio values computed using the SHRP PO7 analysis and the elastic analysis used in this study which is similar to the ASTM analysis. The SHRP analysis gives higher values when an assumed Poisson's ratio is used as compared to the elastic analysis with an assumed Poisson's ratio. Load Amplitude. The load amplitudes recommended in SHRP protocol are suitable for testing at 41°F and 77°F, but at 104°F a smaller load should be used. Load levels corresponding to 30, 15, and 4% of the indirect tensile strength at 77°F are recommended for testing at 41°F, 77°F, and 104°F, respectively. 96
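The load-level recommendations above translate directly into test setpoints. The short Python sketch below computes cyclic load amplitudes at 30%, 15% and 4% of the 77°F indirect tensile strength for testing at 41, 77 and 104°F, and seating loads at 5%, 4% and 4% of the cyclic load with the recommended 5–20 lb bound applied at 104°F. The tensile strength value used is hypothetical, not taken from the report.

```python
# Test setpoints from the recommended fractions -- illustrative sketch only.
# The indirect tensile strength below is an assumed value, not from the report.

failure_load_77F = 2500.0   # indirect tensile strength (failure load) at 77 F, lb  [assumed]

amplitude_fraction = {41: 0.30, 77: 0.15, 104: 0.04}   # fraction of the 77 F strength
seating_fraction   = {41: 0.05, 77: 0.04, 104: 0.04}   # fraction of the cyclic load

for temp_F, amp_frac in amplitude_fraction.items():
    cyclic_load = amp_frac * failure_load_77F
    seating = seating_fraction[temp_F] * cyclic_load
    if temp_F == 104:
        # recommended bounds at 104 F: at least 5 lb, no more than 20 lb
        seating = min(max(seating, 5.0), 20.0)
    print(f"{temp_F:>3} F: cyclic load = {cyclic_load:6.1f} lb, seating load = {seating:5.1f} lb")
```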
{"url":"http://www.nap.edu/openbook.php?record_id=6353&page=10","timestamp":"2014-04-19T12:03:32Z","content_type":null,"content_length":"81027","record_id":"<urn:uuid:1a690f23-e351-4acf-ace8-5eccbe875021>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternative Sample Sizes for Verification Dose Experiments and Dose Audits

Dr. Wayne A. Taylor
Joyce M. Hansen

ISO 11137 (1995), "Sterilization of Health Care Products - Requirements for Validation and Routine Control - Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the cost associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.

Keywords: Verification Dose Experiments, Dose Audits, Radiation Sterilization, Acceptance Sampling, OC Curves, ASN Curves, Double Sampling Plans, Quick Switching Systems.

The minimum sterilization dose depends on the product bioburden, i.e., the number and type of microorganisms found on the product. The bioburden is influenced by raw materials along with manufacturing personnel, environment and procedures. It may change over time. Therefore, the resistance of these microorganisms to radiation must be monitored on an ongoing basis. ISO 11137 (1995), "Sterilization of Health Care Products - Requirements for Validation and Routine Control - Radiation Sterilization", provides three methods for determining the sterilization dose. This paper addresses the procedures used by Methods 1 and 2 to monitor bioburden resistance.

To evaluate resistance of the microorganisms, a specified number of product units are irradiated at a dose that is less than the normal sterilization dose. This dose, called the verification dose, is calculated to give a Sterility Assurance Level (SAL) of 10^-2. The SAL is the probability that a unit of product contains one or more viable microorganisms. Units containing viable microorganisms will exhibit growth following a test of sterility. This growth is referred to as a positive. At a SAL of 10^-2, one would expect on average that 1% of the units will test positive. A larger than expected number of units testing positive indicates that the resistance may be greater than assumed. The bioburden resistance is initially tested using the verification dose experiment. Then, dose audits are performed quarterly to ensure the bioburden resistance has not increased. Both procedures are attribute acceptance sampling plans.
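Before looking at the plans in detail, it helps to see what "a larger than expected number of positives" means numerically. The Python sketch below evaluates the binomial distribution for a 100-unit sample when exactly 1% of units are positive, which is the behavior expected at the 10^-2 SAL; acceptance rules of the form "no more than a small number of positives" follow from this kind of calculation.

```python
# Expected number of positives in 100 units when the true positive rate is 1%.
# Illustrative sketch using the binomial distribution.
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k positives in n units at positive rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.01
probs = [binom_pmf(n, k, p) for k in range(3)]
print(f"P(0 positives)  = {probs[0]:.2f}")        # ~0.37
print(f"P(1 positive)   = {probs[1]:.2f}")        # ~0.37
print(f"P(2 positives)  = {probs[2]:.2f}")        # ~0.18
print(f"P(3 or more)    = {1 - sum(probs):.2f}")  # ~0.08
```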
This article uses the theory of acceptance sampling to evaluate the protection provided by these two procedures and then to identify alternate sampling plans that provide equivalent protection but may do so at a lower cost. The plans so selected can reduce the number of units tested by as much as 50%. This article expands on Phillips, Taylor, Sargent, & Hansen (1996). The Method 1 verification dose experiment requires that 100 product units be irradiated at the verification dose and a test of sterility performed on each product unit. The results are acceptable if no more than two units test positive. This procedure is an attribute single sampling plan with sample size n=100 and accept number a=2 (Taylor, 1992). What protection does this sampling plan provide? To understand its protection, one must understand how it behaves. First, assume the bioburden resistance is such that 1% of the units test positive. Figure 1 shows the results of repeated application of this sampling plan. 92% of the time the verification dose experiment is passed. Figure 1. Behavior verification dose experiment for a 1% positive rate Now assume the bioburden resistance increases to cause a 3% positive rate. Figure 2 shows the expected behavior. Now only 42% pass the verification dose experiment. Figure 2. Behavior verification dose experiment for a 3% positive rate Finally assume the bioburden resistance increase even further to a 6% positive rate. Figure 3 shows the expected behavior. In this case only 6% pass the verification dose experiment. Figure 3. Behavior verification dose experiment for a 6% positive rate Summarizing the behavior of the verification dose experiment, a 1% positive rate routinely passes, a 6% positive rate routinely fails, while a 3% positive rate sometimes passes and sometimes fails. The behavior of a sampling plan is generally displayed in the form of an Operating Characteristic Curve (OC Curve). The OC curve of the verification dose experiment is shown in Figure 4. The bottom axis gives different positive rates. The left axis gives the corresponding probability of passing. The three cases covered in Figures 1-3 are highlighted. Figure 4. OC curve of Method 1 verification dose experiment Rather than drawing the OC of each sampling plan, the protection provided by a sampling plan is often summarized using two points on the OC curve called the AQL and LTPD. The AQL (Acceptable Quality Level) is that positive rate that has a 95% chance of passing. For the verification dose experiment, the AQL is 0.823% (See Figure 5). The AQL represent a positive rate routinely passed by the sampling plan. The LTPD (Lot Tolerance Percent Defective) is that positive rate that has a 10% chance of passing (90% chance of failing). For the verification dose experiment, the LTPD is 5.23%. The LTPD represents a positive rate routinely failed by the sampling plan. Figure 5. AQL and LTPD of Method 1 verification dose experiment Together the AQL and LTPD summarize the protection of the verification dose experiment. It routinely passes when the positive rate is 0.823% (AQL) or better. It routinely fails when the positive rate is 5.23% (LTPD) or higher. It sometimes passes and sometimes fails when the positive rate is between the AQL and LTPD. Ideally we would like the verification dose experiment to always pass if the positive rate is 1% or less and to always fail if the positive rate is above 1%. However, since the decision must be based on a sample, there is always a risk of making an incorrect decision. 
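The AQL and LTPD quoted above can be recovered numerically by inverting the same binomial calculation: find the positive rate at which the n = 100, a = 2 plan passes with probability 0.95 and 0.10, respectively. A small bisection sketch in Python:

```python
# Solve the OC curve of the n=100, a=2 plan for its AQL (95% point) and LTPD (10% point).
from math import comb

def p_accept(n, a, p):
    """Probability of a or fewer positives in n units at positive rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a + 1))

def quantile(n, a, target, lo=0.0, hi=1.0):
    """Positive rate at which P(accept) equals `target` (P(accept) falls as p rises)."""
    for _ in range(60):          # bisection
        mid = (lo + hi) / 2
        if p_accept(n, a, mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

aql  = quantile(100, 2, 0.95)    # ~0.823%
ltpd = quantile(100, 2, 0.10)    # ~5.23%
print(f"AQL = {aql:.3%}, LTPD = {ltpd:.3%}")
```

Between these two positive rates the outcome of the experiment is genuinely uncertain.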
As a result, there is always a chance of passing when the positive rate is above 1%. There is also a chance of failing when the positive rate is below 1%. Together, the AQL, LTPD and OC curves describe these risks. Other sampling plans exist which have similar AQLs and LTPDs. One such plan is the attribute double sampling plan: Alternative Sampling Plan Verification Dose Experiment First sample size: n[1] = 52 First accept number: a[1] = 0 First reject number: r[1] = 3 Second sample size: n[2] = 52 Second accept number: a[2] = 2 This plan works as follows. Initially 52 (n[1]) units are sterilized at the verification dose and the number of positive sterility tests tallied. If 0 (a[1]) positives are found, the verification dose experiment passes. If 3 (r[1]) or more positives are found, the verification dose experiment is not passed. In the event that 1 or 2 positives are found, 52 (n[2]) more units are sterilized at the verification dose and sterility tested. If the total number of positives in the combined sample of 102 units is 2 (a[2]) or less, the verification dose experiment passes. If the total number of positives in the combined sample of 102 units is 3 or greater, the verification dose experiment is not passed. The OC curve, AQL and LTPD of this alternative double sampling plan is shown in Figure 6. This sampling plan was obtained using the software accompanying Taylor (1992). Figure 6: OC curves of Method 1 verification dose experiment and alternative double sampling plan If two sampling plans have similar AQLs and LTPDs, they will have similar OC curves. They will provide the same protection against an increase in bioburden resistance. They will also falsely fail reduced bioburden resistance at the same rate. The protection and risks are similar. As a result, the two sampling plans are substantially equivalent procedures. Figure 7 shows the Average Sample Number (ASN) curves of these two sampling plans. The bottom axis gives different positive rates. The left axis gives the corresponding average number of units tested. The current verification dose experiment is a single sampling plan that always requires 100 units. However, the number of units required by the alternative double sampling plan varies. If the positive rate is near zero, the verification dose experiment generally passes after taking the initial 52 samples. As the positive rate starts to increase, the second sample will be required more often as the ASN curve starts to increase. As the positive rate increases further, the verification dose experiment will fail more frequently on the first sample. As a result, the ASN curves starts dropping again till it again approaches 52. Figure 7: ASN curves of Method 1 verification dose experiment and alternative double sampling plan When comparing ASN curves, you should concentrate on that region corresponding to the average positive rate that you have observed over time. In Hansen (1993), it is estimated that the industry average positive rate is around 0.5%. At 0.5%, the double sampling plan averages 63.8 units tested compared to 100 for the current procedure. This is a 46% reduction in the number of units tested. The number of units tested is not the only consideration. The double sampling is a more complex procedure that sometimes requires an additional irradiation processing of samples. Figure 8 shows the probability of a second sample for the two procedures. The current verification dose experiment is a single sampling plan that never requires a second sample. 
Therefore the probability is always 0%. However, the probability of a second sample for the alternative double sampling plan varies depending on the positive rate. At a positive rate of 0.5%, the probability of a second sample is 0.2272. Figure 8: Probability of second sample of Method 1 verification dose experiment and alternative double sampling plan So which plan is best? This depends on the cost associated with testing an individual unit (c[unit]) and the cost of going to the second stage (c[stage]). When the cost of testing the individual units is the predominate cost, the alternative double sampling plan is preferred. When the cost of going to the second stage is the predominate cost, the current single sampling plan is preferred. Neither plan is best for all situations. The total inspection costs (c[total]) associated with the two plans are: Single sampling plan: c[total] = 100 c[unit] Double sampling plan: c[total] = ASN(p) c[unit] +P2 (p) c[stage] where ASN(p) is the average sample number and P2(p) is the probability of the second stage at a positive rate of p. When the positive rate is 0.5%, ASN(0.5%) = 63.8 and P2(0.5%) is 0.2272. In this case, the best plan is: Verification Dose Experiment Choosing the Best Sampling Plan for a 0.5% Positive Rate The Method 1 single sampling plan is best when The alternative double sampling plan is best when For a positive rate of 0.5%, Figure 9 compares the total cost of the two sampling plans dependent on the ratio of the two inspection costs. The point of intersection of these two curves is 159. Figure 9: Cost comparison of Method 1 verification dose experiment and alternative double sampling plan for a positive rate of 0.5% DOSE AUDIT Dose audits are required for both Methods 1 and 2. The dose audit requires that 100 product units be irradiated at a 10^-2 SAL dose and a test of sterility performed on each product unit. The results are acceptable if 2 or fewer units test positive. If 3 or 4 positives are found, a retest is allowed. Essentially this is a double sampling plan. Dose Audit as Double Sampling Plan First sample size: n[1] = 100 First accept number: a[1] = 2 First reject number: r[1] = 5 Second sample size: n[2] = 100 Second accept number: a[2] = 2 This is a nonstandard double sampling plan in that a[2]^* applies only to the number of positives in the second sample rather than the normal procedure where it applies the cumulative number of positives in both samples. The OC curve, AQL and LTPD of the dose audit are given in Figure 10. Figure 10: OC curves of dose audit and alternative 1 double sampling plans Three alternative sampling plans are shown below. One is a single sampling plan and the other two are standard double sampling plans. Dose Audit Alternative 1 First sample size: n[1] = 50 First accept number: a[1] = 0 First reject number: r[1] = 4 Second sample size: n[2] = 100 Second accept number: a[2] = 4 Dose Audit Alternative 2 First sample size: n[1] = 70 First accept number: a[1] = 1 First reject number: r[1] = 6 Second sample size: n[2] = 130 Second accept number: a[2] = 5 Dose Audit Alternative 3 Sample size: n = 140 Accept number: a = 4 The OC curves, AQLs and LTPDs of the three alternative plans are also shown in Figure 10. These three procedures are substantially equivalent to the dose audit procedure including retest. While the protection is the same, the costs are not. Figure 11 shows the ASN curves of these procedures and Figure 12 shows the probabilities of second samples. 
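The average sample number and second-sample figures quoted for these plans can be reproduced with a few lines of arithmetic. The Python sketch below evaluates, at the industry-average 0.5% positive rate, the probability of needing a second sample and the resulting ASN for the alternative verification-dose double plan, for dose-audit Alternative 1, and for the current dose audit with its retest provision.

```python
# Average sample number (ASN) and probability of a second sample at p = 0.5%.
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def second_sample_prob(n1, a1, r1, p):
    """P(first-stage count falls strictly between the accept and reject numbers)."""
    return sum(binom_pmf(n1, k, p) for k in range(a1 + 1, r1))

p = 0.005

# Alternative verification-dose double plan: n1=52, a1=0, r1=3, n2=52
p2_ver  = second_sample_prob(52, 0, 3, p)
asn_ver = 52 + 52 * p2_ver            # ~0.227 and ~63.8

# Dose-audit Alternative 1 double plan: n1=50, a1=0, r1=4, n2=100
p2_alt1  = second_sample_prob(50, 0, 4, p)
asn_alt1 = 50 + 100 * p2_alt1         # ~72.2

# Current dose audit with retest: 100 more units only when 3 or 4 positives are found
p2_audit  = binom_pmf(100, 3, p) + binom_pmf(100, 4, p)
asn_audit = 100 + 100 * p2_audit      # ~101.4

print(f"verification-dose double plan: P2 = {p2_ver:.4f}, ASN = {asn_ver:.1f}")
print(f"dose-audit alternative 1:      P2 = {p2_alt1:.4f}, ASN = {asn_alt1:.1f}")
print(f"current dose audit (retest):   P2 = {p2_audit:.4f}, ASN = {asn_audit:.1f}")
```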
Alternative 1 can reduce the number of units tested by 29% (from 101.4 to 72.2). On the other hand, alternative 3 can eliminate the need for a second sample. Figure 11: ASN curves of dose audit and alternative sampling plans Figure 12: Probability of second sample of dose audit and alternative sampling plans Figure 13 compares the total cost of the four procedures when the positive rate is 0.5%. Criteria for selecting the best plan are given below. Modifications and corrections to the criteria given in Phillips (1996) were suggested by Kyprianou (1996). Dose Audit Choosing the Best Sampling Plan for a 0.5% Positive Rate The alternative 1 double sampling plan is best when: The alternative 2 double sampling plan is best when: The Method 1 retest procedure is best when: The alternative 3 single sampling plan is best when: Figure 13: Cost comparison of dose audit and alternative sampling plans for a positive rate of 0.5% QUICK SWITCHING SYSTEMS The Method 1 verification dose experiment combined with the dose audit is a quick switching system (QSS). QSSs are investigated in Taylor (1992, 1996). A QSS consists of two sampling plans along with a set of rules for switching between them. The first plan, called the reduced plan, is intended for use when the process is running well. The dose audit sampling plan serves the role of the reduced plan. The second plan, called the tightened plan, is intended for use when a problem is encountered. One always starts a QSS with the tightened plan. The verification dose experiment sampling plan serves the role of the tightened plan. A QSS has a set of rules for switching between the two plans. Figure 14 shows the Method 1 verification dose experiment and dose audit represented as a QSS. Figure 14: Method 1 as a quick switching system While Method 1 is a QSS, it differs from most QSSs in that the reduced inspection actually requires more units to be tested than the tightened plan. This is a result of the fact that it has a steeper OC curve. Generally, the tightened inspection requires the greater number of units in order to provide superior protection. It is obviously preferable to detect a sterilization dose problem before it is used, rather than after several months. When designing a QSS, one must protect against the bioburden resistance initially exceeding that of the theoretical bioburden resistance distribution. One must also protect against future increases in resistance. If both situations are equally likely, a more traditional QSS with tighter initial inspection makes sense. Using this approach, an alternative QSS was selected using the software accompanying Taylor (1992). This QSS is shown in Figure 15. It was selected to offer: (1) slightly increased protection against an initially high level of bioburden resistance, (2) slightly decreased protection following a sudden increase in the bioburden resistance, and (3) equivalent protection against more gradual changes in the bioburden resistance. Figure 15: Alternative quick switching system The scheme or stationary OC curves for the two QSSs are shown in Figure 16. The stationary OC curve of a QSS describes its protection during periods where the positive rate is constant or changing gradually. Because their OC curves are nearly the same, both QSSs offer equivalent protection under this scenario. Figure 16: Stationary OC curves Of special importance is the protection provided during start-up. Figure 17 shows the OC curves of the different tightened plans used at startup. 
Since its OC curve is lower, the alternative QSS offers slightly better protection. Figure 17: Start-up protection Finally, one must be concerned with the protection provided following a sudden increase in the bioburden resistance. Suppose the QSS is in its reduced state when the resistance suddenly increases. Figure 18 shows the probabilities of passing the first dose audit following such an increase. The alternative QSS offers slightly decreased protection. However, this decreased protection only occurs following a jump in resistance between one quarterly audit and the next. When the resistance trends upward over time, the stationary OC curve better describes the protection, in which case both plans offer equivalent protection. Figure 18: Protection following increase in positive rate Figures 16 to 18 show that the alternative QSS offers increased protection under certain scenarios while offering decreased protection under others. In no case is the difference large. Taken all together, the alternative QSS provides equivalent protection. While the alternative QSSs offers equivalent protection, it differs greatly in terms of the number of units tested. Figure 19 shows the ASN curves of these two QSSs. At a 0.5% positive rate, the alternative QSS decreases the number of units tested by 55%. Figure 19: ASN curves of method 1 and alternative quick switching system CONCLUSION The alternative sampling plans presented in this article provide equivalent protection to the current procedures for the verification dose experiment and dose audit. While they offer the same protection, the procedures result in different testing costs. These alternative sampling plans may significantly reduce the costs of performing this testing. No one plan is best for all situations. The best sampling plan depends on the per unit cost of testing and the cost of going to a second sample. Criteria are provided for deciding which sampling plan is best suited for a particular application. Hansen, J. (1993). "AAMI Dose Setting: Ten Years Experience in Sterilization of Medical Products." Proceedings of Kilmer Conference, Brussels, Belgium. ISO 11137 (1995), "Sterilization of Health Care Products - Requirements for Validation and Routine Control - Radiation Sterilization", International Organization for Standardization, Switzerland. Kyprianou, E. (1996), A review of the article "Reducing sample sizes of AAMI Gamma Radiation sterilization verification experiments and dose audits", private correspondence. Phillips, G.W.; Taylor, W. A.; Sargent, H. E. and Hansen, J. M. (1996), "Reducing Sample Sizes of AAMI Gamma Radiation Sterilization Verification Experiments and Dose Audits," Quality Engineering, Volume 8, Number 3, pp. 489-496,. Taylor, Wayne A. (1992). Guide to Acceptance Sampling, Taylor Enterprises, Inc., Libertyville, Illinois. Taylor, Wayne A. (1996). "Quick Switching Systems," Journal of Quality Technology, Vol. 28, No. 3, pp. 460-472. Presented at 10th International Meeting on Radiation Processing, Anaheim, California, 1997 To appear in Radiation Physics and Chemistry, Elseviar Science Ltd., Exeter, UK Copyright © 1997 Taylor Enterprises, Inc.
{"url":"http://www.variation.com/techlib/as-8.html","timestamp":"2014-04-16T07:13:33Z","content_type":null,"content_length":"49337","record_id":"<urn:uuid:41eb964e-1ab5-4abf-9be1-9b75e1b88330>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Explain What Reactive Power Q Flowing Into A One ... | Chegg.com Image text transcribed for accessibility: Explain what reactive power Q flowing into a one port is in terms of sinusoids v(t) and i(t) and phasors V and I and explain what Q means physically (you may use an example). The instantaneous power into a one port containing resistors and inductors has maximum value 100(1 + ) W and minimum value 100(1 - ) W. Find the real and reactive powers P and Q into the one port. Suppose the one port is a series RL circuit and the one port voltage has amplitude 100 V. Calculate R. L and the inductor current I. (Use 60Hz for the frequency.) Calculate the inductor current and voltage as functions of time and hence the instantaneous power into the inductor. Confirm that the amplitude of the instantaneous power into the inductor is Q. What assumptions about the system are we tacitly making for the calculations in (a).(b) and (c)? Demonstrate with a simple one phase AC example that adding a small parallel capacitor to a series or parallel RL load raises the load voltage VL. (Model the g Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/explain-reactive-power-q-flowing-one-port-terms-sinusoids-v-t-t-phasors-v-explain-q-means--q3581241","timestamp":"2014-04-21T15:52:45Z","content_type":null,"content_length":"21254","record_id":"<urn:uuid:1748eb67-15b0-4762-8105-f754234eda62>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
areas of two spheres December 7th 2010, 03:46 AM #1 Dec 2010 areas of two spheres This came from an engineering board exam and I can't solve it. Find approximately the difference between the areas of two spheres whose radii are 4 ft. and 4.05 ft. The surface area of a sphere is $S=4\pi r^2$. I'm guessing that the problem wants you to use differentials to get the approximation. $dS = 8\pi r\,dr = 8\pi (4)(.05) = 1.6\pi$. December 7th 2010, 03:54 AM #2 Senior Member Nov 2010 Staten Island, NY
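As a quick numerical check of the differential approximation above (this check is not part of the original thread, and the function name is just illustrative), the exact difference 4π(4.05² − 4²) = 1.61π can be compared with dS = 1.6π:

```python
from math import pi

def sphere_area(r):
    # surface area of a sphere of radius r
    return 4 * pi * r**2

exact = sphere_area(4.05) - sphere_area(4.0)   # 4*pi*(4.05**2 - 4**2) = 1.61*pi
approx = 8 * pi * 4.0 * 0.05                   # dS = 8*pi*r*dr = 1.6*pi
print(exact, approx)                           # ~5.06 vs ~5.03 sq ft, so the differential is a good estimate
```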
{"url":"http://mathhelpforum.com/calculus/165560-areas-two-spheres.html","timestamp":"2014-04-20T14:45:18Z","content_type":null,"content_length":"31201","record_id":"<urn:uuid:23b9ea81-3f10-4a6c-a1b4-20390d547f1f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Black holes - LucasForums Interesting... I found this website about black holes.. Read on.. My friend Penelope is sitting still at a safe distance, watching me fall into the black hole. What does she see? Penelope sees things quite differently from you. As you get closer and closer to the horizon, she sees you move more and more slowly. In fact, no matter how long she waits, she will never quite see you reach the horizon. In fact, more or less the same thing can be said about the material that formed the black hole in the first place. Suppose that the black hole formed from a collapsing star. As the material that is to form the black hole collapses, Penelope sees it get smaller and smaller, approaching but never quite reaching its Schwarzschild radius. This is why black holes were originally called frozen stars: because they seem to 'freeze' at a size just slightly bigger than the Schwarzschild radius. Why does she see things this way? The best way to think about it is that it's really just an optical illusion. It doesn't really take an infinite amount of time for the black hole to form, and it doesn't really take an infinite amount of time for you to cross the horizon. (If you don't believe me, just try jumping in! You'll be across the horizon in eight minutes, and crushed to death mere seconds later.) As you get closer and closer to the horizon, the light that you're emitting takes longer and longer to climb back out to reach Penelope. In fact, the radiation you emit right as you cross the horizon will hover right there at the horizon forever and never reach her. You've long since passed through the horizon, but the light signal telling her that won't reach her for an infinitely long time. There is another way to look at this whole business. In a sense, time really does pass more slowly near the horizon than it does far away. Suppose you take your spaceship and ride down to a point just outside the horizon, and then just hover there for a while (burning enormous amounts of fuel to keep yourself from falling in). Then you fly back out and rejoin Penelope. You will find that she has aged much more than you during the whole process; time passed more slowly for you than it did for her. So which of these two explanation (the optical-illusion one or the time-slowing-down one) is really right? The answer depends on what system of coordinates you use to describe the black hole. According to the usual system of coordinates, called "Schwarzschild coordinates," you cross the horizon when the time coordinate t is infinity. So in these coordinates it really does take you infinite time to cross the horizon. But the reason for that is that Schwarzschild coordinates provide a highly distorted view of what's going on near the horizon. In fact, right at the horizon the coordinates are infinitely distorted (or, to use the standard terminology, "singular"). If you choose to use coordinates that are not singular near the horizon, then you find that the time when you cross the horizon is indeed finite, but the time when Penelope sees you cross the horizon is infinite. It took the radiation an infinite amount of time to reach her. In fact, though, you're allowed to use either coordinate system, and so both explanations are valid. They're just different ways of saying the same thing. In practice, you will actually become invisible to Penelope before too much time has passed. For one thing, light is "redshifted" to longer wavelengths as it rises away from the black hole. 
So if you are emitting visible light at some particular wavelength, Penelope will see light at some longer wavelength. The wavelengths get longer and longer as you get closer and closer to the horizon. Eventually, it won't be visible light at all: it will be infrared radiation, then radio waves. At some point the wavelengths will be so long that she'll be unable to observe them. Furthermore, remember that light is emitted in individual packets called photons. Suppose you are emitting photons as you fall past the horizon. At some point, you will emit your last photon before you cross the horizon. That photon will reach Penelope at some finite time -- typically less than an hour for that million-solar-mass black hole -- and after that she'll never be able to see you again. (After all, none of the photons you emit *after* you cross the horizon will ever get to her.) How do black holes evaporate? This is a tough one. Back in the 1970's, Stephen Hawking came up with theoretical arguments showing that black holes are not really entirely black: due to quantum-mechanical effects, they emit radiation. The energy that produces the radiation comes from the mass of the black hole. Consequently, the black hole gradually shrinks. It turns out that the rate of radiation increases as the mass decreases, so the black hole continues to radiate more and more intensely and to shrink more and more rapidly until it presumably vanishes entirely. Actually, nobody is really sure what happens at the last stages of black hole evaporation: some researchers think that a tiny, stable remnant is left behind. Our current theories simply aren't good enough to let us tell for sure one way or the other. As long as I'm disclaiming, let me add that the entire subject of black hole evaporation is extremely speculative. It involves figuring out how to perform quantum-mechanical (or rather quantum-field-theoretic) calculations in curved spacetime, which is a very difficult task, and which gives results that are essentially impossible to test with experiments. Physicists *think* that we have the correct theories to make predictions about black hole evaporation, but without experimental tests it's impossible to be sure. Now why do black holes evaporate? Here's one way to look at it, which is only moderately inaccurate. (I don't think it's possible to do much better than this, unless you want to spend a few years learning about quantum field theory in curved space.) One of the consequences of the uncertainty principle of quantum mechanics is that it's possible for the law of energy conservation to be violated, but only for very short durations. The Universe is able to produce mass and energy out of nowhere, but only if that mass and energy disappear again very quickly. One particular way in which this strange phenomenon manifests itself goes by the name of vacuum fluctuations. Pairs consisting of a particle and antiparticle can appear out of nowhere, exist for a very short time, and then annihilate each other. Energy conservation is violated when the particles are created, but all of that energy is restored when they annihilate again. As weird as all of this sounds, we have actually confirmed experimentally that these vacuum fluctuations are real. Now, suppose one of these vacuum fluctuations happens near the horizon of a black hole. It may happen that one of the two particles falls across the horizon, while the other one escapes. 
The one that escapes carries energy away from the black hole and may be detected by some observer far away. To that observer, it will look like the black hole has just emitted a particle. This process happens repeatedly, and the observer sees a continuous stream of radiation from the black hole. Won't the black hole have evaporated out from under me before I reach it? We've observed that, from the point of view of your friend Penelope who remains safely outside of the black hole, it takes you an infinite amount of time to cross the horizon. We've also observed that black holes evaporate via Hawking radiation in a finite amount of time. So by the time you reach the horizon, the black hole will be gone, right? Wrong. When we said that Penelope would see it take forever for you to cross the horizon, we were imagining a non-evaporating black hole. If the black hole is evaporating, that changes things. Your friend will see you cross the horizon at the exact same moment she sees the black hole evaporate. Let me try to describe why this is true. Remember what we said before: Penelope is the victim of an optical illusion. The light that you emit when you're very near the horizon (but still on the outside) takes a very long time to climb out and reach her. If the black hole lasts forever, then the light may take arbitrarily long to get out, and that's why she doesn't see you cross the horizon for a very long (even an infinite) time. But once the black hole has evaporated, there's nothing to stop the light that carries the news that you're about to cross the horizon from reaching her. In fact, it reaches her at the same moment as that last burst of Hawking radiation. Of course, none of that will matter to you: you've long since crossed the horizon and been crushed at the singularity. Sorry about that, but you should have thought about it before you jumped in. Here's the original website: Also, I enjoy talking about space. That's why I started this thread. :P
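The statements about clocks running slow and light being redshifted near the horizon can be put into rough numbers with the standard Schwarzschild formulas. This sketch is only an added illustration (constants are rounded and the names are arbitrary), using the million-solar-mass black hole mentioned above:

```python
from math import sqrt

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def schwarzschild_radius(M):
    # horizon radius rs = 2GM/c^2
    return 2 * G * M / c**2

def hover_time_factor(r, M):
    # proper time of a hovering observer per unit far-away time: sqrt(1 - rs/r)
    return sqrt(1 - schwarzschild_radius(M) / r)

M = 1e6 * M_sun
rs = schwarzschild_radius(M)
print(rs)                                 # ~3e9 m, about 3 million km
for r_over_rs in (10, 2, 1.1, 1.01):
    f = hover_time_factor(r_over_rs * rs, M)
    print(r_over_rs, f, 1 / f)            # 1/f is also the redshift factor of light climbing out
```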
{"url":"http://lucasforums.com/showthread.php?t=124847","timestamp":"2014-04-21T12:35:01Z","content_type":null,"content_length":"100116","record_id":"<urn:uuid:0f9eb0ed-5967-473f-adf8-af428c726399>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
18 October 2004 Vol. 9, No. 42 THE MATH FORUM INTERNET NEWS AWM Essay Contest | Mathwords | Mr. R.'s Math Poems AWM ESSAY CONTEST Biographies of Contemporary Women in Mathematics Deadline: October 29, 2004 To increase awareness of women's ongoing contributions to the mathematical sciences, the Association for Women in Mathematics (AWM) is sponsoring an essay contest: "Biographies of Contemporary Women Mathematicians." The essays will be based primarily on an interview with a woman currently working in a mathematical sciences career, whether academic, industrial, or governmental. This contest is open to students in the following categories: Middle School, High School, Undergraduate, and Graduate. At least one winning submission will be chosen from each category. Winners will receive a prize, and their essays will be published online at the AWM web site. Additionally, a grand prize winner will have his or her submission published in the AWM Newsletter. View Past Results: 2003 - http://www.awm-math.org/biographies/contest/2003.html 2002 - http://www.awm-math.org/biographies/contest/2002.html 2001 - http://www.awm-math.org/biographies/contest/2001.html Bruce Simmons, a math teacher at St. Stephen's Episcopal School in Austin, Texas, designed Mathwords for students who need an easy-to-use, easy-to-understand math resource all in one place. It is a comprehensive listing of formulas and definitions from Algebra I to Calculus. The explanations are readable for average math students, and over a thousand illustrations and examples are provided. Use the alphabetical sidebar to browse, or type the word you are seeking in the search field. MR. R.'S MATH POEMS Mr. R. has written math poems to introduce different math concepts to his fourth grade students in a fun way. The stories integrate language and math. Math Poems titles include: - Boo-Hoo Math Saga - Eyes Aren't Squares - My Dog, Multiplication - My Ten Fingers - My Dog, Addition - The Day 1 + 1 = 3 - My Dog, Numerator - Math Wrath - Number Thief - Number Thief II - Mr. Geometry - Missing Math Mystery - Infinity - Jenna's Subtraction Problem - Circle - Angles - PEMDAS - The Teenaged Rectangle - Signed Number Suite - Perimeter Paul CHECK OUT OUR WEB SITE: The Math Forum http://mathforum.org/ Ask Dr. Math http://mathforum.org/dr.math/ Problems of the Week http://mathforum.org/pow/ Mathematics Library http://mathforum.org/library/ Math Tools http://mathforum.org/mathtools/ Teacher2Teacher http://mathforum.org/t2t/ Discussion Groups http://mathforum.org/discussions/ Join the Math Forum http://mathforum.org/join.forum.html Send comments to the Math Forum Internet Newsletter editors Donations http://deptapp.drexel.edu/ia/GOL/giftsonline1_MF.asp Ask Dr. Math Books http://mathforum.org/pubs/dr.mathbooks.html _o \o_ __| \ / |__ o _ o/ \o/ __|- __/ \__/o \o | o/ o/__/ /\ /| | \ \ / \ / \ /o\ / \ / \ / | / \ / \
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews9.42.html","timestamp":"2014-04-16T22:45:23Z","content_type":null,"content_length":"7765","record_id":"<urn:uuid:7212f140-0142-4262-b9c9-656512d8c7b6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
This material (including images) is copyrighted! See my copyright notice for fair use practices. The brightness of stars is specified with the magnitude system. The Greek astronomer Hipparchus devised this system around 150 B.C.E. He put the brightest stars into the first magnitude class, the next brightest stars into the second magnitude class, and so on until he had all of the visible stars grouped into six magnitude classes. The dimmest stars were of sixth magnitude. The magnitude system was based on how bright a star appeared to the unaided eye. By the 19th century astronomers had developed the technology to objectively measure a star's brightness. Instead of abandoning the long-used magnitude system, astronomers refined it and quantified it. They established that a difference of 5 magnitudes corresponds to a factor of exactly 100 times in intensity. The other intervals of magnitude were based on the 19th century belief of how the human eye perceives differences in brightnesses. It was thought that the eye sensed differences in brightness on a logarithmic scale so a star's magnitude is not directly proportional to the actual amount of energy you receive. Now it is known that the eye is not quite a logarithmic detector. Your eyes perceive equal ratios of intensity as equal intervals of brightness. On the quantified magnitude scale, a magnitude interval of 1 corresponds to a factor of 100^(1/5) or approximately 2.512 times the amount in actual intensity. For example, first magnitude stars are about 2.512^(2-1) = 2.512 times brighter than 2nd magnitude stars, 2.512×2.512 = 2.512^(3-1) = 2.512^2 times brighter than 3rd magnitude stars, 2.512×2.512×2.512 = 2.512^(4-1) = 2.512^3 times brighter than 4th magnitude stars, etc. (See the math review appendix for what is meant by the terms ``factor of'' and ``times''.) Notice that you raise the number 2.512 to a power equal to the difference in magnitudes. Also, many objects go beyond Hipparchus' original bounds of magnitude 1 to 6. Some very bright objects can have magnitudes of 0 or even negative numbers and very faint objects have magnitudes greater than +6. The important thing to remember is that brighter objects have smaller magnitudes than fainter objects. The magnitude system is screwy, but it's tradition! (Song from Fiddler on the Roof could be played here.) The apparent brightness of a star observed from the Earth is called the apparent magnitude. The apparent magnitude is a measure of the star's flux received by us. Here are some example apparent magnitudes: Sun = -26.7, Moon = -12.6, Venus = -4.4, Sirius = -1.4, Vega = 0.00, faintest naked eye star = +6.5, brightest quasar = +12.8, faintest object = +30 to +31.

How do you do that? Star A has an apparent magnitude = 5.4 and star B has an apparent magnitude = 2.4. Which star is brighter and by how many times? Star B is brighter than star A because it has a lower apparent magnitude. Star B is brighter by 5.4 - 2.4 = 3 magnitudes. In terms of intensity star B is 2.512^(5.4-2.4) = 2.512^3.0 = approximately 15.8 times brighter than star A. The amount of energy you receive from star B is almost 16 times greater than what you receive from star A.

If the star was at 10 parsecs distance from us, then its apparent magnitude would be equal to its absolute magnitude. The absolute magnitude is a measure of the star's luminosity---the total amount of energy radiated by the star every second.
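The magnitude-to-intensity conversion above is a one-liner to check numerically. The snippet below is only an illustration (the function name is ours); it reproduces the factor of roughly 15.8 worked out in the example:

```python
def intensity_ratio(m_faint, m_bright):
    # each magnitude is a factor of 100**(1/5) ~ 2.512 in intensity
    return 100 ** ((m_faint - m_bright) / 5)

print(intensity_ratio(5.4, 2.4))    # ~15.85, i.e. star B is about 16 times brighter than star A
print(intensity_ratio(6.5, -26.7))  # how much brighter the Sun appears than the faintest naked-eye star
```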
If you measure a star's apparent magnitude and know its absolute magnitude, you can find the star's distance (using the inverse square law of light brightness). If you know a star's apparent magnitude and distance, you can find the star's luminosity (see the table below). The luminosity is a quantity that depends on the star itself, not on how far away it is (it is an "intrinsic" property). For this reason a star's luminosity tells you about the internal physics of the star and is a more important quantity than the apparent brightness. A star can be luminous because it is hot or it is large (or both!). The luminosity of an object = the amount of energy every square meter produces multiplied by its surface area. Recall from the electromagnetic radiation chapter that the amount of energy pouring through every square meter = σ × T^4, where σ is the Stefan-Boltzmann constant and T is the object's surface temperature. Because the surface area is also in the luminosity relation, the luminosity of a bigger star is larger than a smaller star at the same temperature. You can use the relation to get another important characteristic of a star. If you measure the apparent brightness, temperature, and distance of a star, you can determine its size. The figure below illustrates the inter-dependence of measurable quantities with the derived values that have been discussed so far. In the left triangular relationship, the apparent brightness, distance, and luminosity are tied together such that if you know any two of the sides, you can derive the third side. For example, if you measure a glowing object's apparent brightness (how bright it appears from your location) and its distance (with trigonometric parallax), then you can derive the glowing object's luminosity. Or if you measure a glowing object's apparent brightness and you know the object's luminosity without knowing its distance, you can derive the distance (using the inverse square law). In the right triangular relationship, the luminosity, temperature, and size of the glowing object are tied together. If you measure the object's temperature and know its luminosity, you can derive the object's size. Or if you measure the glowing object's size and its temperature, you can derive the glowing object's luminosity---its electromagnetic energy output. Finally, note that a small, hot object can have the same luminosity as a large, cool object. So if the luminosity remains the same, an increase in the size (surface area) of the object must result in a DEcrease in the temperature to compensate. Most famous apparently bright stars are also intrinsically bright (luminous). They can be seen from great distances away. However, most of the nearby stars are intrinsically faint. If you assume we live in a typical patch of the Milky Way Galaxy (using the Copernican principle), then you deduce that most stars are puny emitters of light. The bright stars you can see in even the city are the odd ones in our galaxy! The least luminous stars have absolute magnitudes = +19 and the brightest stars have absolute magnitudes = -8. This is a huge range in luminosity! See the ``How do you do that?'' box below the following table for examples of using the apparent and absolute magnitudes to determine stellar distances and luminosities of stars. Even the intrinsically faintest star's luminosity is much, much greater than all of the power we generate here on the Earth, so a "watt" or even a "megawatt" is too tiny a unit of power to use for the stars.
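Putting the two triangles above into numbers: energy per square meter is σT^4 (the Stefan-Boltzmann law referred to above), the surface area of a sphere of radius R is 4πR^2, and the apparent brightness at distance d follows the inverse square law. The sketch below is only an illustration (constants rounded, names ours); plugging in the Sun's radius and surface temperature gives a luminosity close to the 4 × 10^26 watts quoted next:

```python
from math import pi

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def luminosity(radius_m, temp_K):
    # L = surface area x energy radiated per square meter = 4*pi*R^2 * sigma*T^4
    return 4 * pi * radius_m**2 * SIGMA * temp_K**4

def apparent_brightness(lum_W, distance_m):
    # inverse square law: flux = L / (4*pi*d^2)
    return lum_W / (4 * pi * distance_m**2)

L_sun = luminosity(6.96e8, 5772)              # ~3.8e26 W
print(L_sun)
print(apparent_brightness(L_sun, 1.496e11))   # ~1360 W/m^2 at the Earth (the solar constant)
```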
Star luminosities are specified in units of solar luminosity---relative to the Sun (so the Sun generates one solar luminosity of power). One solar luminosity is about 4 × 10^26 watts.

Magnitudes and Distances for some well-known Stars (from the precise measurements of the Hipparcos mission)

Star              App. Mag.*   Distance (pc)    Abs. Mag.*   Visual Luminosity (rel. to Sun)**
Sun               -26.74       4.84813×10^-6    4.83         1
Sirius            -1.44        2.6371           1.45         22.5
Arcturus          -0.05        11.25            -0.31        114
Vega              0.03         7.7561           0.58         50.1
Spica             0.98         80.39            -3.55        2250
Barnard's Star    9.54         1.8215           13.24        1/2310
Proxima Centauri  11.01        1.2948           15.45        1/17700

*magnitudes measured using the ``V'' filter, see the next section.
**The visual luminosity is the energy output in the ``V'' filter. A total luminosity (``bolometric luminosity'') would encompass the energy in all parts of the electromagnetic spectrum.

How do you do that? A quantity that uses the inverse square law and the logarithmic magnitude system is the ``distance modulus''. The distance modulus = the apparent magnitude - the absolute magnitude. This is equal to 5 × log(distance in parsecs) - 5. The ``log()'' term is the ``logarithm base 10'' function (it is the ``log'' key on a scientific calculator). If you measure a star's apparent magnitude and its distance from its trigonometric parallax, the star's absolute magnitude = the apparent magnitude - 5 × log(distance in parsecs) + 5. For example, Sirius has an apparent magnitude of -1.44 and Hipparcos measured its distance at 2.6371 parsecs, so it has an absolute magnitude of -1.44 - 5×log(2.6371) + 5 = -1.44 - (5×0.421127) + 5 = 1.45.

If you know a star's absolute magnitude, then when you compare it to calibration stars, you can determine its distance. Its distance = 10^[(apparent magnitude - absolute magnitude + 5)/5]. For example, Spica has an apparent magnitude of 0.98 and stars of its type have absolute magnitudes of about -3.55, so Spica is at a distance of 10^[(0.98 - (-3.55) + 5)/5] = 10^1.906 = 80.54 parsecs, which is very close to the trig. parallax value measured by Hipparcos (Spica's absolute magnitude of -3.546 was rounded to -3.55 in the table above).

If you know two stars' absolute magnitudes, you can directly compare their luminosities. The ratio of the two stars' luminosities is Lum.1/Lum.2 = 10^[-0.4 × (abs mag 1 - abs mag 2)], or in an approximate relation: Lum.1/Lum.2 = 2.512^(abs mag 2 - abs mag 1). Remember, the more luminous star has an absolute magnitude that is less than a fainter star's absolute magnitude! Try out this relation on the stars given in the table above.

last updated: November 2, 2010

Author of original content: Nick Strobel
{"url":"http://www.astronomynotes.com/starprop/s4.htm","timestamp":"2014-04-20T20:54:52Z","content_type":null,"content_length":"14870","record_id":"<urn:uuid:983e4c83-a3df-4ea2-a5d0-c65978e905af>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Arrow's impossibility theorem From Electowiki Arrow’s impossibility theorem, or Arrow’s paradox demonstrates the impossibility of designing a set of rules for social decision making that would obey every ‘reasonable’ criterion required by The theorem is named after economist Kenneth Arrow, who proved the theorem in his Ph.D. thesis and popularized it in his 1951 book Social Choice and Individual Values. Arrow was a co-recipient of the 1972 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel (popularly known as the “Nobel Prize in Economics”). The theorem’s content, somewhat simplified, is as follows. A society needs to agree on a preference order among several different options. Each individual in the society has a particular personal preference order. The problem is to find a general mechanism, called a social choice function, which transforms the set of preference orders, one for each individual, into a global societal preference order. This social choice function should have several desirable (“fair”) properties: • unrestricted domain or the universality criterion: the social choice function should create a deterministic, complete societal preference order from every possible set of individual preference orders. (The vote must have a result that ranks all possible choices relative to one another, the voting mechanism must be able to process all possible sets of voter preferences, and it should always give the same result for the same votes, without random selection.) • non-imposition or citizen sovereignty: every possible societal preference order should be achievable by some set of individual preference orders. (Every result must be achievable somehow.) • non-dictatorship: the social choice function should not simply follow the preference order of a single individual while ignoring all others. • positive association of social and individual values or monotonicity: if an individual modifies his or her preference order by promoting a certain option, then the societal preference order should respond only by promoting that same option or not changing, never by placing it lower than before. (An individual should not be able to hurt an option by ranking it higher.) • independence of irrelevant alternatives: if we restrict attention to a subset of options, and apply the social choice function only to those, then the result should be compatible with the outcome for the whole set of options. (Changes in individuals’ rankings of “irrelevant” alternatives [i.e., ones outside the subset] should have no impact on the societal ranking of the “relevant” Arrow’s theorem says that if the decision-making body has at least two members and at least three options to decide among, then it is impossible to design a social choice function that satisfies all these conditions at once. Another version of Arrow’s theorem can be obtained by replacing the monotonicity criterion with that of: • unanimity or Pareto efficiency: if every individual prefers a certain option to another, then so must the resulting societal preference order. This statement is stronger, because assuming both monotonicity and independence of irrelevant alternatives implies Pareto efficiency. With a narrower definition of “irrelevant alternatives” which excludes those candidates in the Smith set, some Condorcet methods meet all the criteria. Systems which violate only one of Arrow's criteria MCA-P, as a rated rather than ranked system, violates only unrestricted domain. 
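A concrete way to see the tension between these criteria is the classic Condorcet cycle: with three voters and three options, pairwise majority voting can fail to produce any transitive societal ranking. The sketch below is only an illustration added for that point; the preference profile and the function names are not from the article:

```python
from itertools import combinations

# Three voters, three candidates; each ballot lists candidates from most to least preferred.
ballots = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def majority_prefers(x, y):
    # positive margin means a majority ranks x above y
    margin = sum(+1 if b.index(x) < b.index(y) else -1 for b in ballots)
    return margin > 0

for x, y in combinations("ABC", 2):
    print(f"{x} beats {y}:", majority_prefers(x, y), f"| {y} beats {x}:", majority_prefers(y, x))
# A beats B, B beats C, and C beats A: no transitive societal order exists for this profile.
```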
A system which arbitrarily chose two candidates to go into a runoff would violate only sovereignty. Random ballot violates only non-dictatorship. None of the methods described on this wiki violate only monotonicity. The Schulze method violates only independence of irrelevant alternatives, although it actually satisfies the similar independence of Smith-dominated alternatives criterion. See also External links This page uses Creative Commons Licensed content from Wikipedia (view authors).
{"url":"http://wiki.electorama.com/wiki/Arrow's_impossibility_theorem","timestamp":"2014-04-20T01:28:04Z","content_type":null,"content_length":"21502","record_id":"<urn:uuid:bf3adcf0-8aac-4a15-8410-ee1fb7a77400>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Valuation Multiples Explained What exactly is a valuation multiple and why is it important to understand? When you purchase a home, you typically calculate the price per square foot that you are paying for the house (by the way, price per square foot is a multiple). Well for purchasing a stock, an investor needs to understand the valuation multiple they are paying for a stock. Without understanding the multiples and what they mean, you are only betting on the qualitative aspect or the story of the stock, without understanding the basic quantitative aspects or valuation metrics. You are essentially gambling and no longer investing. This article attempts to simply what valuation multiples mean and how you can apply them when considering a stock investment. Multiples are Like the Inverse of a Dividend Yield Most people have heard about price to earnings (P/E), however there are also price-to-earnings-to-growth (P/E/G), price-to-book (P/B), and total enterprise value-to-EBITDA (TEV/EBITDA), just to name a few. What exactly does a multiple represent though? Well, the best way to think about it is that just like you look at interest rates to compare different bonds, you need to look at multiples to compare different stocks (you can also look at dividend yield for stocks, but a lot of stocks do not pay dividends so not as comparable). In fact, multiples are like the inverse of a dividend yield. For dividend yield for bonds, you take income divided by the price of the bond. For multiples for stocks, you take the price divided by the income of the stock. Multiples are Like Price per SF When Buying a Home Let's also think of multiples in terms of buying a home. When you're looking for a house, you will first look at the list price of the home, let's say it is worth $250,000. You will next look at the total square feet of the home, let's say its 1,000 SF. You will then compare the list price relative to the total square feet to see what you are paying on a "price per square foot" basis. In this case, you're paying $250 per SF ($250,000 divided by 1,000 SF). That $250 price per square foot is a type of multiple, price as a multiple of square foot. By calculating the price per square foot multiple, you now can look at relative prices across comparable homes. Say you find another house in the same community with similar amenities that is selling at $300,000, but it has a total of 1,500 SF. At first, this 2nd house may seem more expensive, given it's listed at $300,000 versus $250,000. However, if you look at the price per square foot multiple, the 2nd house is actually cheaper by 20%, given that its multiple is only $200/SF ($300,000 divided by 1,500 SF) versus $250/SF for the 1st house. You would never buy a home without understanding how much price per square foot for the home, as well as how much price per square foot for comparable homes in the same neighborhood. The same applies for buying a stock, you need to understand how much price per earnings you are paying for the stock, as well as price per earnings for comparable stocks in the same industry. Equity Value versus Enterprise Value When you look at a multiple, you need to make sure you are comparing apples-to-apples. Meaning when you're looking at Net Income (which are earnings only for equity holders), you need to compare that to Equity Value or Market Capitalization by looking at a P/E multiple. 
When you're looking at EBITDA (which are earnings for all stake holders including equity, debt, preferred), then you need to compare that with Total Enterprise Value, or "TEV", by looking at a TEV / EBITDA multiple. Before we define and explain the various multiples, we need to first highlight the difference between Equity Value and Enterprise Value. Equity Value represents the market value of equity in the firm. It is straight forward to calculate, just take the current share price multiplied by the shares outstanding (fully diluted). Equity Value does NOT include debt, preferred, minority interest. It is the equivalent to the home equity of a house, which does not represent the entire home's value and does not include the mortgage debt. Enterprise Value (or "Total Enterprise Value" or "TEV" or "Firm Value") represents the entire value of the firm, including not only equity value but also debt value, preferred value and minority interest value. This is the equivalent of the entire home value of a home, including both mortgage debt and equity built up in the house. As mentioned above, Equity Value is like your Home Equity and Enterprise Value is like your Entire Home Value (including Mortgage Debt). In addition, Equity Value is like the Stockholder's Equity on your Balance Sheet, and Enterprise Value is like Total Assets on your Balance Sheet. As covered in the Accounting Tutorial, Assets = Liabilities + Stockholder's Equity. This is the same as Enterprise Value = Net Debt/Preferred/Minority Interest + Equity Value. The only difference is that Assets and Stockholder's Equity represent the Book Value (or Accounting Value shown on their financial statements), while Enterprise Value and Equity Value represent the Market Value (calculated based on the current share price listed in the market). The following terms or multiples are associated with either Equity Holders (Equity Value) or All Stake Holders (Enterprise Value): Equity Value Multiples a) P/E Multiple (P/E = current stock price divided by the earnings per share): P/E ratio helps to indicate whether a stock is overvalued or undervalued, relative to its peers. Consider the following example, Company A has a stock price of $10.00, while Company B has a stock price of $20.00. If you just looked at the stock price, then Company A appears "cheaper". However, because Company B's P/E ratio is 8.0x ($20.00 divided by $2.50) compared to Company A's P/E ratio of 10.0x ($10.00 divided by $1.00), Company B is cheaper, at least on a P/E multiple basis. But what does 10.0x P/E mean? It means that you are paying a price of 10 times the company's annual net income of $1.00, which means in 10 years, you will have earned back your money. This assumes EPS is constant and does not increase/decrease for the next 10 year. We'll talk about P/E/G multiple next which accounts for growth in EPS. Invert the P/E: Another way to think about P/E is to invert the 10.0x multiple, or take $1.00 EPS divided by the $10.00 per share you paid for it. This now shows that you earn 10% return on your investment every year, compared to probably 2% you earn if you put it in a CD. The extra 8% premium is to compensate you for investing in a risky asset class (stocks versus a guaranteed CD). Equity Value / Net Income Multiple: There is no difference between P/E and Equity Value / Net Income. It is the same multiple, except the latter takes into account total shares outstanding. 
It is still helpful to know what a stock's equity value or market capitalization is (i.e: a company with $200mm market cap probably has much more room to grow than a company with $200bn market cap), but you do not need to calculate market capitalization to get to a P/E multiple. The lower the P/E, the better. P/E multiple of less than 10x generally indicates the stock is cheap or undervalued. b) P/E/G Multiple (P/E multiple divided by growth rate divided by 100): Because different companies have different growth rates, therefore another helpful multiple to look at is P/E compared to expected EPS growth rate for the next 5 years. The actual calculation of P/E/G is P/E multiple divided by the growth rate, then divided by 100 (which helps to convert the growth rate, which is a percentage, to a metric that can be compared against P/E). If we look at our previous example, we see that Company A has projected 5 year EPS growth of 10%, compared to only 6% for Company B. When we calculate the P/E/G ratios, we see that Company A's PEG is 1.00x (10.0x divided by 10% divided by 100) compared to Company B's PEG of 1.33x (8.0x divided by 6% divided by 100). Therefore, although Company B initially looked cheapest when we looked only at P/ E ratio (8x compared to 10x), we now see that Company A is actually the cheapest when we factor in the projected growth of the two companies, or when we look at the P/E/G ratios (1.00x compared to Different companies will grow at different growth rates given where they are in their business cycle or what industry they are in (tech companies like Facebook (FB) or Amazon (AMZN) will have high growth rates of +20-30%, compared to mature utilities companies like Duke Energy (DUK) with growth rates of 1-5%), or if they have a new product line coming out (say if Apple (AAPL) introduces new iWatch or iTV). Also affecting growth is how much dividend the company pays the shareholders versus how much earnings is plowed back into the company for growth (Google (GOOG) pays no dividend even though they have cash because they're focused on growth, compared to Altria (MO) which pays 6% dividend which is 85% of net income, meaning only 15% of net income is retained and plowed back into business for The lower the P/E/G, the better. P/E/G multiple of less than 1.0x generally indicates the stock is cheap or undervalued. c) Price / Free Cash Flow Multiple: EPS is an accounting definition of earnings and does not necessarily reflect how much cash earnings the company made in one year. As a result, some value investors prefer to look at a cash flow multiple, either Price / Free Cash Flow or TEV / EBITDA multiple. Note the difference between EBITDA and FCF. EBITDA = Earnings before Interest, Taxes and Depreciation and Amortization. EBITDA is for all stakeholders given that it is before Interest and therefore you need to use TEV. FCF = Net Income + Depreciation and Amortization - Capital Expenditures - Working Capital Needs. FCF is for equity holders only (because it starts with Net Income which is after interest expense) and therefore you need to use Equity Value. Price /FCF is basically a more pure or honest P/E multiple, given that FCF represents true cash earnings or cash flow and not just an accounting earnings number. However, unlike EPS which has to be reported by public companies in their SEC filings, FCF per share has to be interpreted by an outside analyst. Therefore, it is a more difficult metric to calculate, given the interpretation required. 
Lastly, I'd like to make one more subtle point regarding levered FCF versus unlevered FCF. The FCF above is levered FCF, because it is defined as Net Income + Depreciation and Amortization - Capital Expenditures - Working Capital Needs. The FCF used in a Discounted Cash Flow Analysis ("DCF") is unlevered FCF, which basically means it adds back Interest Expense (and therefore for DCF, you use a weighted average cost of capital or WACC to discount, which is the average cost for both equity and debt holders). Levered FCF is cash flow just for equity holders. Unlevered FCF is for both equity and debt holders. d) Price / Book Value Multiple: Warren Buffett of Berkshire Hathaway (BRK.A) likes to reference Berkshire's current book value in his annual shareholder letters as a minimum value of what his company is worth. P/B is the equity value (share price x shares outstanding) divided by the shareholder's equity book value (found on the company's balance sheet). Usually companies are valued at a higher market capitalization than their accounting book value, because accounting rules tend to be very conservative. Therefore, you will usually see P/B of 2.0x or more. P/B around 1.0x is usually considered cheap (it means the market is valuing the company at its equity book value). Enterprise Value Multiples TEV/Revenue multiples are best associated with the heyday of the Internet boom, where Internet companies were going public without profit (therefore you could only value them on a revenue multiple basis), or sometimes even without sales (then, you would have to get creative and value them on a TEV / customer or eyeball multiple basis). TEV-to-Sales is sometimes referred to as "Price to Sales", which is technically incorrect. Remember, "Price" refers to Equity Value, but for Sales and EBITDA multiples, you need to compare it to Enterprise Value. You cannot compare Price (or Equity Value) to Sales. Company A's TEV / Revenue multiple is 1.0x ($100 TEV divided by $100 Revenue). A few things to note. If a company has no debt, preferred or cash, then Equity Value = Enterprise Value. Note also the relationship between Revenue and Net Income, as they compare to their respective multiples of TEV/Revenue and P/E. In this example, since net income is 10% of revenue, then the P/E multiple is 10x compared to the TEV/Revenue multiple of 1.0x. TEV / EBITDA Multiple: TEV / EBITDA is the other free cash flow multiple we can look at, besides P/FCF. TEV/EBITDA looks at the Total Enterprise Value divided by EBITDA. EBITDA is the total cash flow available to all stakeholders before it is divided up between debt holders, preferred holders, the government (therefore before taxes), and finally equity holders. EBITDA is defined as Earnings Before Interest, Taxes, Depreciation and Amortization. It is an approximation for total cash flows available to the firm. The best way to explain D&A is to think about when you buy a car. Every year, the car value or its blue book value depreciates, because of wear and tear of the automobile from use. Same thing happens with Fixed Assets or Equipment within a company; the equipment depreciates in value every year. If the equipment cost $100,000 to purchase and it is supposed to last for 10 years, then it will have annual depreciation expense of $10,000 ($100,000 divided by 10). Amortization is the same concept but for Intangible Assets (like patent rights, goodwill or brand of the company, etc).
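To see how the enterprise-value pieces fit together numerically, here is a minimal sketch of assembling TEV and a TEV/EBITDA multiple from the components described above. The balance-sheet numbers are made up purely for illustration (they are not from the article):

```python
def enterprise_value(equity_value, debt, preferred, minority_interest, cash):
    # TEV = equity value + debt + preferred + minority interest - cash
    return equity_value + debt + preferred + minority_interest - cash

# Hypothetical company: $50 share price, 10 million shares, $200mm debt, $50mm cash
equity_value = 50.0 * 10_000_000          # market capitalization
tev = enterprise_value(equity_value, 200e6, 0.0, 0.0, 50e6)

ebitda = 100e6
print(tev / ebitda)                       # ~6.5x TEV/EBITDA for these made-up numbers
```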
Hopefully this article helped to simplify the concept of valuation multiples. Related to this article, I plan on publishing on a weekly basis the valuation multiples (or comp sheet) for the 30 companies in the Dow Jones Index (DIA), to help readers get easy access to multiples for some of the major companies in the markets today. Calculating these multiples does take a bit of time, so given the limited time that I can commit to Seeking Alpha, the 30 companies in the Dow Jones is about all I can handle on a regular basis for now. The 30 companies span various industries including technology, financials, industrials, consumer goods, media & telecom, healthcare, and energy, so it should offer a good overview of current multiples for various sectors.
{"url":"http://seekingalpha.com/article/1315521-valuation-multiples-explained?v=1364986855&source=tracking_notify","timestamp":"2014-04-19T04:45:42Z","content_type":null,"content_length":"90871","record_id":"<urn:uuid:08dc6153-a44d-48dc-83f5-990f8b9c86e5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Tessellation Question [Archive] - OpenGL Discussion and Help Forums 06-04-2007, 02:01 PM I need some help with concave polygons. I wrote a program that can take 2d spatial data and create a list of vertices (counter clockwise) to define a contour. In this contouring of my data, I can have either convex or concave polygons... so GL_POLYGON is not an option in the case of a concave polygon (unless I decompose the polygon into convex primitives... which I don't intend to do). My question is this: How can I most efficiently render these polygon contours which consist of both concave and convex polygons? I would really like to use VBO's, since they're so fast in rendering and my data is large... but it looks like I'm forced to use gluTessCallback but I don't know how to use gluTessCallback with VBO's or vertex arrays (or it's impossible). Any help is appreciated...
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-132054.html","timestamp":"2014-04-18T03:08:15Z","content_type":null,"content_length":"6438","record_id":"<urn:uuid:81c9592c-540b-4a29-a9e4-19402ca84853>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical mechanics of nonlinear nonequilibrium financial markets Results 1 - 10 of 28 , 1989 "... This paper contributes to this methodology by presenting an improvement over previous algorithms. Sections II and III give a short outline of previous Boltzmann annealing (BA) and fast Cauchy fast annealing (FA) algorithms. Section IV presents the new very fast algorithm. Section V enhances this alg ..." Cited by 181 (33 self) Add to MetaCart This paper contributes to this methodology by presenting an improvement over previous algorithms. Sections II and III give a short outline of previous Boltzmann annealing (BA) and fast Cauchy fast annealing (FA) algorithms. Section IV presents the new very fast algorithm. Section V enhances this algorithm with a re-annealing modification found to be extremely useful for multi-dimensional parameter-spaces. This method will be referred to here as very fast reannealing (VFR) - PHYS. REV. A , 1991 "... A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single neuron lev ..." Cited by 47 (41 self) Add to MetaCart A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. In this paper, a more stringent test is placed on SMNI: The algebraic and numerical algorithms previously developed in this and similar systems are brought to bear to fit large sets of EEG and evoked potential data being collected to investigate genetic predispositions to alcoholism and to extract brain “signatures” of short-term memory. Using the numerical algorithm of Very Fast Simulated Re-Annealing, it is demonstrated that SMNI can indeed fit this data within experimentally observed ranges of its underlying neuronal-synaptic parameters, and use the quantitative modeling results to examine physical neocortical mechanisms to discriminate between high-risk and low-risk populations genetically predisposed to alcoholism. Since this first study is a control to span relatively long time epochs, similar to earlier attempts to establish such correlations, this discrimination is inconclusive because of other neuronal activity which can mask such effects. However, the SMNI model is shown to be consistent - Machine Learning , 2001 "... Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent ..." Cited by 47 (0 self) Add to MetaCart Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. 
We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals. We introduce a new intelligent signal processing method which addresses the difficulties. The method proposed uses conversion into a symbolic representation with a selforganizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for - Rev. A , 1983 "... A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. Th ..." Cited by 37 (34 self) Add to MetaCart A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearestneighbor columnar interactions are all consistent with well-established empirical rules of human shortterm memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions. - in Neocortical Dynamics and Human EEG Rhythms, (Edited by P.L. Nunez , 1995 "... 14. Statistical mechanics of multiple scales of neocortical interactions ..." - Mathl. Comput. Modelling , 1991 "... Recent work in statistical mechanics has developed new analytical and numerical techniques to solve coupled stochastic equations. This paper applies the very fast simulated re-annealing and path-integral methodologies to the estimation of the Brennan and Schwartz two-factor term structure model. It ..." Cited by 32 (28 self) Add to MetaCart Recent work in statistical mechanics has developed new analytical and numerical techniques to solve coupled stochastic equations. This paper applies the very fast simulated re-annealing and path-integral methodologies to the estimation of the Brennan and Schwartz two-factor term structure model. It is shown that these methodologies can be utilized to estimate more complicated n-factor nonlinear models. 1. CURRENT MODELS OF TERM STRUCTURE The modern theory of term structure of interest rates is based on equilibrium and arbitrage models in which bond prices are determined in terms of a few state variables. The one-factor models of Cox, Ingersoll and Ross (CIR) [1-4], and the two-factor models of Brennan and Schwartz (BS) [5-9] have been instrumental in the development of the valuation of interest dependent securities. The assumptions of these models include: • Bond prices are functions of a number of state variables, one to several, that follow Markov processes. • Inv estors are rational and prefer more wealth to less wealth. • Inv estors have homogeneous expectations. - IEEE Trans. Biomed. Eng , 1985 "... 
Abstract—An approach is explicitly formulated to blend a local with a global theory to investigate oscillatory neocortical firings, to determine the source and the information-processing nature of the alpha rhythm. The basis of this optimism is founded on a statistical mechanical theory of neocortica ..." Cited by 29 (27 self) Add to MetaCart Abstract—An approach is explicitly formulated to blend a local with a global theory to investigate oscillatory neocortical firings, to determine the source and the information-processing nature of the alpha rhythm. The basis of this optimism is founded on a statistical mechanical theory of neocortical interactions which has had success in numerically detailing properties of short-term-memory (STM) capacity at the mesoscopic scales of columnar interactions, and which is consistent with other theory deriving similar dispersion relations at the macroscopic scales of electroencephalographic (EEG) and magnetoencephalographic (MEG) activity. "... Adaptive Simulated Annealing (ASA) is a C-language code developed to statistically find the best global fit of a nonlinear constrained non-convex cost-function over a D-dimensional space. This algorithm permits an annealing schedule for "temperature" T decreasing exponentially in annealing-time k, T = T_0 exp(−c k^(1/D)). The introduction of re-annealing also permits adaptation to changing sensitivities in the multi-dimensional parameter-space. This annealing schedule is faster than fast Cauchy annealing, where T = T_0/k, and much faster than Boltzmann annealing, where T = T_0/ln k. ASA has over 100 OPTIONS to provide robust tuning over many classes of nonlinear stochastic systems. ..." Cited by 25 (2 self) Add to MetaCart Adaptive Simulated Annealing (ASA) is a C-language code developed to statistically find the best global fit of a nonlinear constrained non-convex cost-function over a D-dimensional space. This algorithm permits an annealing schedule for "temperature" T decreasing exponentially in annealing-time k, T = T_0 exp(−c k^(1/D)). The introduction of re-annealing also permits adaptation to changing sensitivities in the multi-dimensional parameter-space. This annealing schedule is faster than fast Cauchy annealing, where T = T_0/k, and much faster than Boltzmann annealing, where T = T_0/ln k. ASA has over 100 OPTIONS to provide robust tuning over many classes of nonlinear stochastic systems. - REV. D , 1984 "... Several studies in quantum mechanics and statistical mechanics have formally established that nonflat metrics induce a difference in the potential used to define the path-integral Lagrangian from that used to define the differential Schrödinger Hamiltonian. A recent study has described a statistical ..." Cited by 16 (16 self) Add to MetaCart Several studies in quantum mechanics and statistical mechanics have formally established that nonflat metrics induce a difference in the potential used to define the path-integral Lagrangian from that used to define the differential Schrödinger Hamiltonian. A recent study has described a statistical mechanical biophysical system in which this effect is large enough to be measurable. This study demonstrates that the nucleon-nucleon velocity-dependent interaction derived from meson exchanges is a quantum mechanical system in which this effect is also large enough to be measurable. - EEG." In International Conference on Neural Information Processing (ICONIP'96) , 1996 "... Abstract—A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabi ..."
Cited by 16 (16 self) Add to MetaCart Abstract—A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. 1.
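As a quick illustration of the three cooling schedules quoted in the ASA abstract above, here is a small Python sketch; the constants T0, c and D are arbitrary illustrative values, not taken from any of the cited papers.

# Illustration of the annealing schedules quoted in the ASA abstract above.
# T0, c and D are made-up illustrative values, not from the cited papers.
from math import exp, log

T0, c, D = 1.0, 1.0, 4        # initial temperature, control constant, dimension

def asa(k):        return T0 * exp(-c * k ** (1.0 / D))   # T = T0 exp(-c k^(1/D))
def cauchy(k):     return T0 / k                          # fast (Cauchy) annealing
def boltzmann(k):  return T0 / log(k)                     # Boltzmann annealing

for k in (10, 100, 1000, 10000):
    print(k, round(asa(k), 6), round(cauchy(k), 6), round(boltzmann(k), 6))
# ASA's exponential-in-k^(1/D) schedule eventually drops below both of the
# others, which is the sense in which it is "faster" for large k.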
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2437","timestamp":"2014-04-20T03:02:03Z","content_type":null,"content_length":"38163","record_id":"<urn:uuid:24bef87c-4181-4633-bbae-bfa53d59e5e8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
13.6 Comparing Projects with Unequal Lives
PLEASE NOTE: This book is currently in draft form; material is not final.
Learning Objectives
1. Explain the difficulty in choosing between mutually exclusive projects with unequal lives.
2. Calculate the equivalent annual annuity (EAA) of a project and use it to evaluate which project is superior.
Ruth is deciding which shingles to put on her roof. Shingle A costs $1 per sq. ft. and is rated to last 10 years. Shingle B costs $1.40 per sq. ft. and is rated to last 15 years. If Ruth intends to stay in her house for the rest of her life, which shingle should Ruth select?
One particularly troublesome comparison that arises often is when two repeatable mutually exclusive projects have different time lengths. For example, we can use a cheaper substitute, but it won't last as long, so we'll need to replace it more frequently. How do we know which project is better? If the projects are either independent or not repeatable, we can use NPV confidently. All positive NPVs should be selected if they are independent, and the highest NPV will indicate the best choice if they aren't repeatable. But it can be the case that the highest NPV project can be inferior to a shorter project with a lower NPV. To analyze this problem, we need to calculate the equivalent annual annuity (EAA), which is the steady cash payment received by an annuity with the same length and NPV as the project. For example, we know that Gator Lover's Ice Cream Project A lasted for 5 years and had an NPV of $8,861.80 at a rate of 10%. If we solve for the yearly payment of an annuity with a PV of $8,861.80, r = 10%, n = 5 years, and FV = 0, we get an EAA of $2,337.72. Thus, we should be indifferent between receiving the cash flows of Project A and receiving $2,337.72 per year for 5 years (since they both have the same NPV)! Once EAAs are calculated for all projects being considered, it's a simple matter of picking the highest one.
Key Takeaways
• If projects are independent or not repeatable, the impact of differing life is irrelevant.
• If the projects are mutually exclusive and repeatable, then the impact of the differing life must be accounted for by comparing their EAAs.
Exercises
1. Compute and compare the following projects' NPVs and EAAs at a 10% discount rate. Project J costs $100,000 and earns $50,000 each year for five years. Project K costs $200,000 and earns $150,000 in the first year and then $75,000 for each of the next three years. Project L costs $25,000 and earns $20,000 each year for two years.
2. Which project should be selected if they are mutually exclusive and repeatable?
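A short Python sketch of the EAA calculation described in this section, using the chapter's own Project A numbers. For the exercise projects, the assumption that each project's first inflow arrives one year after the initial cost is my reading, not stated explicitly above.

# Sketch of the EAA calculation described above.
def npv(rate, cashflows):
    """cashflows[0] is the time-0 flow (usually the negative cost)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def eaa(npv_value, rate, years):
    """Payment of an ordinary annuity whose PV equals the project's NPV."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return npv_value / annuity_factor

print(round(eaa(8861.80, 0.10, 5), 2))        # about 2337.72, as in the text

# Exercise 1 (assuming the first inflow comes one year after the cost):
for name, flows, life in [("J", [-100_000] + [50_000] * 5, 5),
                          ("K", [-200_000, 150_000] + [75_000] * 3, 4),
                          ("L", [-25_000] + [20_000] * 2, 2)]:
    v = npv(0.10, flows)
    print(name, round(v, 2), round(eaa(v, 0.10, life), 2))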
{"url":"http://2012books.lardbucket.org/books/finance-for-managers/s13-06-comparing-projects-with-unequa.html","timestamp":"2014-04-16T07:14:11Z","content_type":null,"content_length":"10636","record_id":"<urn:uuid:8378ddbc-9c48-45fa-9a54-cb51749ddb5a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Mechanising set theory: cardinal arithmetic and the axiom of choice Larry Paulson, Krzysztof Grabczewski July 1995, 33 pages Fairly deep results of Zermelo-Fraenkel (ZF) set theory have been mechanised using the proof assistant Isabelle. The results concern cardinal arithmetic and the Axiom of Choice (AC). A key result about cardinal multiplication is K*K=K, where K is any infinite cardinal. Proving this result required developing theories of orders, order-isomorphisms, order types, ordinal arithmetic, cardinals, etc.; this covers most of Kunen, Set Theory, Chapter I. Furthermore, we have proved the equivalence of 7 formulations of the Well-ordering Theorem and 20 formulations of AC; this covers the first two chapters of Rubin and Rubin, Equivalents of the Axiom of Choice. The definitions used in the proofs are largely faithful in style to the original mathematics. Full text PDF (0.3 MB) PS (0.1 MB) BibTeX record
@TechReport{UCAM-CL-TR-377,
  author = {Paulson, Larry and Grabczewski, Krzysztof},
  title = {{Mechanising set theory: cardinal arithmetic and the axiom of choice}},
  year = 1995,
  month = jul,
  url = {http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-377.pdf},
  institution = {University of Cambridge, Computer Laboratory},
  number = {UCAM-CL-TR-377}
}
{"url":"http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-377.html","timestamp":"2014-04-16T10:47:57Z","content_type":null,"content_length":"4963","record_id":"<urn:uuid:d7991f85-72df-4714-88d6-3471252abb8e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Compression Ratio and Clearance Volume
Clearance volume is the volume remaining in the cylinder when the piston is at TDC. Because of the irregular shape of the combustion chamber (volume in the head), the clearance volume is calculated empirically by filling the chamber with a measured amount of fluid while the piston is at TDC. This volume is then added to the displacement volume in the cylinder to obtain the cylinder's total volume.
An engine's compression ratio is determined by dividing the volume of the cylinder when the piston is at BDC (lowest point of travel) by the volume of the cylinder with the piston at TDC (highest point of travel), as shown in Figure 15. This can be calculated by using the following formula:
Compression Ratio = (displacement volume + clearance volume) / clearance volume
Figure 15 Compression Ratio
Horsepower
Power is the amount of work done per unit time or the rate of doing work. For a diesel engine, power is rated in units of horsepower. Indicated horsepower is the power transmitted to the pistons by the gas in the cylinders and is mathematically calculated.
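A minimal numerical illustration of the formula above, sketched in Python; the volumes are made-up example values, not taken from the handbook.

# Tiny illustration of the compression ratio formula above (example numbers only).
displacement_volume = 1000.0   # cubic centimetres swept by the piston
clearance_volume = 60.0        # cubic centimetres remaining at TDC

compression_ratio = (displacement_volume + clearance_volume) / clearance_volume
print(round(compression_ratio, 1))   # about 17.7:1, in the typical diesel range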
{"url":"http://nuclearpowertraining.tpub.com/h1018v1/css/h1018v1_38.htm","timestamp":"2014-04-21T04:32:08Z","content_type":null,"content_length":"20409","record_id":"<urn:uuid:a5fe120e-4a1c-423e-b820-134b7eabf7bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Longitudinal dispersion in laboratory and natural streams Fischer, Hugo B. (1966) Longitudinal dispersion in laboratory and natural streams. California Institute of Technology . (Unpublished) http://resolver.caltech.edu/CaltechKHR:KH-R-12 See Usage Policy. Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechKHR:KH-R-12 This study concerns the longitudinal dispersion of fluid particles which are initially distributed uniformly over one cross section of a uniform, steady, turbulent open channel flow. The primary focus is on developing a method to predict the rate of dispersion in a natural stream. Taylor's method of determining a dispersion coefficient, previously applied to flow in pipes and two-dimensional open channels, is extended to a class of three-dimensional flows which have large width-to-depth ratios, and in which the velocity varies continuously with lateral cross-sectional position. Most natural streams are included. The dispersion coefficient for a natural stream may be predicted from measurements of the channel cross-sectional geometry, the cross-sectional distribution of velocity, and the overall channel shear velocity. Tracer experiments are not required. Large values of the dimensionless dispersion coefficient D / rU* are explained by lateral variations in downstream velocity. In effect, the characteristic length of the cross section is shown to be proportional to the width, rather than the hydraulic radius. The dimensionless dispersion coefficient depends approximately on the square of the width to depth ratio. A numerical program is given which is capable of generating the entire dispersion pattern downstream from an instantaneous point or plane source of pollutant. The program is verified by the theory for two-dimensional flow, and gives results in good agreement with laboratory and field experiments. Both laboratory and field experiments are described. Twenty-one laboratory experiments were conducted: thirteen in two-dimensional flows, over both smooth and roughened bottoms; and eight in three-dimensional flows, formed by adding extreme side roughness to produce lateral velocity variations. Four field experiments were conducted in the Green-Duwamish River, Washington. Both laboratory and flume experiments prove that in three-dimensional flow the dominant mechanism for dispersion is lateral velocity variation. For instance, in one laboratory experiment the dimensionless dispersion coefficient D/rU* (where r is the hydraulic radius and U* the shear velocity) was increased by a factor of ten by roughening the channel banks. In three-dimensional laboratory flow, D/rU* varied from 190 to 640, a typical range for natural streams. For each experiment, the measured dispersion coefficient agreed with that predicted by the extension of Taylor's analysis within a maximum error of 15%. For the Green-Duwamish River, the average experimentally measured dispersion coefficient was within 5% of the prediction. Item Type: Report or Paper (Technical Report) Group: W. M. Keck Laboratory of Hydraulics and Water Resources Record Number: CaltechKHR:KH-R-12 Persistent URL: http://resolver.caltech.edu/CaltechKHR:KH-R-12 Usage Policy: You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format. ID Code: 25984 Collection: CaltechKHR Deposited By: Imported from CaltechKHR Deposited On: 14 Jun 2004 Last Modified: 26 Dec 2012 13:50
{"url":"http://authors.library.caltech.edu/25984/","timestamp":"2014-04-20T13:26:12Z","content_type":null,"content_length":"23960","record_id":"<urn:uuid:947c27d1-4f23-446f-bf2f-07fcc41d8762>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
d P PLOT and PLOT3D Data Structures Important: This page describes the internal structures used by Maple to construct 2-D and 3-D plots. It is recommended that the plot or plot3d commands, or the commands in the plots package be used to generate plots. • The Maple plotting functions, plot, plot3d, and others, produce PLOT and PLOT3D data structures describing the images to be displayed. The plots[display] command also produces a _PLOTARRAY structure to represent an array of plots. These data structures are understood by the Maple prettyprinter, which prints the plots in a form like any other Maple object. Because the structures are Maple expressions, they can be manipulated, saved, and printed like any other expression. The remainder of this section describes the form and content of the data structures. The data structures can be viewed using the Maple lprint command. • Each graphics image is represented by a function call of the form PLOT(...), PLOT3D(...) or _PLOTARRAY(...). The data values within these function calls specify the objects to be drawn (for example, points or lines) and options that control how the objects are to be drawn (for example, axes style or color). The _PLOTARRAY structure has as its first argument a Matrix or a list of lists containing PLOT or PLOT3D structures. For example, PLOT(CURVES([[0, 0], [0, 1], [0.5, 0.5], [0, 0]]), COLOR(RGB, 0, 0, 1)) describes a blue triangle in 2-D. As in this example, all of the objects and options are represented by function calls where the function name is fully capitalized. Note: The structure name COLOUR may be used in place of COLOR. • Data structure names introduced in Maple 10 or later are prefixed by an underscore. • The TRANSPARENCY, _GLOSSINESS and _AXIS[n] structures are not available in all interfaces. • If the same option structure is provided more than once, with different values, then the final value specified is generally the one used. 2-D and 3-D Plot Objects There are four different object types for 2-D plotting: curves, points, polygons, and text. These four are also available for 3-D plotting. In addition, grid, mesh, and isosurface structures can be used in 3-D plots. When data is provided as an Array or Matrix, the Array or Matrix must have datatype equal to float[8]. Furthermore, all indices in the Array must start at 1. When data is provided as a list, the elements must be floating-point values. The CURVES structure defines one or more curves in 2-D or 3-D space, formed by joining a sequence of sample points. Each 2-D curve A1, A2, ... is represented by an n by 2 Matrix or Array or a list of the form [[x1, y1], [x2, y2], ..., [xn, yn]]. Each 3-D curve is represented by an n by 3 Matrix or Array or a list of the form [[x1, y1, z1], [x2, y2, z2], ..., [xn, yn, zn]]. The POINTS structure defines a set of 2-D or 3-D points, represented by A. For points in 2-D, A can be an n by 2 Matrix or Array or a sequence of the form [x1, y1], [x2, y2], ..., [xn, yn]. For points in 3-D, A can be an n by 3 Matrix or Array or a sequence of the form [x1, y1, z1], [x2, y2, z2], ..., [xn, yn, zn]. The POLYGONS structure defines one or more polygons in 2-D or 3-D space. The format of each A1, A2, ... is identical to that described for the CURVES structure. • TEXT(A, string, horizontal, vertical) The TEXT structure defines a text element in 2-D or 3-D space. In 2-D, the point A is a list [x, y], while in 3-D, A is a list [x, y, z]. 
The horizontal value can be one of the two keywords ALIGNLEFT or ALIGNRIGHT, in which case the string is placed to the left or right of the location defined by A. Similarly, vertical can be one of the keywords ALIGNABOVE or ALIGNBELOW. If horizontal and/or vertical are omitted, the string is centered in the appropriate dimension. The GRID structure represents surfaces in 3-D space defined by a uniform sampling over a rectangular (aligned) region of the plane. The GRID structure takes the form GRID(a..b, c..d, A) where a..b is the x-range, c..d is the y-range, and A is a two-dimensional Array. If you have an m-by-n grid, then element A[i, j] is the function value at grid point (i, j), for i in 1..m and j in 1..n. The Array A may be replaced by a list of the form [[z11,...z1n], [z21,...z2n],...[zm1...zmn]] where zij is the function value at grid point (i, j). The ISOSURFACE structure contains the samples of a function taken over a regular grid in 3-D space and is rendered as a 3-D surface approximating the zero surface of the function. The ISOSURFACE structure takes the form ISOSURFACE(A) where A is a four-dimensional Array. If you have an m-by-n-by-p grid, then A[i, j, k, 1..3] gives the (x, y, z) coordinates of grid point (i, j, k) and A[i, j, k, 4] is the function value at that point, for i in 1..m, j in 1..n and k in 1..p. The Array A can be replaced by a list of m lists. Each sublist in turn contains n lists with p elements, each of which is a list [xijk, yijk, zijk, fijk], representing the (x, y, z) coordinates and the function value of grid point (i, j, k). The MESH structure represents surfaces in 3-D space defined by a grid of values. It takes the form MESH(A) where A is a three-dimensional Array. If you have an m-by-n grid, then elements A[i, j, 1], A[i, j, 2] and A[i, j, 3] are the x-, y- and z-coordinates of grid point (i, j), for i in 1..m, j in 1..n. The Array A can be replaced by a list of the form [[[x11, y11, z11],...[x1n, y1n, z1n]], [[x21, y21, z21],...[x2n, y2n, z2n]],...[[xm1, ym1, zm1]...[xmn, ymn, zmn]] where [xij, yij, zij] is the location of grid point (i, j). 2-D Plot Options There are many options that control the rendering of 2-D plots. These include: The AXESLABELS structure contains two strings that are used to label the x- and y-axes. It can also contain a FONT object defining the font used to render the labels. Specifies the number, location, and labeling of the tickmarks on an axes. It contains two values (one for x and the other for y). Each value can be an integer, a list of numbers, a list of equations, or the special value DEFAULT. If the value is an integer, then the driver chooses tick locations such that there are at least as many labels as specified. If the value is a list of numbers, then ticks and labels are specified at exactly those values. If a list of equations is given, then the left-hand side of each equation must be a number and the right-hand side a string. Ticks are placed at each specified number and labeled with the corresponding string. Axesticks can also contain a FONT object defining the font used to render the tick labels. Controls the selection of the lines drawn for the axes on the plot. It can take the five values: BOX, FRAME, NORMAL, NONE, or DEFAULT. BOX axes consist of a rectangular box surrounding the plot with labels and tickmarks on the left and lower lines. FRAME axes style only draws the left and lower axes of the box style with their associated tickmarks and labels. 
NORMAL style draws two axes lines and attempts to have them intersect at the zero position on the axes. If 0 is not in the axes range, the axes intersect at the lower bound of the range. The NONE style results in no lines or labels and the DEFAULT style chooses a device-specific axes style. Specifies information about a single axis, with direction given by the integer n (1 for the x-axis and 2 for the y-axis). The _AXIS[n] structure can contain one of the following substructures: Specifies a caption to be placed at the top of the plot, where c is any expression, string, or _TYPESET structure. The CAPTION object can also contain a FONT object defining the font used to render the caption. Specifies the color of the axis. See the description of the general COLOR structure below. _GRIDLINES(t) or _GRIDLINES(t, s) Specifies information about gridlines. This structure can have the form _GRIDLINES(t) or _GRIDLINES(t, s), where t is one of the values allowed for the AXESTICKS structure, described above, and s is a sequence of one or more of the following substructures: The COLOR, LINESTYLE, and THICKNESS substructures take the same form as the general plot structures of the same name, except that LINESTYLE takes the additional argument _TICKS indicating that tickmarks rather than gridlines are to be displayed. The _MAJORLINES(n) with structure displays the n-th gridline as a major line where n is a positive integer. The _SUBTICKS(t) structure displays subticks only if t is set to true. Specifies the location of the axis, where n is -1, 0, or 1, representing "low", "origin", and "high", respectively. When n is -1 or 1, the axis is placed at the lowest or highest value of the view range. When n is 0, the axis is placed at the origin or at the closest value if the origin is not in the view range. Specifies the scaling of the axis, where n is 0 or 1, representing linear and logarithmic, respectively. The COLOR structure can be specified in three different ways: RGB, HSV, or HUE. The RGB color specification requires three floating-point values for each color. The three values must each be between 0 and 1 and specify the amount of red, green, and blue light in the final color. For example, COLOR(RGB, 1.0, 0.0, 0.0) is red, while COLOR(RGB, 1.0, 1.0, 0.0) is yellow. The HSV color specification also requires three numbers for each color. The fractional portion of the first value indicates the color and the remaining two values give the purity (saturation) and the brightness of the color. The latter two values must be between zero and one. The HUE color specification only requires a single floating-point value and cycles through the colors of the spectrum based on the fractional part of the value. For example, COLOR(HUE, 0.9) is violet, while COLOR(HUE, 0.0) is red. COLOR(HUE, x) is equivalent to COLOR(HSV, x, 0.9, 1.0). Multiple colors may be specified in a single COLOR structure. See the note at the end of this section on local options. Specifies the font used in rendering TEXT objects. A font is specified by family, typeface, and size. Valid family and typeface combinations are (note that typeface default can be omitted): Family Typeface TIMES ROMAN COURIER DEFAULT Symbol font is used to produce Greek symbols. For a listing of which keys to use to produce the Greek symbols, see SYMBOL Font Keyboard Mapping. Displays a legend that identifies the curves in a 2-D plot. The legend l can be any expression, string, or _TYPESET structure. 
Specifies the style of the legend, where ls is a sequence of FONT or LOCATION structures. Specifies the dash pattern to be used when drawing line segments. It is often used to distinguish between different curves in a single image (see local options below). The line style value must be an integer from 1 to 7, corresponding to the following patterns: solid, dash, dot, dash-dot, long dash, space-dot, and space-dash. Takes one of the two values CONSTRAINED or UNCONSTRAINED. If scaling is constrained, then any transformations applied to the image must scale the x and y dimensions equally. The value DEFAULT can be used to select the device default scaling, usually unconstrained. The actual rendering of the non-TEXT objects in a plot is controlled by the STYLE option setting. In 2-D, there are 4 possible styles: POINT, LINE, PATCH, and PATCHNOGRID. The PATCH style which is the default, rendered points as symbols, curves as line segments, and polygons as filled regions with a border. The PATCHNOGRID style omits the border on polygons. The LINE style omits the filled interior of the polygons. The POINT style draws the endpoints of the curve line segments and the vertices of the polygons as symbols. Styles are specified by adding the function call STYLE(style), where style is one of the keywords above, to the PLOT structure. Specifies the symbol to be used when drawing points. Currently supported values are _ASTERISK, BOX, CROSS, CIRCLE, POINT, _DIAGONALCROSS, DIAMOND, and DEFAULT. The values _SOLIDBOX, _SOLIDCIRCLE, and _SOLIDDIAMOND are also available for 2-D plots only. A second argument to SYMBOL specifies the size (in points) of the symbol to be used for plotting. This is a non-negative integer. Controls the drawing of any line segments in the image resulting from the graphics primitives (not axes lines). The thickness setting is a non-negative integer, with 0 representing the thinnest line, and increasing values specifying increasingly thick lines. The default value is 1. Specifies a title to be placed at the top of the plot, where t is any expression, string, or _TYPESET structure. The TITLE object can also contain a FONT object defining the font used to render the title. Controls the transparency of a plot object. The transparency is specified as TRANSPARENCY(n), where n is a floating-point number in the range 0.0 to 1.0 or the name DEFAULT. A value of 0.0 means "not transparent", while a value of 1.0 means "fully transparent". Specifies text to be typeset and concatenated, where t is a sequence of arbitrary expressions or strings. • VIEW(xmin..xmax, ymin..ymax) Contains two ranges that specify the subrange of the x-y plane that is to be displayed. Either range can be replaced by DEFAULT, in which case the range is chosen to contain all elements in the Local Options - Where it is applicable, each of the above options can also be placed inside a POINTS, CURVES, TEXT, or POLYGONS object and overrides the global option for the rendering of that object. The COLOR option allows an additional format in this situation. In the case of an object having n subobjects (for example, multiple points, lines, or polygons), one color value can be supplied for each object. The structure then has the form COLOR(t, A) where A is a float[8] Array and t is one of RGB, HSV and HUE. If t is HUE, then A must have dimension 1..n. Otherwise, A must have dimensions 1..n, 1..3, with A[i,1], A[i,2], A[i,3] representing the RGB or HSV values for the i-th color. 
For example, PLOT(POINTS(Array(1..2, 1..2, [[0, 0], [1, 1]], 'datatype'='float[8]', 'order'='C_order'), COLOR(RGB, Array(1..2, 1..3, [[1, 0, 0], [0, 1, 0]], 'datatype'='float[8]', 'order'='C_order'))), SYMBOL(_SOLIDCIRCLE, 30)) draws two points, one in red and the other in green. 3-D Plot Options Each of the 2-D objects and options described in the section above are available in 3-D plotting with the exception of the legend and gridline options. The extension of the 2-D options to 3-D is obvious in most cases. For example, points are specified with 3 values instead of 2. Several additional option structures are available and are described below. Specifies the ambient light on the scene. It contains three numeric values between 0 and 1 specifying the intensity of the red, green, and blue components of the ambient light. The _AXIS[n] structure, with n equal to 3, works the same way as described for n equal to 1 and 2, and is used to specify information for the z-axis. However, the _GRIDLINES substructure is ignored in 3-D plots. COLOR can take several extra keyword options for 3-D plotting. XYZSHADING, XYSHADING, and ZSHADING specify that an objects color is to be based on its coordinates. For XYZSHADING, color is based on x-, y-, and z-coordinates and varies over the surface. For XYSHADING, color is based on x- and y-coordinates and varies over the surface. For ZSHADING, the color is based on the z-coordinate and varies over the surface. ZHUE and ZGREYSCALE are modified forms of ZSHADING. For ZHUE, the color for a point is linearly related to the HUE value (range 0 - 1) based on the relative z-coordinate. For ZGREYSCALE, the relative value of the z-coordinate affects the red, green, and blue appearing at a point in the plot equally which renders the entire plot in shades of gray. For specification of multiple colors, the COLOR(t, A) structure, where t is RGB, HSV or HUE, is similar to that for the 2-D case, except that m by n GRID and MESH structures are also supported. In this situation, A must be an Array with datatype=float[8] and dimensions 1..m, 1..n when t is HUE, or 1..m, 1..n, 1..3 otherwise. Controls the glossiness of a plotted surface. The glossiness is specified as _GLOSSINESS(g), where g is a floating-point number in the range 0.0 to 1.0 or the name DEFAULT. A value of 0.0 results in no light reflected while a value of 1.0 results in maximum reflection. The default value is 0.0. Reflections are rendered only if a point light source is enabled with the LIGHT or LIGHTMODEL During the rendering of GRID and MESH objects, they are broken down into rectangles and then further split into triangles. When overlaying the grid on the surface, either the rectangular or the triangular breakdown can be shown. GRIDSTYLE(TRIANGULAR) shows the triangular grid, while GRIDSTYLE(RECTANGULAR) omits the diagonal line to form a rectangular grid. • LIGHT(phi, theta, r, g, b) Specifies the direction and intensity of a directed light shining on the scene. The first two numeric values specify the direction to the light in polar coordinates using angles specified in degrees. The next three numeric values specify the intensity in the same manner as AMBIENTLIGHT. Allows the user to select from several user-defined lighting schemes. The allowed schemes are USER, LIGHT_1, LIGHT_2, LIGHT_3 and LIGHT_4. USER specifies that the light definitions given in the LIGHT and AMBIENTLIGHT options are to be used. 
• ORIENTATION(theta, phi, psi) Specifies the angles in degrees defining the orientation of the plot, obtained by rotating the plot psi about the x-axis, then phi about the (transformed) z-axis, and then theta about the (transformed) y-axis. These angles, given in degrees, are the Euler angles for the transformation matrix. The angle psi is optional and is assumed to be 0 if not given. Specifies the perspective from which the surface is viewed, where r is a real number in the range 0..1. The value 1 represents orthogonal projection while the value 0 represents wide-angle perspective rendering. Style can take several extra keyword options for 3-D plotting. HIDDEN specifies that the interior of polygons are not to be drawn but that they are to obscure objects behind them. CONTOUR and PATCHCONTOUR specify that contour lines for polygons and surfaces are to be shown (with or without patches). The ANIMATE structure contains lists of plot objects and options. Each list defines one frame in the animation and the frames are displayed in sequence on the output device. The options specified in each list override any options specified for the entire plot while the frame is being rendered. By specifying options within the lists for each frame, it is possible to move the lights and the viewpoint during an animation. See Also lprint, plot, plot/options, plot3d, plot3d/options, plots
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=plot%2Fstructure","timestamp":"2014-04-19T02:02:08Z","content_type":null,"content_length":"432219","record_id":"<urn:uuid:0b54d15c-f9b2-4fcf-93d1-df7ec57555fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Sets of integers containing not more than a given numbers of terms in arithmetical progression Results 1 - 10 of 16 - Ann. of Math "... Abstract. We prove that there are arbitrarily long arithmetic progressions of primes. ..." - J. N. SRIVASTAVA ET AL., EDS., A SURVEY OF COMBINATORIAL THEORY OC NORTH-HOLLAND PUBLISHING COMPANY, 1973 , 1973 "... I will discuss in this paper number theoretic problems which are of combinatorial nature. I certainly do not claim to cover the field completely and the paper will be biased heavily towards problems considered by me and my collaborators. Combinatorial methods have often been used successfully in num ..." Cited by 16 (1 self) Add to MetaCart I will discuss in this paper number theoretic problems which are of combinatorial nature. I certainly do not claim to cover the field completely and the paper will be biased heavily towards problems considered by me and my collaborators. Combinatorial methods have often been used successfully in number theory (e.g. sieve methods), but here we will try to restrict ourselves to problems which themselves have a combinatorial flavor. I have written several papers in recent years on such problems and in order to avoid making this paper too long, wherever possible, will discuss either problems not mentioned in the earlier papers or problems where some progress has been made since these papers were written. Before starting the discussion of our problems I give a few of the principal papers where similar problems were discussed and where further literature can be found. - IEEE Transactions on Information Theory , 2006 "... Abstract — One approach to designing structured low-density parity-check (LDPC) codes with large girth is to shorten codes with small girth in such a manner that the deleted columns of the parity-check matrix contain all the variables involved in short cycles. This approach is especially effective i ..." Cited by 14 (1 self) Add to MetaCart Abstract — One approach to designing structured low-density parity-check (LDPC) codes with large girth is to shorten codes with small girth in such a manner that the deleted columns of the parity-check matrix contain all the variables involved in short cycles. This approach is especially effective if the parity-check matrix of a code is a matrix composed of blocks of circulant permutation matrices, as is the case for the class of codes known as array codes. We show how to shorten array codes by deleting certain columns of their parity-check matrices so as to increase their girth. The shortening approach is based on the observation that for array codes, and in fact for a slightly more general class of LDPC codes, the cycles in the corresponding Tanner graph are governed by certain homogeneous linear equations with integer coefficients. Consequently, we can selectively eliminate cycles from an array code by only retaining those columns from the parity-check matrix of the original code that are indexed by integer sequences that do not contain solutions to the equations governing those cycles. We provide Ramsey-theoretic estimates for the maximum number of columns that can be retained from the original parity-check matrix with the property that the sequence of their indices avoid solutions to various types of cycle-governing equations. This translates to estimates of the rate penalty incurred in shortening a code to eliminate cycles. 
Simulation results show that for the codes considered, shortening them to increase the girth can lead to significant gains in signal-to-noise ratio in the case of communication over an additive white Gaussian noise channel. Index Terms — Array codes, LDPC codes, shortening, cycle-governing equations - Proc. of the 16 th Annual ACM-SIAM SODA, ACM Press , 2005 "... For a fixed k-uniform hypergraph D (k-graph for short, k ≥ 3), we say that a k-graph H satisfies property PD (resp. P∗D) if it contains no copy (resp. induced copy) of D. Our goal in this paper is to classify the k-graphs D for which there are property-testers for testing PD and P∗D whose query ..." Cited by 8 (2 self) Add to MetaCart For a fixed k-uniform hypergraph D (k-graph for short, k ≥ 3), we say that a k-graph H satisfies property PD (resp. P∗D) if it contains no copy (resp. induced copy) of D. Our goal in this paper is to classify the k-graphs D for which there are property-testers for testing PD and P∗D whose query complexity is polynomial in 1/ɛ. For such k-graphs we say that PD (resp. P∗D) is easily testable. For P∗D, we prove that aside from a single 3-graph, P∗D is easily testable if and only if D is a single k-edge. We further show that for large k, one can use more sophisticated techniques in order to obtain better lower bounds for any large enough k-graph. These results extend and improve previous results about graphs [5] and k-graphs [18]. For PD, we show that for any k-partite k-graph D, PD is easily testable, by giving an efficient one-sided error-property tester, which improves the one obtained by [18]. We further prove a nearly matching lower bound on the query complexity of such a property-tester. Finally, we give a sufficient condition for inferring that PD is not easily testable. Though our results do not supply a complete characterization of the k-graphs for which PD is easily testable, they are a natural - In Proc. of 31st MFCS , 2006 "... Abstract Let x1,..., xk be n-bit numbers and T ∈ N. Assume that P1,..., Pk are players such that Pi knows all of the numbers except xi. The players want to determine if ∑_{j=1}^k xj = T by broadcasting as few bits as possible. Chandra, Furst, and Lipton obtained an upper bound of O(√n) bits for the k = 3 ..." Cited by 7 (3 self) Add to MetaCart Abstract Let x1,..., xk be n-bit numbers and T ∈ N. Assume that P1,..., Pk are players such that Pi knows all of the numbers except xi. The players want to determine if ∑_{j=1}^k xj = T by broadcasting as few bits as possible. Chandra, Furst, and Lipton obtained an upper bound of O(√n) bits for the k = 3 case, and a lower bound of ω(1) for k ≥ 3 when T = Θ(2^n). We obtain (1) for general k ≥ 3 an upper bound of k + O(n^{1/(k-1)}), (2) for k = 3, T = Θ(2^n), a lower bound of Ω(log log n), (3) a generalization of the protocol to abelian groups, (4) lower bounds on the multiparty communication complexity of some regular languages, (5) lower bounds on branching programs, and (6) empirical results for the k = 3 case. 1 Introduction Multiparty communication complexity was first defined by Chandra, Furst, and Lipton [8] and used to obtain lower bounds on branching programs. Since then it has been used to get additional lower bounds and tradeoffs for branching programs [1, 5], lower bounds on problems in data structures [5], time-space tradeoffs for restricted Turing machines [1], and unconditional pseudorandom generators for logspace [1]. Def 1.1 Let f: ({0, 1}^n)^k → {0, 1}. 
Assume, for 1 ≤ i ≤ k, Pi has all of the inputs except xi. Let d(f) be the total number of bits broadcast in the optimal deterministic protocol for f. This is called the multiparty communication complexity of f. The scenario is called the forehead model. "... A linear equation on k unknowns is called a (k, h)-equation if it is of the form ∑_{i=1}^k a_i x_i = 0, with a_i ∈ {−h,..., h} and ∑ a_i = 0. For a (k, h)-equation E, let r_E(n) denote the size of the largest subset of the first n integers with no solution of E (besides certain trivial solutions). Several s ..." Cited by 4 (2 self) Add to MetaCart A linear equation on k unknowns is called a (k, h)-equation if it is of the form ∑_{i=1}^k a_i x_i = 0, with a_i ∈ {−h,..., h} and ∑ a_i = 0. For a (k, h)-equation E, let r_E(n) denote the size of the largest subset of the first n integers with no solution of E (besides certain trivial solutions). Several special cases of this general problem, such as Sidon’s equation and sets without three-term arithmetic progressions, are some of the most well studied problems in additive number theory. Ruzsa was the first to address the general problem of the influence of certain properties of equations on r_E(n). His results suggest, but do not imply, that for every fixed k, all but an O(1/h) fraction of the (k, h)-equations E are such that r_E(n) > n^{1−o(1)}. In this paper we address the generalized problem of estimating the size of the largest subset of the first n integers with no solution of a set S of (k, h)-equations (again, besides certain trivial solutions). We denote this quantity by r_S(n). Our main result is that all but an O(1/h) fraction of the sets of (k, h)-equations S of size k − ⌊√(2k)⌋ + 1, are such that r_S(n) > n^{1−o(1)}. We also give several additional results relating properties of sets of equations and r_S(n). , 2010 "... There has been much work on the following question: given n, how large can a subset of {1,..., n} be that has no arithmetic progressions of length 3. We call such sets 3-free. Most of the work has been asymptotic. In this paper we sketch applications of large 3-free sets, review the literature of how to construct large 3-free sets, and present empirical studies on how large such sets actually are. The two main questions considered are (1) How large can a 3-free set be when n is small, and (2) How do the methods in the literature compare to each other? In particular, - Mathematics Proceedings "... Abstract. This is an article for a general mathematical audience on the author’s work, joint with Terence Tao, establishing that there are arbitrarily long arithmetic progressions of primes. 1. introduction and history This is a description of recent work of the author and Terence Tao [11] on primes in arithmetic progression. It is based on seminars given for a general mathematical , 705 "... 
Let G be a finite Abelian group and A ⊆ G × G be a set of cardinality at least |G|^2/(log log |G|)^c, where c > 0 is an absolute constant. We prove that A contains a triple {(k, m), (k + d, m), (k, m + d)}, where d ≠ 0. This theorem is a two-dimensional generalization of Szemerédi’s theorem on ari ..." Cited by 2 (0 self) Add to MetaCart Let G be a finite Abelian group and A ⊆ G × G be a set of cardinality at least |G|^2/(log log |G|)^c, where c > 0 is an absolute constant. We prove that A contains a triple {(k, m), (k + d, m), (k, m + d)}, where d ≠ 0. This theorem is a two-dimensional generalization of Szemerédi’s theorem on arithmetic progressions. 1. Introduction. Szemerédi’s theorem [29] on arithmetic progressions states that an arbitrary set A ⊆ Z of positive density contains arithmetic progression of any length. This remarkable theorem has played a significant role in the development of two fields in mathematics: additive combinatorics (see e.g. [31]) and combinatorial ergodic
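As a small illustration related to the 3-free-set abstract quoted above ("how large can a subset of {1,..., n} be that has no arithmetic progressions of length 3"), here is a Python sketch of the simplest construction, the greedy one; it is not taken from any of the cited papers, which compare far better methods.

# Greedily build a subset of {1, ..., n} with no 3-term arithmetic progression.
def greedy_3_free(n):
    chosen = []
    in_set = set()
    for x in range(1, n + 1):
        # x would complete a progression a, b, x with b - a = x - b,
        # i.e. a = 2*b - x for some earlier chosen b.
        if any(2 * b - x in in_set for b in chosen):
            continue
        chosen.append(x)
        in_set.add(x)
    return chosen

print(greedy_3_free(30))
# [1, 2, 4, 5, 10, 11, 13, 14, 28, 29] -- essentially the Erdos-Turan set of
# numbers whose base-3 digits are 0 or 1, of size roughly n^(log_3 2).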
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=727107","timestamp":"2014-04-21T01:34:02Z","content_type":null,"content_length":"37942","record_id":"<urn:uuid:cdd6c890-536a-4373-a60f-a1ef0b893bee>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Topic: A closed surface from 3D data point
Alan Re: A closed surface from 3D data point Posted: Jan 14, 2011 12:42 PM
"matt dash" wrote in message <igq0k5$im3$1@fred.mathworks.com>...
> "Tran Toan" <trantoan2008@nate.com> wrote in message <igpl0s$me0$1@fred.mathworks.com>...
> > Hi everyone,
> > Can anyone help me to solve my problem? I have 3D data point as [x,y,z] matrix which show a geometry in 3D such as cube, sphere, any complex geometry...I want to show this geometry as closed surface, but unfortunately, 3D visualization in Matlab such as mesh, surfc... can not do it.
> > So, anybody help me to do above problem. Thank you in advance.
> You might find something here that does what you need:
> http://www.advancedmcode.org/
The simplest real solution is the convex hull, convhulln() in Matlab. This probably won't give you any "complex geometry", depending on what you mean by that.
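For readers working outside MATLAB, an analogous sketch in Python with SciPy (my choice of library, not from the thread) shows the same convex-hull idea, and the same limitation Alan notes for non-convex shapes.

# Analogue of the convhulln() suggestion above: build a closed triangulated
# surface around a 3-D point cloud. Only the convex part of any "complex
# geometry" is recovered.
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(200, 3)   # stand-in for the [x, y, z] data
hull = ConvexHull(points)
print(hull.simplices.shape)       # (n_triangles, 3): vertex indices of each facet
print(round(hull.volume, 3))      # volume enclosed by the closed surface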
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2228025&messageID=7358556","timestamp":"2014-04-16T11:25:52Z","content_type":null,"content_length":"21578","record_id":"<urn:uuid:ffda3b88-50e6-45e2-9d59-4ce407645a34>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
A Single Conservative Force Acts On A 5.00 Kg Particle. ... | Chegg.com A single conservative force acts on a 5.00 kg particle. The equation F_x = (2x + 4) N describes this force, where x is in meters. As the particle moves along the x axis from x = 2.60 m to x = 5.00 m, calculate the following. (a) the work done by this force on the particle (in J) (b) the change in the potential energy of the system (in J) (c) the kinetic energy the particle has at x = 5.00 m if its speed is 3.00 m/s at x = 2.60 m (in J)
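A worked sketch of the method in Python. The force law F_x = (2x + 4) N is a reconstruction of the garbled question text above, so treat that expression as an assumption.

# Work-energy sketch for the problem above, assuming F_x = (2x + 4) N.
m, v_i = 5.00, 3.00            # mass in kg, speed in m/s at x = 2.60 m
x1, x2 = 2.60, 5.00            # metres

def F_antiderivative(x):       # integral of (2x + 4) dx = x^2 + 4x
    return x**2 + 4*x

W = F_antiderivative(x2) - F_antiderivative(x1)   # (a) work done, about 27.8 J
dU = -W                                           # (b) change in potential energy
K_f = 0.5 * m * v_i**2 + W                        # (c) kinetic energy at x = 5.00 m, about 50.3 J
print(round(W, 2), round(dU, 2), round(K_f, 2))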
{"url":"http://www.chegg.com/homework-help/questions-and-answers/single-conservative-force-acts-500-kg-particle-theequation-fx-2x-4-n-describesthis-force-x-q652616","timestamp":"2014-04-20T14:53:10Z","content_type":null,"content_length":"22198","record_id":"<urn:uuid:5d9d6b8a-d287-4ef1-ad8a-e623b193ad5a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Centre de Recherches Mathematiques CRM Proceedings and Lecture Notes Volume 11, 1997 The Problem of Classifying Automorphic Representations of Classical Groups James Arthur In this article we shall give an elementary introduction to an important problem in representation theory. The problem is to relate the automorphic representations of classical groups to those of the general linear group. Thanks to the work of a number of people over the past twenty-five years, the automorphic representation theory of GL(n) is in pretty good shape. The theory for GL(n) now includes a good understanding of the analytic properties of Rankin-Selberg L-functions, the classification of the discrete spectrum, and cyclic base change. One would like to establish similar things for classical groups. The goal would be an explicit comparison between the automorphic spectra of classical groups and GL(n) through the appropriate trace formulas. There are still obstacles to be overcome. However with the progress of recent years, there is also reason to be optimistic. We shall not discuss the techniques here. Nor will we consider the possible applications. Our modest aim is to introduce the problem itself, in a form that might be accessible to a nonspecialist. In the process we shall review some of
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/209/1768342.html","timestamp":"2014-04-19T04:23:28Z","content_type":null,"content_length":"8356","record_id":"<urn:uuid:de005bbb-453d-4076-9143-5fd472f7be5d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
complexity theorie, deterministic turing-machines, time complexity December 19th 2011, 05:01 AM complexity theorie, deterministic turing-machines, time complexity Be A = { $A_n$ | n $\in$ N} a countable set of Stockitems. and $(Frml)_A$ = { φ | φ is a formula with stockitems of A. } Find a finite alphabet Σ, a embedding i : $Frml_A$--> Σ* and a deterministic $n^k$ - time- turing machine M (for adequate k $\in$ N, so A(M) = {i(φ|φ $\in Frml_A$ }. A rough sketch of the turingprogramm is enough. I have to give an estimation up for the runtime of my program. Can someone help me, I dont even know how to start this exercise (Speechless) Every help would be appreciated :) December 19th 2011, 05:09 AM Re: complexity theorie, deterministic turing-machines, time complexity I am not sure what the following things mean: stockitems, A(M) and i(φ). It is better to give too much explanation than not enough, keeping in mind that notations and definitions in complexity theory and logic in general vary widely between courses and textbooks. This question belongs in the Logic section of the forum. December 19th 2011, 05:27 AM Re: complexity theorie, deterministic turing-machines, time complexity i'm sorryy. can i displace it to the logic section? A(M) means M accepts A , if A(M) .Its acceptionset.. M is my Machine. i is the embedding i: $Frml_A$ --> Σ* I think its the same if I take f: $Frml_A$ --> Σ* and A(M) = {f(φ) | φ $\in Frml_A$ ok Stockitem it's "Satzsymbol" in German, I think its the right translation but not for maths (Rofl) I dont know the english word. Its like a variable e.g. $A_0 \wedge A_1$ . $A_i$ is a December 19th 2011, 06:44 AM Re: complexity theorie, deterministic turing-machines, time complexity You can send a private message to a moderator (their list is at the bottom of each forum section). It probably means a "propositional variable" in English. Since the set of propositional variables is countable, they can be encoded with binary numbers. So, to encode $A_5$ you can have, for example, the string A_101_ where '_' is a delimiter. Logic connectives— $\land$, $\lor$, $\to$ and $eg$—can be included in the alphabet Σ. It is somewhat easier to check if a string is (an encoding of) a well-formed formula if the formula is written in prefix (Polish) or postfix (reverse Polish), rather than the usual infix, notation because prefix and postfix notations don't need parentheses. The algorithm for evaluating (or checking) an expression in prefix notation is roughly described in Wikipedia. If one has a stack, then it is possible to scan an expression only once to check whether it is well-formed. From here, one can argue that one needs polynomial time.
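To make the single-scan idea in the last reply concrete, here is a Python sketch of a linear-time well-formedness check for prefix (Polish) formulas. The ASCII connective symbols and the A_<binary>_ variable encoding follow the reply's suggestion but are my own illustrative assumptions, not part of the original exercise.

# Sketch of a single-scan check for well-formed prefix formulas.
# Alphabet (assumed): '&' and 'v' and '>' binary, '-' unary negation,
# variable A_n encoded as 'A' + '_' + binary digits + '_', e.g. A_5 -> "A_101_".

def tokenize(s):
    tokens, i = [], 0
    while i < len(s):
        c = s[i]
        if c in "&v>-":
            tokens.append(c)
            i += 1
        elif c == "A" and i + 1 < len(s) and s[i + 1] == "_":
            j = i + 2
            while j < len(s) and s[j] in "01":
                j += 1
            if j == i + 2 or j >= len(s) or s[j] != "_":
                return None      # malformed variable encoding
            tokens.append(s[i:j + 1])
            i = j + 1
        else:
            return None          # illegal character
    return tokens

ARITY = {"&": 2, "v": 2, ">": 2, "-": 1}

def well_formed(s):
    """One left-to-right pass, time linear in the length of s."""
    tokens = tokenize(s)
    if not tokens:
        return False
    need = 1                     # how many subformulas are still expected
    for k, t in enumerate(tokens):
        need = need - 1 + ARITY.get(t, 0)
        if need == 0 and k != len(tokens) - 1:
            return False         # formula complete but input continues
    return need == 0

print(well_formed("&A_0_A_1_"))  # True:  A_0 & A_1 in prefix notation
print(well_formed("&A_0_"))      # False: missing second argument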
{"url":"http://mathhelpforum.com/discrete-math/194477-complexity-theorie-deterministic-turing-machines-time-complexity-print.html","timestamp":"2014-04-19T00:17:47Z","content_type":null,"content_length":"11242","record_id":"<urn:uuid:ad39e247-7fc6-45d1-a124-35cd01dc7085>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Confidence intervals #2 February 18th 2010, 11:14 AM #1 Feb 2010 A question I need help with... The average hemoglobin reading for a sample of 20 teachers was 16 grams per 100 milliliters, with a sample standard deviation of 2 grams. Find the 99% confidence interval of the true mean. Last edited by mr fantastic; February 19th 2010 at 01:21 AM. Reason: Changed post title
February 19th 2010, 01:22 AM #2 Have you reviewed any examples from your class notes or textbook? Where are you stuck?
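A sketch of the usual t-based calculation for this problem (Python with SciPy), assuming an approximately normal population; some textbooks use the z value 2.576 instead, which gives a slightly narrower interval.

# 99% t-interval for the thread's numbers: n = 20, mean 16 g/100 mL, s = 2 g.
from math import sqrt
from scipy import stats

n, xbar, s, conf = 20, 16.0, 2.0, 0.99
t_crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)   # about 2.861 for df = 19
margin = t_crit * s / sqrt(n)
print(f"{xbar - margin:.2f} < mu < {xbar + margin:.2f}")   # roughly 14.72 to 17.28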
{"url":"http://mathhelpforum.com/statistics/129487-confidence-intervals-2-a.html","timestamp":"2014-04-17T10:03:10Z","content_type":null,"content_length":"33807","record_id":"<urn:uuid:39cc7c61-9d5b-4370-9620-4388820863c0>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Triangle Math Tutor Find a Triangle Math Tutor ...I have a BA in Literature from UCSB. This gives me a very strong background in reading and writing, as well as other English-related subjects. I spent my first 2 years in college as a biology major before deciding to switch to Literature, so I have a strong biology and general science background as well. 17 Subjects: including algebra 1, algebra 2, prealgebra, reading I have been working as a middle school math teacher for the past four years. I have taught math to sixth and seventh grade students. I am certified to teach all subjects in elementary school and mathematics at the middle school level. 4 Subjects: including algebra 1, prealgebra, SAT math, elementary math ...I have been tutoring these subjects since 2003. I know where to go to receive the previous released test from the Virginia Department of Education. Each student I worked with in these subjects got the above average score on the test in the subject I tutored them in. 13 Subjects: including calculus, spelling, algebra 1, algebra 2 ...I minored in economics and went on to study it further in graduate school. My graduate work was completed at the University of Maryland College Park, where I specialized in international development and quantitative analysis. I currently work as a professional economist. 16 Subjects: including discrete math, differential equations, probability, ACT Math I completed my undergraduate coursework at the University of Virginia, and my graduate coursework in education at the University of Mary Washington. I have an extensive background in all areas of biology with a focus on genetics and microbiology. I have experience in both laboratory settings and field research. 11 Subjects: including algebra 1, prealgebra, biology, anatomy
{"url":"http://www.purplemath.com/triangle_math_tutors.php","timestamp":"2014-04-19T20:07:10Z","content_type":null,"content_length":"23591","record_id":"<urn:uuid:d1c16bd9-f7a5-4568-b6c0-3696851c4bf9>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that 2n^2 + n is odd if and only if cos(npi/2) is even October 2nd 2013, 03:56 PM Prove that 2n^2 + n is odd if and only if cos(npi/2) is even Let n be an element of Z. Prove that $2n^2 + 2$ is odd if and only if $cos\left(\frac{\pi n }{2}\right)$ is even. Since it's a biconditional, I'm certain there will be two parts: 1) Proving $p \Rightarrow q$ , and 2) proving $q \Rightarrow p$ It's in the proof by contrapositive section, so I'll try to prove it that way. 1) If $2n^2 + 2$ is odd, then $cos\left(\frac{\pi n }{2}\right)$ is even. Contrapositive: If $cos\left(\frac{\pi n }{2}\right)$ is odd, then $2n^2 + 2$ is even. Assuming $cos\left(\frac{\pi n }{2}\right)$ is odd, then $n=2k$ $2n^2 + 2=2(2k)^2+2$ $8k^2+2=2(4k^2+1)=2m$ for some integer m. Therefore, $2n^2 + 2$ is even and the implication is true. My professor says that my proof is wrong, but I don't see how. October 2nd 2013, 04:10 PM Re: Prove that 2n^2 + n is odd if and only if cos(npi/2) is even I would suggest you go back and reread the problem. $2n^2+ 2= 2(n^2+ 1)$ is never odd. Your title, however, says $2n^2+ n$, not $2n^2+ 2$. Perhaps if you tried proving that you will do better. October 2nd 2013, 04:25 PM Re: Prove that 2n^2 + n is odd if and only if cos(npi/2) is even October 2nd 2013, 04:28 PM Re: Prove that 2n^2 + n is odd if and only if cos(npi/2) is even Correction: Let n be an element of Z. Prove that $2n^2 + n$ is odd if and only if $cos\left(\frac{\pi n }{2}\right)$ is even. 1) If $2n^2 + n$ is odd, then $cos\left(\frac{\pi n }{2}\right)$ is even. Contrapositive: If $cos\left(\frac{\pi n }{2}\right)$ is odd, then $2n^2 + n$ is even. Assuming $cos\left(\frac{\pi n }{2}\right)$ is odd, then $n=2k$ $2n^2 + n=2(2k)^2+2k$ $8k^2+2k=2(4k^2+k)=2m$ for some integer m. Therefore, $2n^2 + n$ is even and the implication is true.
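A note on the step the professor may have wanted justified (this is an addition, not part of the original thread): for an integer $n$, $\cos\left(\frac{\pi n}{2}\right)$ takes only the values $-1, 0, 1$. It equals $0$, the only even value, exactly when $n$ is odd, and equals $\pm 1$, the odd values, exactly when $n$ is even. So the assumption "$\cos\left(\frac{\pi n}{2}\right)$ is odd $\Rightarrow n = 2k$" used in the contrapositive is legitimate, and since $2n^2 + n = n(2n+1)$ with $2n+1$ odd, $2n^2 + n$ has the same parity as $n$, which gives both directions of the biconditional.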
{"url":"http://mathhelpforum.com/discrete-math/222524-prove-2n-2-n-odd-if-only-if-cos-npi-2-even-print.html","timestamp":"2014-04-18T00:45:30Z","content_type":null,"content_length":"11779","record_id":"<urn:uuid:33c2a31d-9f19-46b8-806b-18ee05788ef6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
interesting question October 10th 2007, 04:06 AM #1 Sep 2007 interesting question Two towns are to get their water supply from a river. Both towns are on the same side of the river at distance of 6 km and 18 km respectively from the river bank. If the distance between the points on the river bank nearest to the towns respectively be 10 km, find (i)Where may a single pumping station be located to require the least amount of pipe? (ii) How much pipe is needed for the above in (i)? An interesting question given by my teacher in class. Any idea on solving? Last edited by CaptainBlack; October 10th 2007 at 02:13 PM. how can snell's law solve the problem? well, i think we'll have to draw out the sketch of the towns position and the river..and the pumping station should be at the intersection of the hypothenuses of the right angle triangles. The triangles have their right angles between the side parallel to the river bank and the perpendicular distance of the towns to the river. any more ideas? how can snell's law solve the problem? well, i think we'll have to draw out the sketch of the towns position and the river..and the pumping station should be at the intersection of the hypothenuses of the right angle triangles. The triangles have their right angles between the side parallel to the river bank and the perpendicular distance of the towns to the river. any more ideas? The law of reflection solves the problem because light rays minimise a functional equivalent to the pipe line length. A light ray follows a path such that the time of flight is stationary, as the speed of light is constant in a homogeneous medium this corresponds to a path who's length is stationary. In this case it happens to be a minimum. Thus the pipline will satisfy law of reflection as it is a consequence of the stationarity of time of flight along the ray. Last edited by CaptainBlack; October 10th 2007 at 02:12 PM. how can snell's law solve the problem? well, i think we'll have to draw out the sketch of the towns position and the river..and the pumping station should be at the intersection of the hypothenuses of the right angle triangles. The triangles have their right angles between the side parallel to the river bank and the perpendicular distance of the towns to the river. any more ideas? Zeez, both the Problem and your idea how to solve it test my understanding of English. Do I understand what you are saying if I describe the figure according to my understandings? So there is a vertical line, the river bank. Points A and B, 10 km apart, are on this vertical line. Perpendicular to line segment AB, from point B, is point C. Point C is 6 km from point B. Likewise, perpendicular to line segment AB, from point A, is point D. Point D is 18 km from point A. The pumping station, point P, is on AB. So the pipe lines are PC and PD. Yes, your idea is very good. Total length of pipelines, L = PC +PD Let P be x km away from A. So P is (10-x) km away from B. Then, by Pythagorean Theorem, L = sqrt[6^2 +(10-x)^2] +sqrt[18^2 +x^2] Differentiate both sides with respect to x, set dL/dx to zero, and you'd get the least L. So I solved for the minimum L using the idea that P should be on AB. I got L = 26 km minimum. Then, I computed for another idea. That the pumping station could be at point B. And so L = BC +CD L = 6 + sqrt[(18-6)^2 +10^2] = 21.62 km only. Therefore, I correct my first answer. Now, for the least pipelines, the pumping station should be on the river bank that is 6 km from the town nearer to the river. 
The pipeline then goes first to that nearer town, then the pipeline proceeds to the other town. ----------revised answer.
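A quick numerical check of the single-pumping-station reading of the problem (this sketch is my own addition, not part of the thread; it assumes the station sits on the river bank, with x measured in km from the bank point nearest the 18 km town, as set up in the last post):

import numpy as np

# Total pipe length if the station is x km from the bank point nearest the
# 18 km town, hence (10 - x) km from the point nearest the 6 km town.
x = np.linspace(0.0, 10.0, 100001)
L = np.sqrt(18**2 + x**2) + np.sqrt(6**2 + (10 - x)**2)

i = np.argmin(L)
print(x[i], L[i])            # about x = 7.5 km, L = 26 km

# Reflection shortcut: reflect one town across the bank and measure the
# straight-line distance between the towns -- same 26 km.
print(np.hypot(10, 18 + 6))  # 26.0

This agrees with the 26 km minimum computed in the thread and with the law-of-reflection argument given earlier; the 21.62 km figure in the revised answer comes from changing the problem so that the pipe runs through the nearer town first.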
{"url":"http://mathhelpforum.com/advanced-applied-math/20308-interesting-question.html","timestamp":"2014-04-21T13:39:41Z","content_type":null,"content_length":"57567","record_id":"<urn:uuid:74d7f233-ee71-4652-a433-5afb255c2dc7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Weston, MA Statistics Tutor
Find a Weston, MA Statistics Tutor
I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level.
14 Subjects: including statistics, geometry, algebra 1, algebra 2
...Statistics offers many new concepts which, depending how it's taught, can be overwhelming at times. I have experience taking topics in statistics which students find challenging or intimidating and placing them in an easier to understand context. I have taught math for an SAT prep company.
24 Subjects: including statistics, chemistry, calculus, physics
...It has been a while since I have looked at a calculus book, but I think that it would come back fairly easily with materials in hand. I can help with test prep in math subjects for SAT, ACT and AP. I am also available for some college math subjects.
17 Subjects: including statistics, calculus, geometry, algebra 1
...I have taught at the Community College level in Florida, and taught undergraduate courses at a 4 year University while in Graduate School. I have also taught several professional review courses in the actuarial and insurance fields. Please note - I only tutor on weekends, and only in Wellesley. Introductory to Intermediate Statistics theory and applications.
2 Subjects: including statistics, probability
...I excelled in my linear algebra courses as an undergraduate, and always enjoyed working with others. I also was a TA for this subject and met with students for office hours and lab sessions. I have tutored students in linear algebra in the past (outside of WyzAnt), and have impacted them well.
16 Subjects: including statistics, Spanish, calculus, geometry
{"url":"http://www.purplemath.com/weston_ma_statistics_tutors.php","timestamp":"2014-04-19T02:29:23Z","content_type":null,"content_length":"24058","record_id":"<urn:uuid:f502722b-143b-46a4-967e-0df07756579d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Detect subclass of ndarray Travis Oliphant oliphant@ee.byu.... Sat Mar 24 15:02:59 CDT 2007 Alan G Isaac wrote: > On Sat, 24 Mar 2007, Charles R Harris apparently wrote: >> Yes, that is what I am thinking. Given that there are only the two >> possibilities, row or column, choose the only one that is compatible with >> the multiplying matrix. The result will not always be a column vector, for >> instance, mat([[1]])*ones(3) will be a 1x3 row vector. > Ack! The simple rule `post multiply means its a column vector` > would be horrible enough: A*ones(n)*B becomes utterly obscure. > Now even that simple rule is to be violated?? > Down this path lies madness. > Please, just raise an exception. My opinion is that a 1-d array in matrix-multiplication should always be interpreted as a row vector. Is this not what is currently done? If not, then it is a bug in my mind. More information about the Numpy-discussion mailing list
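The thread above concerns the old numpy matrix class, and the behaviour discussed depends on the numpy version in question. Purely as a present-day point of reference (my addition, not part of the 2007 discussion), the documented semantics of matrix multiplication for plain ndarrays promote a 1-D operand on the left to a row vector and a 1-D operand on the right to a column vector:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([10.0, 20.0])

# 1-D array on the right of @ acts as a column vector; result is 1-D
print(A @ v)   # [ 50. 110.]

# 1-D array on the left of @ acts as a row vector; result is 1-D
print(v @ A)   # [ 70. 100.]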
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026703.html","timestamp":"2014-04-16T22:20:22Z","content_type":null,"content_length":"3597","record_id":"<urn:uuid:656d4dc3-954a-4af3-9dc8-996ddc6b7593>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Bernhard Riemann — developer of the Riemann zeta-function, the 'grand-daddy of all L-functions.'

A new mathematical object, an elusive cousin of the Riemann zeta-function, was revealed to great acclaim recently at the American Institute of Mathematics. Ce Bian and Andrew Booker from the University of Bristol showed the first example of a third degree transcendental L-function.

L-functions underpin much of twentieth century number theory. They feature in the proof of Fermat's last theorem, as well as playing a part in the recent classification of congruent numbers, a problem first posed one thousand years ago. The Riemann zeta-function, the "grand-daddy of all L-functions" according to the researchers, goes back to Leonhard Euler and Bernhard Riemann, and contains deep information regarding the distribution of prime numbers. Many mathematicians believe that other L-functions also contain invaluable insights into number theory. The problem is that few of them are explicitly known.

To understand L-functions, let's firstly consider the Riemann zeta-function. In the eighteenth century, the legendary mathematician Leonhard Euler considered the infinite series
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \cdots,$$
where $s$ is a real number. On the face of it, this series does not seem to have much to do with prime numbers. However, Euler showed that this series is equal to the infinite product
$$\prod_{p\ \mathrm{prime}}\frac{1}{1-p^{-s}},$$
which contains one factor for each of the primes 2, 3, 5, 7, etc.

The value of the series — or the lack of one — depends on the value of $s$. When $s$ is less than or equal to 1, it is possible to make the sum ever larger simply by adding more terms — that is, the series does not converge, it diverges. For example, for $s = -2$, the series is
$$1 + 4 + 9 + 16 + \cdots.$$
However, if $s$ is greater than 1, the series converges to a finite value. Taking $s = 2$, for example, gives
$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots,$$
which sums to $\pi^2/6$.

As a function of the variable $s$, Euler's series is only valid for values of $s$ that are greater than 1 — for all other values of $s$, the series adds to infinity. In his seminal 1859 paper on number theory, Riemann developed a method of extending Euler's series to a function that is valid for all values of $s$ (except $s = 1$, where it has a pole). He found a function that agrees with Euler's series for values of $s$ that are greater than 1, but also gives a finite value for all other values of $s$, including complex values. This analytic continuation of Euler's series is now known as the Riemann zeta-function $\zeta(s)$.

Riemann showed that this continuous function of a complex variable had deep connections to prime numbers, which are not only real numbers, but also discrete. In particular, he found that the way the primes are distributed along the number line is related to the values of $s$ for which his zeta-function is zero. He also conjectured for which values of $s$ this happens, but he could not prove it — this is the famous Riemann hypothesis, one of the most important open problems in mathematics.

Whilst the Riemann zeta-function itself is now reasonably well understood, its L-function relatives are not. L-functions are analytic continuations of the more general Dirichlet series
$$\sum_{n=1}^{\infty}\frac{a_n}{n^s},$$
and they are required to satisfy functional equations. Functional equations shed light on the properties of those functions that satisfy them, and for L-functions $F(s)$ the functional equation relates the completed function
$$\Lambda(s) = N^{s/2}\,\pi^{-ds/2}\prod_{j=1}^{d}\Gamma\!\left(\frac{s+\mu_j}{2}\right)F(s)$$
to $\Lambda(1-s)$ (up to complex conjugation and a constant of absolute value 1). Here $N$ is an integer called the level, $d$ is the degree, and the numbers $\mu_j$ are the Langland's parameters. If these numbers are transcendental (that is, non-algebraic, such as $\pi$), the L-function is called transcendental. The Riemann zeta-function is the L-function where the level is 1, the degree is 1 and the Langland's parameters are 0 — that is, a first degree algebraic L-function.
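As a small numerical aside (my own sketch, not part of the article), truncations of Euler's series and of his product over primes can be compared directly for s = 2, where both approach π²/6:

import math

def zeta_series(s, terms=100000):
    # truncated series 1 + 1/2^s + 1/3^s + ...
    return sum(1.0 / n**s for n in range(1, terms + 1))

def zeta_euler_product(s, prime_bound=10000):
    # truncated Euler product over the primes up to prime_bound
    sieve = [True] * (prime_bound + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(prime_bound**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    product = 1.0
    for p, is_prime in enumerate(sieve):
        if is_prime:
            product *= 1.0 / (1.0 - p**(-s))
    return product

print(zeta_series(2))         # ~1.6449
print(zeta_euler_product(2))  # ~1.6449
print(math.pi**2 / 6)         # 1.6449340668...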
The Bristol researchers showed the first example of a third degree transcendental L-function After the announcement, mathematicians in the audience were quickly able to determine that the first few zeros of this new L-function satisfy the generalised Riemann hypothesis. The generalised form of the Riemann hypothesis was announced in 1884 and asserts that all of the non-real zeros of an L-function should have their real part equal to 1/2. Members of the highly intelligent audience were able to compute this quickly on the spot. Mathematicians know only a few explicit examples of L-functions and the quest to find them has been at the heart of major research programmes. "This work was made possible by a combination of theoretical advances and the power of modern computers," said Booker, while Bian reported during his lecture that it took approximately 10,000 hours of computer time to produce his initial results. Harold Stark, who was the first to accurately calculate second degree transcendental L-functions 30 years ago, said: "It's a big advance. I thought we were years away from doing this. The geometry of what you have to do and the scale of the computation are orders of magnitude harder." Following on from this work, Michael Rubinstein from the University of Waterloo, and William Stein from the University of Washington, will direct an ambitious new initiative to chart all L-functions. "The techniques developed by Bian and Booker open up whole new possibilities for experimenting with these powerful and mysterious functions and are a key step towards making our group project a success," said Rubinstein. The project has plans for three graduate student schools, an undergraduate research course, and support for postdoctoral and graduate students. Dorian Goldfeld, Professor of Mathematics at Columbia University, likened the discovery to finding planets in remote solar systems. "We know they are out there, but the problem is to detect them and determine what they look like. It gives us a glimpse of new worlds." Further reading
{"url":"http://plus.maths.org/content/comment/reply/2626","timestamp":"2014-04-20T06:00:43Z","content_type":null,"content_length":"34230","record_id":"<urn:uuid:dbd5583d-e818-4185-8ab6-c75db2063920>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: -1 x -1 ? Date: Sep 18, 1999 1:38 PM Author: John Savard Subject: Re: -1 x -1 ? "Guillermo Phillips" <Guillermo.Phillips@marsman.demon.co.uk> wrote, in part: >Here's something I've always wondered (perhaps in my naivety). Why >should -1 x -1 = 1? >I appreciate that lots of nice things come from this, but what's the >fundamental reason for it? Well, negative numbers are kind of strange, almost like imaginary numbers. You have an empty box, and you can leave it empty, or you can put one pebble in it, or you can put two pebbles in it, and so on. But negative numbers can make sense for things like temperature and bank accounts. If you owe five dollars, that can be considered a way of having -5 If you owe three people five dollars, you have 3 * -5 dollars, or -15 If three people owe you five dollars, then you have an asset that is (hopefully, if they're honest) worth 15 dollars. So that's -3 * -5, the minus on the 3 standing for the fact that the debt is "the other way around". John Savard ( teneerf<- )
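To complement the debt picture with the standard algebraic reason (my addition, not part of the original post): the rule is forced by distributivity, since

0 = (-1)*0 = (-1)*(1 + (-1)) = (-1)*1 + (-1)*(-1) = -1 + (-1)*(-1),

and adding 1 to both sides gives (-1)*(-1) = 1.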
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=199964","timestamp":"2014-04-20T03:54:34Z","content_type":null,"content_length":"2149","record_id":"<urn:uuid:be1e928f-6a2e-4798-ac7f-08801a17d204>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Toric geometry and mirror symmetry
Abstract (Summary)
In this dissertation, we first study complete intersections of hypersurfaces in toric varieties. We introduce a quasismooth intersection in a complete simplicial toric variety, which generalizes a nonsingular complete intersection in a projective space. Using a Cayley trick, we show how to relate cohomology of a quasismooth intersection to cohomology of a quasismooth hypersurface in a higher dimensional toric variety. The cohomology of quasismooth intersections of ample hypersurfaces is completely described. Next, we study semiample hypersurfaces in toric varieties. While the geometry and cohomology of ample hypersurfaces in toric varieties have been studied, not much attention has been paid to semiample hypersurfaces defined by the sections of line bundles generated by global sections. It turns out that mirror symmetric hypersurfaces in the Batyrev mirror construction are semiample, but often not ample. We show that semiample hypersurfaces lead to a geometric construction which allows us to study the intersection theory and cohomology of the hypersurfaces. The toric Nakai criterion is proved for ample divisors on complete toric varieties. A similar result shows: the notions nef (numerically effective) and semiample are equivalent. Then we study the middle cohomology of a quasismooth hypersurface. There is a natural map from a graded (Jacobian) ring to the middle cohomology of the hypersurface such that the multiplicative structure on the ring is compatible with the topological cup product. Finally, we study the chiral ring of Calabi-Yau hypersurfaces in Batyrev's mirror construction, widely used in physics and mathematics. This ring is important in physics because it gives the correlation functions describing interactions between strings. From a mathematical point of view, this also produces enumerative information on mirror manifolds (e.g., the number of curves of a given degree and genus). We show that for a quasismooth hypersurface there is a ring homomorphism from its Jacobian ring to the chiral ring. In the Calabi-Yau case, we get an injective ring homomorphism from a quotient of the Jacobian ring into the chiral ring. We construct new elements in the chiral ring, which should correspond to non-polynomial deformations (moving the Calabi-Yau outside the toric variety). The main result is an explicit description of a subring of the chiral ring of semiample regular Calabi-Yau hypersurfaces. This contains all information about the correlation functions used by physicists. Computation of the chiral ring leads to a description of cohomology of the hypersurface. In particular, we describe the toric part of cohomology of a semiample regular hypersurface defined as the image of cohomology of the toric variety.
{"url":"http://www.openthesis.org/documents/Toric-geometry-mirror-symmetry-370157.html","timestamp":"2014-04-16T11:01:22Z","content_type":null,"content_length":"10554","record_id":"<urn:uuid:492e73af-4b34-4491-9be0-4096a6ef882d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you know if a chord is perpendicular to a segment in a circle?
It will (most likely) have the little box in the corner to show the right angle.
A segment that bisects the chord and passes through the center is perpendicular to the chord.
Oooo, okay thanks guys :)
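A short justification of the second answer above (my addition): let O be the centre, AB the chord, and M the point where the segment meets AB with AM = MB. Triangles OMA and OMB are congruent (OA = OB are radii, AM = MB, and OM is shared), so the two angles at M are equal; since they also add up to 180°, each is 90°, i.e. the segment is perpendicular to the chord.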
{"url":"http://openstudy.com/updates/4f96ed1be4b000ae9eccc43b","timestamp":"2014-04-20T18:43:28Z","content_type":null,"content_length":"32367","record_id":"<urn:uuid:ff390f18-abab-419f-9357-0f35474eb678>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Limit of an inverse Mellin transform

In Edwards' very nice book ``Riemann's zeta function'' the following integral comes up in section 1.14. Suppose $\beta = \sigma + i\tau$ with $\sigma > 0$. Suppose $x > 1$. Fix some real number $a > \sigma$ and let $$I(\beta) := \int_{a-i\infty}^{a+i\infty} \frac{\log(1-\frac s{\beta})}{s^2} x^s ds.$$ [Here, the logarithm is taken as $\log(s-\beta)-\log(-\beta)$ for $\tau \ne 0$ using the branch of logarithm defined away from the negative real axis, which is real on the positive real axis.] It's easy to see that this integral is absolutely convergent (note that $|x^s| = x^a$). Now Edwards claims that $\lim_{\tau \to \infty} I(\beta) = 0$ because ``it is not difficult to show, using the Lebesgue bounded convergence theorem ... that the limit of this integral is the integral of the limit, namely zero''. I didn't find a good way to bound $\log(1-\frac s{\beta})$ for fixed $s$. Is there a solution that avoids long and messy calculations?

complex-analysis analytic-number-theory

For $s$ fixed, and $\beta$ big, you can use the Taylor expansion for $\log{1+z}$ around $z=0$. – Matt Young Jun 8 '12 at 14:41
Sure, but for dominated convergence I would have to bound the absolute value of the integrand by an (integrable) function that only depends on $s$ on a region $(a-i\infty,a+i\infty) \times (\sigma+i\tau_0,\sigma+i\infty)$, but the Taylor expansion isn't valid on all of this region. But maybe I misunderstand what Edwards or you are suggesting? – anon Jun 8 '12 at 15:50
You just said that you didn't find a good way to bound $\log(1-s/\beta)$ for fixed $s$. Now if you want to vary $s$ then it's another story. What I would do is get an asymptotic bound on $I(\beta)$ and let $\beta \rightarrow \infty$. In the region $|s| < |\beta|/2$ use the Taylor expansion, and for $|s| > |\beta|/2$ bound the log with absolute values and use the fact that the integral of $\log{y}/y^2$ from $Y$ to $\infty$ is $O(\log{Y}/Y)$. So maybe you can get a bound like $|I(\beta)| \leq \log(1 + |\beta|)/|\beta|$. – Matt Young Jun 8 '12 at 16:43
Thanks! That sounds good, I'll think about it tomorrow. And my question was about the limit of $I(\beta)$, not about the log. I didn't phrase it well, sorry. – anon Jun 8 '12 at 19:16
It's all clear now, thanks a lot! Edwards' suggestion was misleading. – anon Jun 9 '12 at 9:21
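A rough sketch of the estimate suggested in the comments (my own paraphrase, added for completeness; constants depend on $a$ and $\sigma$, and details are only indicated). Split the line of integration at $|s| = |\beta|/2$. For $|s| \le |\beta|/2$ one has $|s/\beta| \le 1/2$, so $|\log(1-s/\beta)| \le 2|s|/|\beta|$, and since $|x^s| = x^a$ this part contributes $O(x^a \log|\beta|/|\beta|)$. For $|s| > |\beta|/2$, note that $|1-s/\beta| = |s-\beta|/|\beta| \ge (a-\sigma)/|\beta|$ because $\operatorname{Re}\, s = a > \sigma$, so $|\log(1-s/\beta)| \ll 1 + \log|s| + \log|\beta|$; combined with $\int_Y^\infty \frac{\log y}{y^2}\,dy = \frac{1+\log Y}{Y}$ (with $Y \asymp |\beta|$) this part is also $O(x^a \log|\beta|/|\beta|)$. Hence $I(\beta) \ll x^a \log|\beta|/|\beta| \to 0$ as $\tau \to \infty$, in line with the bound $|I(\beta)| \lesssim \log(1+|\beta|)/|\beta|$ proposed above.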
{"url":"http://mathoverflow.net/questions/99095/limit-of-an-inverse-mellin-transform","timestamp":"2014-04-16T11:26:00Z","content_type":null,"content_length":"51314","record_id":"<urn:uuid:474c3001-df80-4f00-b337-344076cdbff0>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2001 [00423] [Date Index] [Thread Index] [Author Index] RE: Extracting units from a list of values • To: mathgroup at smc.vnet.net • Subject: [mg27423] RE: [mg27398] Extracting units from a list of values • From: "David Park" <djmp at earthlink.net> • Date: Sun, 25 Feb 2001 00:53:50 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com I have a package at my web site below called Miscellaneous`V4ExtendUnits`. One of the routines in the package is BaseSI which will convert any expression with units to the standard base SI units (Second, Meter, Kilogram, Ampere, Candela). Once this is done we can remove all numeric quantities with the following rule. unitsextract[expr_] := Module[{t = BaseSI[expr]}, t = t //. HoldPattern[a___*(b_ /; NumericQ[b] && FreeQ[b, Second | Meter | Kilogram | Ampere | Candela])* c___] :> a*c; If[NumericQ[t], 1, t]] Then it is easy to test. Here is one of your examples: unitsextract /@ {1, 2 Pi, 3 E Meter^2} SameQ @@ % {1, 1, Meter^2} Here is a more complicated example, where the list is expressed in different but compatible units. (If only NASA had done this with their Mars probe!) unitsextract /@ {2.*Meter/Second^2, Sin[3]*Meter/Second^2, 5*Pi*Feet/Minute^2, 3.*LightYear/Year^2} SameQ @@ % {Meter/Second^2, Meter/Second^2, Meter/Second^2, Meter/Second^2} David Park djmp at earthlink.net > -----Original Message----- > From: Thomas Anderson [mailto:tga at stanford.edu] To: mathgroup at smc.vnet.net > Sent: Friday, February 23, 2001 2:34 AM > To: mathgroup at smc.vnet.net > Subject: [mg27423] [mg27398] Extracting units from a list of values > As part of the package I'm working on, one of the functions takes > a list of measured values as an argument. For maximum flexibility, > I wish to accept either a list of dimensionless numbers or a list > of numbers with units, i.e. > {1, 2, 3} > or > {1 Meter, 2 Meter, 3 Meter} > for example. As part of the argument checking, I want to be able > to test whether the units are consistent: everything should either > be dimensionless or have the same units. > The problem boils down to: how can I separate the units from the > numeric part of the value? I've tried a few things, and so far my > best attempt has been > Replace[vals, (_?NumericQ unit_.) :> unit, 1] > where vals is the list of values. This works pretty well: > dimensionless numbers give a list of "units" of 1, and values like > "2 Meter" or "1 Elephant^2" give "Meter" and "Elephant^2" respectively. > This method doesn't work, however, with input containing more > complicated numerical values. For example, {1, 2 Pi, 3 E Meter^2} gets > transformed into {1, 2, 3 Meter^2}, whereas I want {1, 1, Meter^2} for > these values. I could apply N[] to the values before extracting the > units, but then "Meter^2" becomes "Meter^2.", which I don't want. > This isn't a huge problem, since I'm expecting the values to be > integers > or real numbers, but I want my code to be as bulletproof as possible. > Thanks in advance for any suggestions. > -Tom Anderson > tga at stanford.edu
{"url":"http://forums.wolfram.com/mathgroup/archive/2001/Feb/msg00423.html","timestamp":"2014-04-16T22:04:28Z","content_type":null,"content_length":"37318","record_id":"<urn:uuid:c10a2add-87e9-4137-83a3-dd60738f1cad>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Edgewater, NJ Algebra 2 Tutor Find an Edgewater, NJ Algebra 2 Tutor ...My brother played jazz drums, my sister, classical guitar. My dad was a Jazz bassist. I continue to grow as a pianist with a piano mentor who graduated from Julliard and did a stint at the Royal College of Music, London. 81 Subjects: including algebra 2, chemistry, English, Spanish ...I have also been an adjunct professor at the College of New Rochelle, Rosa Parks Campus. As for teaching style, I feel that the concept drives the skill. If you have the idea of what to do on a problem, you do not need to complete 10 similar problems. 26 Subjects: including algebra 2, calculus, physics, geometry Even though I struggled with math in the past, today I'm a mechanical engineering major in one of the best engineering programs nationwide. Overcoming my math struggles, gave me special abilities to tutor and help others to overcome theirs. Let me help you discover the math wiz we all have in us. 12 Subjects: including algebra 2, chemistry, French, physics ...I'm 23 years old and I recently graduated from FIU with a Bachelors of Arts in Psychology. I specialize in elementary math, algebra, geometry, trigonometry, and pre-calculus. I tutor drawing and painting as well. 11 Subjects: including algebra 2, geometry, trigonometry, elementary (k-6th) I love the feeling when the light-bulb goes on in my head and what seemed like gibberish, makes total sense. Even more exciting is watching that light-bulb go on when I am tutoring. I have experience tutoring students of all ages, (elementary through graduate school) in many subject areas - although my real passion is math. 18 Subjects: including algebra 2, chemistry, geometry, statistics
{"url":"http://www.purplemath.com/Edgewater_NJ_algebra_2_tutors.php","timestamp":"2014-04-18T16:21:48Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:3be43c68-d5d4-4f43-ac69-032eed75cd37>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve in two different ways (2) September 26th 2011, 11:05 PM #1 Solve in two different ways (2) Solve in two different ways. $\frac{dy}{dx}=\frac{\sqrt{y}-y}{\tan x}$ Could someone please check the finer details of my solution, such as signs of integration constants and modulus signs? $\frac{dy}{\sqrt{y}-y}=\frac{dx}{\tan x}$ $\frac{1}{2\sqrt{y}}\frac{2}{1-\sqrt{y}}dy=\cot xdx$ $\ln |\sin x|=\int \frac{1}{2\sqrt{y}}\frac{2}{1-\sqrt{y}}dy$ Let $u=\sqrt{y}$. $\int \frac{1}{2\sqrt{y}}\frac{2}{1-\sqrt{y}}dy=\int \frac{2}{1-u}du$ $=-2\ln |1-u|+C_1$ $=-2\ln |1-\sqrt{y}|+C_1$ $\ln |\sin x|=-2\ln |1-\sqrt{y}|+C_1$ $\ln |\sin x|(1-\sqrt{y})^2=C_1$ $|\sin x|(1-\sqrt{y})^2=e^{C_1}=C_2$, where $C_2$ is a positive constant. $(1-\sqrt{y})^2=C_2|\sin x|^{-1}$ $1-\sqrt{y}=\pm \sqrt{C_2}|\sin x|^{-1/2}$ $\sqrt{y}=1\mp \sqrt{C_2}|\sin x|^{-1/2}$ $\sqrt{y}=1+C|\sin x|^{-1/2}$, where $C=\mp \sqrt{C_2}$. $y(x)=1+C^2|\sin x|^{-1}+2C|\sin x|^{-1/2}$ $\frac{dy}{dx}+\frac{y}{\tan x}=\frac{\sqrt{y}}{\tan x}$ Let $u=y^{1/2}$. $2u\frac{du}{dx}+\frac{u^2}{\tan x}=\frac{u}{\tan x}$ $\frac{du}{dx}+\frac{u}{2\tan x}=\frac{1}{2\tan x}$ $\frac{du}{dx}+\frac{u\cot x}{2}=\frac{\cot x}{2}$ The integrating factor is $\exp\left(\int \frac{\cot x}{2}\right)=\exp\left(\frac{1}{2}\ln |\sin x|\right)=|\sin x|^{1/2}$. $|\sin x|^{1/2}\frac{du}{dx}+|\sin x|^{1/2}\frac{u\cot x}{2}=|\sin x|^{1/2}\frac{\cot x}{2}$ $\frac{d}{dx}(u|\sin x|^{1/2})=|\sin x|^{1/2}\frac{\cot x}{2}$ $u|\sin x|^{1/2}=|\sin x|^{1/2}+C$ $u=1+C|\sin x|^{-1/2}$ $y^{1/2}=1+C|\sin x|^{-1/2}$ $y(x)=1+C^2|\sin x|^{-1}+2C|\sin x|^{-1/2}$ Re: Solve in two different ways (2) The beauty of DE's is it's often much easier to check your own solution than to find the solution in the first place. Just plug your solution in to the DE and see if you get equality! You got the same answer both ways, which is a good sign. Re: Solve in two different ways (2) How would I differentiate $|\sin x|^{-1}$ and $|\sin x|^{-1/2}$. Also, are the modulus signs really necessary? Re: Solve in two different ways (2) Note that the signum function. You can use the chain rule multiple times to get the final result. Re: Solve in two different ways (2) Note that the signum function. You can use the chain rule multiple times to get the final result. $\frac{dy}{dx}=C^2*-1*|\sin x|^{-2}sgn(\sin x)\cos x$$+2C*-\frac{1}{2}|\sin x|^{-3/2}sgn(\sin x)\cos x$ $=-C^2|\sin x|^{-2}sgn(\sin x)\cos x-C|\sin x|^{-3/2}sgn(\sin x)\cos x$ $\frac{\sqrt{y}-y}{\tan x}=\frac{-C^2|\sin x|^{-1}-C|\sin x|^{-1/2}}{\tan x}$ $=\frac{(-C^2|\sin x|^{-1}-C|\sin x|^{-1/2})\cos x}{\sin x}$ $=\frac{(-C^2|\sin x|^{-2}-C|\sin x|^{-3/2})\cos x}{\sin x|\sin x|^{-1}}$ (Multiplying numerator and denominator by $|\sin x|^{-1}$.) $=\frac{(-C^2|\sin x|^{-2}-C|\sin x|^{-3/2})\cos x}{sgn(\sin x)}$ $=(-C^2|\sin x|^{-2}-C|\sin x|^{-3/2})\cos xsgn(\sin x)$ $=-C^2|\sin x|^{-2}sgn(\sin x)\cos x-C|\sin x|^{-3/2}sgn(\sin x)\cos x$ $\frac{dy}{dx}=\frac{\sqrt{y}-y}{\tan x}$ September 27th 2011, 07:34 AM #2 September 27th 2011, 07:50 AM #3 September 27th 2011, 07:53 AM #4 September 27th 2011, 09:40 AM #5
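A quick numerical sanity check of the final formula (my own addition, not part of the original post), on an interval where sin x > 0 so the absolute values can be dropped:

import numpy as np

C = 0.7                                  # arbitrary constant of integration
x = np.linspace(0.5, 2.5, 9)             # sin(x) > 0 throughout this interval

def y(x):
    # candidate solution y = (1 + C |sin x|^(-1/2))^2, with |sin x| = sin x here
    return (1 + C / np.sqrt(np.sin(x)))**2

h = 1e-6
lhs = (y(x + h) - y(x - h)) / (2 * h)    # dy/dx by central differences
rhs = (np.sqrt(y(x)) - y(x)) / np.tan(x)

print(np.max(np.abs(lhs - rhs)))         # essentially zero (finite-difference error only)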
{"url":"http://mathhelpforum.com/differential-equations/188962-solve-two-different-ways-2-a.html","timestamp":"2014-04-18T04:08:10Z","content_type":null,"content_length":"57794","record_id":"<urn:uuid:1aebe5d5-3261-4290-a5d6-85b3bcfe0023>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Unifying Theories of Programming Results 11 - 20 of 122 - In ISMM 08 , 2008 "... Embedded systems are becoming more widely used but these systems are often resource constrained. Programming models for these systems should take into formal consideration resources such as stack and heap. In this paper, we show how memory resource bounds can be inferred for assembly-level programs. ..." Cited by 19 (1 self) Add to MetaCart Embedded systems are becoming more widely used but these systems are often resource constrained. Programming models for these systems should take into formal consideration resources such as stack and heap. In this paper, we show how memory resource bounds can be inferred for assembly-level programs. Our inference process captures the memory needs of each method in terms of the symbolic values of its parameters. For better precision, we infer path-sensitive information through a novel guarded expression format. Our current proposal relies on a Presburger solver to capture memory requirements symbolically, and to perform fixpoint analysis for loops and recursion. Apart from safety in memory adequacy, our proposal can provide estimate on memory costs for embedded devices and improve performance via fewer runtime checks against memory bound. 1. , 2003 "... In this article we introduce a comprehensive set of algebraic laws for rool, a language similar to sequential Java but with a copy semantics. We present a few laws of commands, but focus on the object-oriented features of the language. We show that this set of laws is complete in the sense that ..." Cited by 16 (3 self) Add to MetaCart In this article we introduce a comprehensive set of algebraic laws for rool, a language similar to sequential Java but with a copy semantics. We present a few laws of commands, but focus on the object-oriented features of the language. We show that this set of laws is complete in the sense that it is sufficient to reduce an arbitrary rool program to a normal form expressed in a restricted subset of the rool operators. We also , 2001 "... Refinement is reviewed in a partial correctness framework, highlighting in particular the distinction between its use as a specification constructor at a high level, and its use as an implementation mechanism at a low level. Some of its shortcomings as specification constructor at high levels of ..." Cited by 16 (13 self) Add to MetaCart Refinement is reviewed in a partial correctness framework, highlighting in particular the distinction between its use as a specification constructor at a high level, and its use as an implementation mechanism at a low level. Some of its shortcomings as specification constructor at high levels of abstraction are pointed out, and these are used to motivate the adoption of retrenchment for certain high level development steps. Basic properties of retrenchment are described, including a justification of the operation PO, simple examples, simulation properties, and compositionality for both the basic retrenchment notion and enriched versions. The issue of framing retrenchment in the wide variety of correctness notions for refinement calculi that exist in the literature is tackled, culminating in guidelines on how to `brew your own retrenchment theory'. Two short case studies are presented. One is a simple digital redesign control theory problem, the other is a radiotherapy - In Proceedings of ICECCS-2006 , 2006 "... 
This paper presents efficient mechanisms for the direct implementation of formal models of highly concurrent dynamic systems. The formalisms captured are CSP (for concurrency) and B (for state transformation). The technology is driving the development of occam-π, a multiprocessing language based on ..." Cited by 16 (9 self) Add to MetaCart This paper presents efficient mechanisms for the direct implementation of formal models of highly concurrent dynamic systems. The formalisms captured are CSP (for concurrency) and B (for state transformation). The technology is driving the development of occam-π, a multiprocessing language based on a careful combination of ideas from Hoare’s CSP (giving compositional semantics, refinement and safety/liveness analysis) and Milner’s π-calculus (giving dynamic network construction and mobility). We have been experimenting with systems developing as layered networks of self-organising neighbourhood-aware communicating processes, with no need for advanced planning or centralised control. The work reported is part of our TUNA (‘Theory Underpinning Nanotech Assemblers’) project, a partnership with colleagues from the Universities of York, Surrey and Kent, which is investigating formal approaches to the capture of safe emergent behaviour in highly complex systems. A particular study modelling artificial blood platelets is described. A novel contribution reported here is a fast resolution of (CSP external) choice between multiway process synchronisations from which any participant may withdraw its offer at any time. The software technology scales to millions of processes per processor and distributes over common multiprocessor clusters. 1. - Department of Computer Science, University of Utrecht , 1999 "... The distinctive merit of the declarative reading of logic programs is the validity ofallthelaws of reasoning supplied by the predicate calculus with equality. Surprisingly many of these laws are still valid for the procedural reading � they can therefore be used safely for algebraic manipulation, pr ..." Cited by 16 (4 self) Add to MetaCart The distinctive merit of the declarative reading of logic programs is the validity ofallthelaws of reasoning supplied by the predicate calculus with equality. Surprisingly many of these laws are still valid for the procedural reading � they can therefore be used safely for algebraic manipulation, program transformation and optimisation of executable logic programs. This paper lists a number of common laws, and proves their validity for the standard (depth- rst search) procedural reading of Prolog. They also hold for alternative search strategies, e.g. breadth- rst search. Our proofs of the laws are based on the standard algebra of functional programming, after the strategies have been given a rather simple implementation in Haskell. 1 , 1997 "... Method integration is the procedure of combining multiple methods to form a new technique. In the context of software engineering, this can involve combining specification techniques, rules and guidelines for design and implementation, and sequences of steps for managing an entire development. In cu ..." Cited by 15 (9 self) Add to MetaCart Method integration is the procedure of combining multiple methods to form a new technique. In the context of software engineering, this can involve combining specification techniques, rules and guidelines for design and implementation, and sequences of steps for managing an entire development. 
In current practice, method integration is often an ad-hoc process, where links between methods are defined on a case-by-case basis. In this dissertation, we examine an approach to formal method integration based on so-called heterogeneous notations: compositions of compatible notations. We set up a basis that can be used to formally define the meaning of compositions of formal and semiformal notations. Then, we examine how this basis can be used in combining methods used for system specification, design, and implementation. We demonst... , 1999 "... Timed RAISE Specification Language(TRSL) is an extension of RAISE Specification Language by adding time constructors for specifying real-time application. Duration Calculus(DC) is a real-time interval logic which can be used to specify and reason about timing and logical constraints on duration prop ..." Cited by 15 (5 self) Add to MetaCart Timed RAISE Specification Language(TRSL) is an extension of RAISE Specification Language by adding time constructors for specifying real-time application. Duration Calculus(DC) is a real-time interval logic which can be used to specify and reason about timing and logical constraints on duration properties of Boolean states in a dynamic system. This paper gives a denotational semantics to a subset of TRSL expressions, using Duration Calculus extended with super-dense chop modality and notations to capture time point properties of piecewise continuous states of arbitrary types. Using this semantics, we present a proof rule for verifying TRSL iterative expressions and implement the rule to prove the satisfaction by a sample TRSL specification of its real-time requirements. Li Li is a Fellow of UNU/IIST, on leave of absence from University of Science and Technology of China, where he is a Ph.D student. E-mail: ll@iist.unu.edu. He Jifeng is a Senior Research Fellow of UNU/ IIST, on leave o... - Science of Computer Programming , 1995 "... The two models presented in this paper provide two different semantics for an extension of Dijkstra's language of guarded commands. The extended language has an additional operator, namely probabilistic choice, which makes it possible to express randomised algorithms. An earlier model by Claire Jone ..." Cited by 15 (0 self) Add to MetaCart The two models presented in this paper provide two different semantics for an extension of Dijkstra's language of guarded commands. The extended language has an additional operator, namely probabilistic choice, which makes it possible to express randomised algorithms. An earlier model by Claire Jones included probabilistic choice but not non-determinism, which meant that it could not be used for the development of algorithms from specifications. Our second model is built on top of Claire Jones' model, using a general method of extending a probabilistic cpo to one which also contains non-determinism. The first model was constructed from scratch, as it were, guided only by the desire for certain algebraic properties of the language constructs, which we found lacking in the second model. We compare and contrast the properties of the two models both by giving examples and by constructing mappings between them and the non-probabilistic model. On the basis of this comparison we argue that, i... , 2001 "... An intermediate-level specification notation is presented for use with BSP-style programming. It is achieved by extending pre-post semantics to reveal state at points of global synchronisation. 
That enables us to integrate the pre-post, finite and reactive-process styles of specification in BSP, as ..." Cited by 14 (10 self) Add to MetaCart An intermediate-level specification notation is presented for use with BSP-style programming. It is achieved by extending pre-post semantics to reveal state at points of global synchronisation. That enables us to integrate the pre-post, finite and reactive-process styles of specification in BSP, as shown by our treatment of the dining philosophers. The language is provided with a complete set of laws and has been formulated to benefit from a simple predicative semantics. - In Pro. APLAS’2004, LNCS 3302 , 2004 "... Abstract. This paper develops a mathematical characterisation of object-oriented concepts by defining an observation-oriented semantics for an object-oriented language (OOL) with a rich variety of features including subtypes, visibility, inheritance, dynamic binding and polymorphism. The language is ..." Cited by 14 (7 self) Add to MetaCart Abstract. This paper develops a mathematical characterisation of object-oriented concepts by defining an observation-oriented semantics for an object-oriented language (OOL) with a rich variety of features including subtypes, visibility, inheritance, dynamic binding and polymorphism. The language is expressive enough for the specification of object-oriented designs and programs. We also propose a calculus based on this model to support both structural and behavioural refinement of object-oriented designs. We take the approach of the development of the design calculus based on the standard predicate logic in Hoare and He’s Unifying Theories of Programming (UTP). We also consider object reference in terms of object identity as values and mutually dependent methods.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=16060&sort=cite&start=10","timestamp":"2014-04-21T03:46:09Z","content_type":null,"content_length":"38298","record_id":"<urn:uuid:8707fdf8-fbd3-4136-a47e-9e03b31bf85d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Newport, RI
Find a Newport, RI Calculus Tutor
...I find that helping students succeed in Geometry helps them to succeed in further areas such as SAT tests, sciences like Physics and Chemistry, and higher maths like Precalculus and Calculus. To acquire Geometry skills, I work with student to think on paper! This helps them envision what the solutions look like with the process to get to the solution.
11 Subjects: including calculus, physics, geometry, algebra 1
...I have been working with Fortran code for over 40 years. I have taken both undergraduate and graduate courses in Linear Algebra, and I use it in my work as an Applied Mathematician. I work with Matlab daily in my work.
36 Subjects: including calculus, English, GRE, reading
Hello!! My name is Phil and I would be thrilled to be the tutor for you or for your student. As a father and a former Calculus instructor at the United States Air Force Academy, I am confident that we will find the tools necessary to succeed. I would like ensure that you or your students have that extra time and attention they need to succeed.
9 Subjects: including calculus, physics, geometry, algebra 1
...The advanced classes included computer graphics and compiler design. I have my own business where I develop programs to solve engineering analysis problems. I have two masters degrees in engineering from the University of Michigan, one of which is in Computer Engineering.
20 Subjects: including calculus, physics, statistics, geometry
...The MTEL tests all the topics Praxis I Math does, as well a few other topics (e.g., Calculus). I did well on all the subareas, including the open response, where I received the highest score, "Thorough." (The open response section involves answering two multipart problems. The section score eva...
45 Subjects: including calculus, Spanish, chemistry, English
{"url":"http://www.purplemath.com/newport_ri_calculus_tutors.php","timestamp":"2014-04-17T19:51:37Z","content_type":null,"content_length":"24095","record_id":"<urn:uuid:54d34897-c7d5-47f0-b3df-ab0627cbc9e0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple linear regression
August 5, 2009
By Todos Logos

We use regression analysis when, from a data sample, we want to derive a statistical model that predicts the values of one variable (Y, the dependent variable) from the values of another variable (X, the independent variable). Linear regression, which is the simplest and most frequent relationship between two quantitative variables, can be positive (when X increases, Y increases too) or negative (when X increases, Y decreases): this is indicated by the sign of the coefficient b.

To build the line that describes the distribution of points, we might refer to different principles. The most common is the least squares method (or Model I), and this is the method used by the statistical software R.

Suppose you want to obtain a linear relationship between weight (kg) and height (cm) of 10 subjects.
Height: 175, 168, 170, 171, 169, 165, 165, 160, 180, 186
Weight: 80, 68, 72, 75, 70, 65, 62, 60, 85, 90

The first problem is to decide which is the dependent variable Y and which is the independent variable X. In general, the independent variable is not affected by an error during the measurement (or only by random error), while the dependent variable is affected by error. In our case we can assume that weight is the independent variable (X) and height the dependent variable (Y).

So our problem is to find a linear relationship (formula) that allows us to calculate the height, knowing the weight of an individual. The simplest formula is that of a straight line of the type Y = a + bX. The simple regression line in R is calculated as follows:

height = c(175, 168, 170, 171, 169, 165, 165, 160, 180, 186)
weight = c(80, 68, 72, 75, 70, 65, 62, 60, 85, 90)
model = lm(formula = height ~ weight, x=TRUE, y=TRUE)
model

Call:
lm(formula = height ~ weight, x = TRUE, y = TRUE)

Coefficients:
(Intercept) weight
115.2002 0.7662

The correct syntax of the formula declared in lm is: Y ~ X, so you declare first the dependent variable and then the independent variable (or variables). The output of the function is represented by the two parameters a and b: a = 115.2002 (the intercept), b = 0.7662 (the slope).

The simple calculation of the line is not enough. We must assess the significance of the line, i.e. whether the slope b differs from zero significantly. This may be done with a Student's t-test or with a Fisher's F-test. In R both can be retrieved very quickly, with the function summary(). Here's how:

model <- lm(height ~ weight)
summary(model)

Call:
lm(formula = height ~ weight)

Residuals:
Min 1Q Median 3Q Max
-1.6622 -0.9683 -0.1622 0.5679 2.2979

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 115.20021 3.48450 33.06 7.64e-10 ***
weight 0.76616 0.04754 16.12 2.21e-07 ***
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.405 on 8 degrees of freedom
Multiple R-squared: 0.9701, Adjusted R-squared: 0.9664
F-statistic: 259.7 on 1 and 8 DF, p-value: 2.206e-07

Here too we find the values of the parameters a and b. The Student's t-test on the slope has the value 16.12, the Student's t-test on the intercept has the value 33.06, and the value of the Fisher's F-test is 259.7 (the same value would be obtained by performing an ANOVA on the same data: anova(model)). The p-values of the t-tests and the F-test are less than 0.05, so the model we found is significant.

The Multiple R-squared is the coefficient of determination, a measure of how well the model fits the data: here R-squared = 0.9701, meaning that about 97% of the variation in height is explained by the regression on weight.
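For reference (not part of the original post), the coefficients that lm reports are the closed-form least squares estimates

b = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)²,    a = ȳ − b·x̄,

which for these data give a ≈ 115.2 and b ≈ 0.766, the values printed above.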
We can plot on a graph the data points and the regression line, in this way:

plot(weight, height)
abline(model)
{"url":"http://www.r-bloggers.com/simple-linear-regression-3/","timestamp":"2014-04-20T05:59:56Z","content_type":null,"content_length":"41112","record_id":"<urn:uuid:60e55c13-7b5a-4cda-9efb-1dbcd667f388>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Shannon, GA Prealgebra Tutor
Find a Shannon, GA Prealgebra Tutor
...I bring a strong passion for business and computers as well as unique math skills to help with making the time spent productive. My strength lies not just in knowledge of the subjects but in the process of how to integrate study habits with the use of internet technologies to make any challengin...
29 Subjects: including prealgebra, reading, writing, geometry
...I have been playing drums since I was 11 years old. From ages 12-17 I was selected for Georgia All State Band all 6 years for percussion. I was section leader of my high school drumline in grades 10, 11, and 12.
19 Subjects: including prealgebra, chemistry, physics, calculus
...I have a Bachelor's degree in Biology from Berry College. I have been a biology tutor at Berry for the past two years and would love to tutor other students in biology and other areas if needed. My experience tutoring students has allowed me to discover new methods to effectively teach students about the material and guide them to understand difficult concepts.
17 Subjects: including prealgebra, chemistry, reading, SAT math
...I have taught part time at Chattahoochee Community College and currently teach part time at Georgia Highlands College. I love mathematics but also appreciate the struggles that so many students have with this most important subject. I guarantee that I can make mathematics understandable to you or your child.
12 Subjects: including prealgebra, calculus, geometry, algebra 1
...I also have a variety of needlework hobbies, including crochet, needlepoint, cross-stitch, and embroidery. I believe that anyone can learn any subject. I take a very personalized approach to tutoring, and I make sure that my students understand.
42 Subjects: including prealgebra, English, reading, writing
{"url":"http://www.purplemath.com/Shannon_GA_Prealgebra_tutors.php","timestamp":"2014-04-21T11:10:49Z","content_type":null,"content_length":"23952","record_id":"<urn:uuid:dcb96403-ef8b-4738-a924-ba7b7c4a8793>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
The maths sense You don't need to count to see that five apples are more than three oranges: you can tell just by looking. That's because we, as well as many animal species, are born with a sense for number that allows us to judge amounts even without being able to count. But is that inborn number sense related to the mathematical abilities people develop later on, or is learnt maths different from innate In one part of the experiments children were shown two different arrays and asked to choose which one had more dots without counting them. Image courtesy of Duke University. New research from the Duke Institute for Brain Sciences suggests that it's the former. "When children are acquiring the symbolic system for representing numbers and learning about maths in school, they're tapping into this primitive number sense," said Elizabeth Brannon, a professor of psychology and neuroscience, who led the study. "It's the conceptual building block upon which mathematical ability is built." Brannon and graduate student Ariel Starr worked with 48 six-months-old babies, who they sat in front of two screens. One screen always showed them the same number of dots (eg 8) which changed their size and position. The other screen switched between two numerical values (eg 8 and 16 dots) which also changed size and position. Most babies are interested in things that change, so if a baby looked longer at the screen on which the numerical values were changing, the researchers assumed that it had spotted the difference. Brannon and Starr then tested the same children three years later. Again they were asked to judge amounts of dots without counting them. But in addition they were given a standardised maths test suitable for their age, an IQ test and a verbal task to find out the largest number word they could understand. "We found that infants with higher preference scores for looking at the numerically changing screen had better primitive number sense three years later compared to those infants with lower scores," Starr said. "Likewise, children with higher scores in infancy performed better on standardised maths tests." This suggests that we do build on our inborn number sense when we come to learn symbolic maths using numerals and symbols. But there's no reason to despair (or excuse for laziness) if you feel you were short-changed at birth. Education is still the most important factor in developing maths ability. "We can't measure a baby's number sense ability at 6 months and know how they'll do on their SATs," Brannon added. "In fact our infant task only explains a small percentage of the variance in young children's maths performance. But our findings suggest that there is cognitive overlap between primitive number sense and symbolic math. These are fundamental building blocks." Understanding how babies and young children conceptualise and understand number can lead to new mathematics education strategies, according to Brannon. In particular, this knowledge can be used to help young children who have trouble learning mathematical symbols and basic methodologies. The new results, published in the Proceedings of the National Academy of Sciences, confirm previous research into the link between our inborn number sense and later mathematical ability.
{"url":"http://plus.maths.org/content/maths-sense","timestamp":"2014-04-19T09:24:41Z","content_type":null,"content_length":"26061","record_id":"<urn:uuid:4bedc04e-1546-475d-81de-7bc3db0fef15>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help! Gerry is 4 years younger than Tom. If Tom is t years old, how old is Gerry? I know how to write the expression, but I don't know how to solve. Please help! Thanks!
{"url":"http://openstudy.com/updates/50e9c4bee4b07cd2b6482ae2","timestamp":"2014-04-17T09:55:34Z","content_type":null,"content_length":"72834","record_id":"<urn:uuid:e7efcb1d-c6f4-4657-808b-9f753ec4c8e9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
arranging a fraction to simplify it

1. The problem statement, all variables and given/known data
[itex]\lim_{x\rightarrow\infty} \sqrt{x^2+x}-x[/itex]
What's the deal with this one? You don't do anything with this one. I'm taking the limit of a rational function as x approaches ∞. However, within the problem, there is an algebraic arrangement I'm having trouble with. How would I get from the first fraction to the second fraction? Applying L'hopitals to: [itex]\frac{x}{\sqrt{x^2+x}+x}[/itex] I get: [itex]\frac{1}{\frac{x+1}{x+\sqrt{x}}+1}[/itex]

This doesn't follow from what you started with. When you differentiated the square root in the denominator, I think this is what you did:
$$d/dx \sqrt{x^2 + x} = \frac{1}{2\sqrt{x^2 + x}}\cdot (2x + 1)$$
So far, so good, but things start to fall apart after this.
$$= \frac{x + 1}{x + \sqrt{x}}$$
1 - minor mistake -- (1/2)(2x + 1) = x + 1/2, not x + 1
2 - serious mistake -- √(x² + x) ≠ x + √x !! This mistake indicates that you don't understand the properties of radicals. There is NO property that says √(a + b) = √a + √b. I don't know for a fact that this was your thinking, but it sure seems like to me. In any case, it's much simpler to not use L'Hopital's Rule at all. The expression in the denominator of the original limit is ## \sqrt{x^2 + x} + x##. Just factor x out of these two terms, and after some simplification, you can take the limit directly.

But don't know how to get to: [itex]\frac{1}{\sqrt{1+\frac{1}{x}}+1}[/itex] ???
And the final solution simplified: 1/2
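Spelling out the algebra behind the suggestion above (added for clarity; the steps assume x > 0, which is fine as x → ∞):

$$\sqrt{x^2+x}-x=\frac{\left(\sqrt{x^2+x}-x\right)\left(\sqrt{x^2+x}+x\right)}{\sqrt{x^2+x}+x}=\frac{x}{\sqrt{x^2+x}+x}=\frac{x}{x\left(\sqrt{1+\frac{1}{x}}+1\right)}=\frac{1}{\sqrt{1+\frac{1}{x}}+1}\rightarrow\frac{1}{2}\quad\text{as }x\rightarrow\infty.$$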
{"url":"http://www.physicsforums.com/showthread.php?p=3817907","timestamp":"2014-04-16T19:13:49Z","content_type":null,"content_length":"38520","record_id":"<urn:uuid:904fec40-0a1c-4534-8b46-7cfe6d38b016>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
vector fields

Given a vector field V(x,y,z) = (xi + yj + zk)/r^3 where r = (x^2+y^2+z^2)^(1/2) is the distance of the point P from the origin. What are the x, y and z components of V? And the partial derivatives of each component?

The x component is the coefficient of the vector i, the y component is the coefficient of the vector j, and the z component is the coefficient of the vector k (at least in standard usage).

thanks.. such an easy solution!
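If it helps, the same computation can be done symbolically in Python with sympy; the sketch below (not part of the original thread) writes out the three components x/r^3, y/r^3, z/r^3 and one of the partial derivatives.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

Vx, Vy, Vz = x/r**3, y/r**3, z/r**3     # the i, j, k components of V

dVx_dx = sp.simplify(sp.diff(Vx, x))
print(dVx_dx)    # equals 1/r**3 - 3*x**2/r**5 (sympy prints it over a common denominator)

# Sanity check: away from the origin this inverse-square field has zero divergence.
print(sp.simplify(sp.diff(Vx, x) + sp.diff(Vy, y) + sp.diff(Vz, z)))   # 0
```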
{"url":"http://mathhelpforum.com/calculus/139057-vector-fields.html","timestamp":"2014-04-18T17:09:46Z","content_type":null,"content_length":"32861","record_id":"<urn:uuid:55c2f53a-1d51-4116-a928-66713bb2adc3>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Are there results from gauge theory known or conjectured to distinguish smooth from PL manifolds?

My question begins with a caveat: I sometimes spend time with topologists, but do not consider myself to be one. In particular, my apologies for any errors in what I say below — corrections are welcome.

My impression is that manifold topologists like to consider three main categories of (finite-dimensional, paracompact, Hausdorff) manifolds, which I will call $\mathcal C^0$, $\mathrm{PL}$, and $\mathcal C^\infty$, corresponding to manifolds whose atlases have transition functions that are, respectively, homeomorphisms, piecewise-affine transformations, and diffeomorphisms. The latter two categories obviously map faithfully into the first, and a theorem of Whitehead says that every $\mathcal C^\infty$ manifold admits a unique PL structure. These categories are not equivalent in any reasonable sense. The generalized Poincare conjecture is true in $\mathcal C^0$, true (except possibly in dimension $4$) in $\mathrm{PL}$, and false in many dimensions including $7$ in $\mathcal C^\infty$. $\mathcal C^0$ is the realm of surgery and h-cobordism.

In $\mathcal C^\infty$, and in particular in $4$ dimensions, there is a powerful tool called "gauge theory", which provides the main technology used to prove examples of homeomorphic but not diffeomorphic manifolds. By definition, gauge theory is that part of PDE that studies connections on principal $G$-bundles for Lie groups $G$. The most important gauge theories for distinguishing between the $\mathcal C^\infty$ and $\mathcal C^0$ worlds are Donaldson Theory (which studies the moduli space of $\mathrm{SU}(2)$ connections with self-dual curvature) and the conjecturally equivalent Seiberg–Witten Theory (which studies an abelian gauge field along with a matter field, and which I understand less well). Another important gauge theory that I understand much better is (three-dimensional) Chern–Simons Theory, whose PDE picks out the moduli space of flat $G$ connections; for example, counting with sign the flat $\mathrm{SU}(2)$ connections on a 3-manifold is supposed to correspond to the Casson invariant.

My impression, furthermore, has been that the categories $\mathcal C^\infty$ and $\mathrm{PL}$ are in fact quite close. There are more objects in the latter, certainly, but in fact many of the results separating $\mathcal C^0$ from $\mathcal C^\infty$ in fact separate $\mathcal C^0$ from $\mathrm{PL}$. A side version of my question is to understand in better detail the distance between $\mathcal C^\infty$ and $\mathrm{PL}$. But my main question is whether the technology of gauge theory (possibly broadly defined) can be used to separate them. A priori, the whole theory of PDE is based on smooth structures, so it would not be unreasonable, but I am not aware of examples.

Are there gauge-theoretic invariants of smooth manifolds that distinguish nondiffeomorphic but $\mathrm{PL}$-isomorphic manifolds?

Of course, a simple answer would be something like "For any $X,Y \in \mathcal C^\infty$, the inclusion $\mathcal C^\infty \hookrightarrow \mathrm{PL}$ induces a homotopy equivalence of mapping spaces $\mathcal C^\infty(X,Y) \to \mathrm{PL}(X,Y)$."
This would explain the impression I have that $\mathcal C^\infty$ and $\mathrm{PL}$ are close — if it is true, it is not something I recall having been told.

First, it follows from the work of Kirby and Siebenmann that in dimensions $\le 6$ the PL and DIFF categories are equivalent. In particular, if you are working in dimension 4 (where gauge-theoretic invariants are mostly used) then the answer to your question is negative. Starting in dimension 7, there are smooth manifolds which are PL-equivalent but not diffeomorphic. Milnor's exotic 7-spheres are the first examples of such manifolds. The smooth structures on Milnor's spheres are distinguished via index and 1st Pontryagin class. Whether you consider such invariants gauge-theoretic or not depends on how broadly you interpret gauge theory. For instance, characteristic classes of smooth manifolds can be defined via differential forms, i.e., Chern-Weil theory, or as indices of some elliptic operators. Does this qualify as gauge theory? (You can lift forms from, say, the tangent bundle to the principal bundle, i.e., the frame bundle, if you so desire.) The point of usage of gauge theory in dimension 4 is that the "traditional" topological invariants turned out to be insufficient, so one considers spaces of connections satisfying some differential equation (like self-duality) and uses such spaces to derive some smooth invariants of 4-manifolds. As far as I know, nobody used this viewpoint in higher (i.e., at least 7) dimensions, since there was no need for it.

In Foundational Essays on Topological Manifolds, Smoothings, and Triangulations, Kirby and Siebenmann show homotopy equivalence of the Kan complexes $\mathrm{Man}^m_{sm}$ and $\mathrm{Man}^m_{PL}$ for $m\leq 3$, which can be thought of as classifying spaces of smooth and of PL $m$-manifolds correspondingly (an earlier version of this answer stated this for all $m$, which is false). Jacob Lurie wrote a very nice set of lecture notes on just this topic. Perhaps this is the statement you want? Restricted to one-skeletons, it gives homotopy equivalence of mapping spaces for low dimensions, but the full statement tells you much more, and it provides a precise and intuitively satisfying sense in which smooth and PL categories are `close' for low dimensions. In higher dimensions, PL manifolds can be smoothed in dimension $\leq 7$, essentially uniquely in dimension $\leq 6$.

The definitions of the `classifying spaces' are roughly as follows. For a finite dimensional vector space $V$ and for a smooth $m$-manifold $M$, the simplicial set $\mathrm{Emb}^m_{sm}(M,V)$ is defined to have $n$-simplices as embeddings $M\times\Delta^n\rightarrow V\times\Delta^n$ which commute with the projection to $\Delta^n$. Now let $\mathrm{Sub}^m_{sm}(V)$ denote the simplicial set of submanifolds of $V$, whose $n$-simplices are given as smooth submanifolds $X\subseteq V\times\Delta^n$ such that the projection $X\rightarrow \Delta^n$ is a smooth fibre bundle of relative dimension $m$. If $V$ is infinite dimensional, we define $\mathrm{Sub}_{sm}(V)$ as the direct limit of $\mathrm{Sub}_{sm}(V_0)$ as $V_0$ ranges over all finite dimensional subspaces of $V$. The parallel definitions in the PL case define $\mathrm{Sub}_{PL}(V)$.
For a fixed infinite dimensional vector space $V$, the Kan complexes $\mathrm{Man}^m_{sm}$ and $\mathrm{Man}^m_{PL}$ are defined as $\mathrm{Sub}^m_{sm}(V)$ and $\mathrm{Sub}^m_{PL}(V)$ correspondingly.

(I fixed a formatting conflict.) – Theo Johnson-Freyd Mar 17 '13 at 17:37

Kirby and Siebenmann show no such thing, as the statement claimed is false. The $E_8$ PL-manifold in any dimension $4k \geq 8$ (obtained by forming the $E_8$-plumbing and coning off the boundary, which is a PL sphere) is not smoothable, so any reasonable map $Man_{sm}^m \to Man_{PL}^m$ is not even surjective on path-components, never mind a homotopy equivalence. – Oscar Randal-Williams Mar 17 '13 at 21:31

@Oscar: corrected to make the statement only for $m\leq 3$. Sorry about that. – Daniel Moskovich Mar 18 '13 at 1:22
{"url":"http://mathoverflow.net/questions/124753/are-there-results-from-gauge-theory-known-or-conjectured-to-distinguish-smooth-f","timestamp":"2014-04-18T00:43:02Z","content_type":null,"content_length":"64936","record_id":"<urn:uuid:2eae2b20-a41d-484c-b52a-7c570a494d7f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
National Park Calculus Tutor ...I have been trained to teach Geometry according to the Common Core Standards. I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. I have been trained to teach Trigonometry according to the Common Core Standards. 11 Subjects: including calculus, geometry, algebra 1, algebra 2 ...Today, even though my career has taken me away from tutoring full-time, I continue to tutor math because it is one of my favorite subjects, and because helping people is one of the most rewarding ways to spend my time. It is a really meaningful experience for me when I can help someone get to th... 12 Subjects: including calculus, writing, geometry, algebra 1 ...I have a bachelor's degree in mathematics. I have taken courses dealing with linear algebra, including the courses Linear Algebra and Linear Programming (which is basically an applied linear algebra course). I have a master's in education, in particular, mathematics education. As part of that, I was required to take (and pass) the Praxis II: Content Knowledge. 16 Subjects: including calculus, English, physics, geometry ...My experiences have given me a very strong understanding of the concepts and applications of linear algebra. I have taken multiple Logic courses in college. I got an A in my introduction to Logic course, as well as my higher level logic courses. 27 Subjects: including calculus, chemistry, economics, elementary math ...I have experience with the uses of linear algebra and matrices. I have experience dealing with row reduction, multiplication of matrices. I have a bachelor's degree in mathematics and took a symbolic logic course in college passing with an A. 13 Subjects: including calculus, geometry, GRE, algebra 1
{"url":"http://www.purplemath.com/National_Park_Calculus_tutors.php","timestamp":"2014-04-21T12:41:00Z","content_type":null,"content_length":"24116","record_id":"<urn:uuid:d17dec15-d0be-4925-b2ab-a7217c8327e4>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Deriving bisimulation congruences: 2-categories vs. precategories Results 1 - 10 of 13 , 2005 "... The theory of reactive systems, introduced by Leifer and Milner and previously extended by the authors, allows the derivation of well-behaved labelled transition systems (LTS) for semantic models with an underlying reduction semantics. The derivation procedure requires the presence of certain colimi ..." Cited by 36 (2 self) Add to MetaCart The theory of reactive systems, introduced by Leifer and Milner and previously extended by the authors, allows the derivation of well-behaved labelled transition systems (LTS) for semantic models with an underlying reduction semantics. The derivation procedure requires the presence of certain colimits (or, more usually and generally, bicolimits) which need to be constructed separately within each model. In this paper, we o#er a general construction of such bicolimits in a class of bicategories of cospans. The construction sheds light on as well as extends Ehrig and Konig's rewriting via borrowed contexts and opens the way to a unified treatment of several applications. , 2004 "... A simple example is given of the use of bigraphical reactive systems (BRSs). It provides a behavioural semantics for condition-event Petri nets whose interfaces are named condition nodes, using a simple form of BRS equipped with a labelled transition system and its associated bisimilarity equivalenc ..." Cited by 33 (3 self) Add to MetaCart A simple example is given of the use of bigraphical reactive systems (BRSs). It provides a behavioural semantics for condition-event Petri nets whose interfaces are named condition nodes, using a simple form of BRS equipped with a labelled transition system and its associated bisimilarity equivalence. Both of the latter are derived from the standard net firing rules by a uniform technique in bigraphs, which also ensures that the bisimilarity is a congruence. Furthermore, this bisimilarity is shown to coincide with one induced by a natural notion of experiment on condition-event nets, defined independently of bigraphs. The paper , 2004 "... Adhesive high-level replacement (HLR) categories and systems are introduced as a new categorical framework for graph transformation in a broad sense, which combines the well-known concept of HLR systems with the new concept of adhesive categories introduced by Lack and Sobociński. In this paper we s ..." Cited by 25 (6 self) Add to MetaCart Adhesive high-level replacement (HLR) categories and systems are introduced as a new categorical framework for graph transformation in a broad sense, which combines the well-known concept of HLR systems with the new concept of adhesive categories introduced by Lack and Sobociński. In this paper we show that most of the HLR properties, which had been introduced ad hoc to generalize some basic results from the category of graphs to high-level structures, are valid already in adhesive HLR categories. As a main new result in a categorical framework we show the Critical Pair Lemma for local confluence of transformations. Moreover we present a new version of embeddings and extensions for transformations in our framework of adhesive HLR systems. - PNGT’04 , 2004 "... We introduce a way of viewing Petri nets as open systems. This is done by considering a bicategory of cospans over a category of p/t nets and embeddings. We derive a labelled transition system (LTS) semantics for such nets using GIPOs and characterise the resulting congruence. Technically, our resul ..." 
Cited by 23 (10 self) Add to MetaCart We introduce a way of viewing Petri nets as open systems. This is done by considering a bicategory of cospans over a category of p/t nets and embeddings. We derive a labelled transition system (LTS) semantics for such nets using GIPOs and characterise the resulting congruence. Technically, our results are similar to the recent work by Milner on applying the theory of bigraphs to Petri Nets. The two main differences are that we treat p/t nets instead of c/e nets and we deal directly with a category of nets instead of encoding them into bigraphs. - PREPRINT OF GT-VC 2006 , 2006 "... We analyze the matching problem for bigraphs. In particular, we present a sound and complete inductive characterization of matching of binding bigraphs. Our results pave the way for a provably correct matching algorithm, as needed for an implementation of bigraphical reactive systems. ..." Cited by 20 (11 self) Add to MetaCart We analyze the matching problem for bigraphs. In particular, we present a sound and complete inductive characterization of matching of binding bigraphs. Our results pave the way for a provably correct matching algorithm, as needed for an implementation of bigraphical reactive systems. , 2004 "... Groupoidal relative pushouts (GRPOs) have recently been proposed by the authors as a new foundation for Leifer and Milner's approach to deriving labelled bisimulation congruences from reduction systems. In this paper, we develop the theory of GRPOs further, proving that well-known equivalences, othe ..." Cited by 11 (1 self) Add to MetaCart Groupoidal relative pushouts (GRPOs) have recently been proposed by the authors as a new foundation for Leifer and Milner's approach to deriving labelled bisimulation congruences from reduction systems. In this paper, we develop the theory of GRPOs further, proving that well-known equivalences, other than bisimulation, are congruences. To demonstrate the type of category theoretic arguments which are inherent in the 2-categorical approach, we construct GRPOs in a category of `bunches and wirings.' Finally, we prove that the 2-categorical theory of GRPOs is a generalisation of the approaches based on Milner's precategories and Leifer's functorial reactive systems. - IN PROCEEDINGS OF CONCUR’08, LNCS , 2008 "... We develop a theory of sorted bigraphical reactive systems. Every application of bigraphs in the literature has required an extension, a sorting, of pure bigraphs. In turn, every such application has required a redevelopment of the theory of pure bigraphical reactive systems for the sorting at hand. ..." Cited by 8 (4 self) Add to MetaCart We develop a theory of sorted bigraphical reactive systems. Every application of bigraphs in the literature has required an extension, a sorting, of pure bigraphs. In turn, every such application has required a redevelopment of the theory of pure bigraphical reactive systems for the sorting at hand. Here we present a general construction of sortings. The constructed sortings always sustain the behavioural theory of pure bigraphs (in a precise sense), thus obviating the need to redevelop that theory for each new application. As an example, we recover Milner’s local bigraphs as a sorting on pure bigraphs. Technically, we give our construction for ordinary reactive systems, then lift it to bigraphical reactive systems. As such, we give also a construction of sortings for ordinary reactive systems. 
This construction is an improvement over previous attempts in that it produces smaller and much more natural sortings, as witnessed by our recovery of local bigraphs as a sorting. - In Proceedings of the 8th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2005, volume 3441 of LNCS , 2005 "... Abstract. Structural congruences have been used to define the semantics and to capture inherent properties of language constructs. They have been used as an addendum to transition system specifications in Plotkin's style of Structural Operational Semantics (SOS). However, there has been little theor ..." Cited by 6 (3 self) Add to MetaCart Abstract. Structural congruences have been used to define the semantics and to capture inherent properties of language constructs. They have been used as an addendum to transition system specifications in Plotkin's style of Structural Operational Semantics (SOS). However, there has been little theoretical work on establishing a formal link between these two semantic specification frameworks. In this paper, we give an interpretation of structural congruences inside the transition system specification framework. This way, we extend a number of well-behavedness meta-theorems for SOS (such as well-definedness of the semantics and congruence of bisimilarity) to the extended setting with structural congruences. 1 Introduction Structural congruences were introduced in [12,13] in the operational semanticsspecification of the ss-calculus. There, structural congruences are a set of equationsdefining an equality and congruence relation on process terms. These equations , 2004 "... We introduce a comprehensive operational semantic theory of graph-rewriting. Graph-rewriting here is ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=108421","timestamp":"2014-04-20T05:15:11Z","content_type":null,"content_length":"35413","record_id":"<urn:uuid:23b86d2f-9b26-402c-a142-6f4e5e3d5ae1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
A three-dimensional vector field A (r) is specified by three components that are, individually, functions of position. It is difficult enough to plot a single scalar function in three dimensions; a plot of three is even more difficult and hence less useful for visualization purposes. Field lines are one way of picturing a field distribution. A field line through a particular point r is constructed in the following way: At the point r, the vector field has a particular direction. Proceed from the point r in the direction of the vector A (r) a differential distance dr. At the new point r + dr, the vector has a new direction A (r + dr). Proceed a differential distance dr^' along this new (differentially different) direction to a new point, and so forth as shown in Fig. 2.7.1. By this process, a field line is traced out. The tangent to the field line at any one of its points gives the direction of the vector field A(r) at that point. Figure 2.7.1. Construction of field line. The magnitude of A (r) can also be indicated in a somewhat rough way by means of the field lines. The convention is used that the number of field lines drawn through an area element perpendicular to the field line at a point r is proportional to the magnitude of A (r) at that point. The field might be represented in three dimensions by wires. If it has no divergence, a field is said to be solenoidal. If it has no curl, it is irrotational. It is especially important to conceptualize solenoidal and irrotational fields. We will discuss the nature of irrotational fields in the following examples, but become especially in tune with their distributions in Chap. 4. Consider now the "wire-model" picture of the solenoidal field. Single out a surface with sides formed of a continuum of adjacent field lines, a "hose" of lines as shown in Fig. 2.7.2, with endfaces spanning across the ends of the hose. Then, because a solenoidal field can have no net flux out of this tube, the number of field lines entering the hose through one endface must be equal to the number of lines leaving the hose through the other end. Because the hose is picked arbitrarily, we conclude that a solenoidal field is represented by lines that are continuous; they do not appear or disappear within the region where they are Figure 2.7.2. Solenoidal field lines form hoses within which the lines neither begin nor end. The following examples begin to develop an appreciation for the attributes of the field lines associated with the divergence and curl. Example 2.7.1. Fields with Divergence but No Curl (Irrotational but Not Solenoidal) The spherical region r < R supports a charge density [o] r/R. The exterior region is free of charge. In Example 1.3.1, the radially symmetric electric field intensity is found from the integral laws to be In spherical coordinates, the divergence operator is (from Table I) Thus, evaluation of Gauss' differential law, (2.3.1), gives which of course agrees with the charge distribution used in the original derivation. This exercise serves to emphasize that the differential laws apply point by point throughout the region. The field lines can be sketched as in Fig. 2.7.3. The magnitude of the charge density is represented by the density of + (or -) symbols. Figure 2.7.3. Spherically symmetric field that is irrotational. Volume elements V[a] and V[c] are used with Gauss' theorem to show why field is solenoidal outside the sphere but has a divergence inside. 
Surface elements C[b] and C[d] are used with Stokes' theorem to show why fields are irrotational everywhere. Where in this plot does the field have a divergence? Because the charge density has already been pictured, we already know the answer to this question. The field has divergence only where there is a charge density. Thus, even though the field lines are thinning out with increasing radius in the exterior region, at any given point in this region the field has no divergence. The situation in this region is typified by the flux of E through the "hose" defined by the volume V[a]. The field does indeed decrease with radius, but the cross-sectional area of the hose increases so as to exactly compensate and maintain the net flux constant. In the interior region, a volume element having the shape of a tube with sides parallel to the radial field can also be considered, volume V[c]. That the field is not solenoidal is evident from the fact that its intensity is least over the cross-section of the tube having the least area. That there must be a net outward flux is evidence of the net charge enclosed. Field lines originate inside the volume on the enclosed charges. Are the field lines in Fig. 2.7.3 irrotational? In spherical coordinates, the curl is and it follows from a substitution of (1) that there is no curl, either inside or outside. This result is corroborated by evaluating the circulation of E for contours enclosing areas having normals in any one of the coordinate directions. [Remember the definition of the curl, (2.4.2).] Examples are the contours enclosing the surfaces S[b] and S[d] in Fig. 2.7.3. Contributions to the C^" and C^"' segments vanish because these are perpendicular to E, while (because E is independent of and ) the contribution from one C^' segment cancels that from the other. Example 2.7.2. Fields with Curl but No Divergence (Solenoidal but Not Irrotational) A wire having radius R carries an axial current density that increases linearly with radius. Ampère's integral law was used in Example 1.4.1 to show that the associated magnetic field intensity is Where does this field have curl? The answer follows from Ampère's law, (2.6.2), with the displacement current neglected. The curl is the current density, and hence restricted to the region r < R, where it tends to be concentrated at the periphery. Evaluation of the curl in cylindrical coordinates gives a result consistent with this expectation. The current density and magnetic field intensity are sketched in Fig. 2.7.4. In accordance with the "wire" representation, the spacing of the field lines indicates their intensity. A similar convention applies to the current density. When seen "end-on," a current density headed out of the paper is indicated by \odot, while \otimes indicates the vector is headed into the paper. The suggestion is of the vector pictured as an arrow, with the symbols representing its tip and feathers, respectively. Figure 2.7.4. Cylindrically symmetric field that is solenoidal. Volume elements V[a] and V[c] are used with Gauss' theorem to show why the field has no divergence anywhere. Surface elements S [b] and S[d] are used with Stokes' theorem to show that the field is irrotational outside the cylinder but does have a curl inside. Can the azimuthally directed field vary with r (a direction perpendicular to ) and still have no curl in the outer region? The integration of H around the contour C[b] in Fig. 2.7.4 shows why it can. 
The contours C[b]^' are arranged to make ds perpendicular to H, so that H · ds = 0 there. Integrations on the segments C[b]^"' and C[b]^" cancel because the difference in the length of the segments just compensates the decrease in the field with radius. In the interior region, a similar integration surely gives a finite result. On the contour C[d], the field is larger on the outside leg where the contour length is larger, so it is clear that the curl must be finite. Of course, this field shape simply reflects the presence of the current density.

The field is solenoidal everywhere. This can be checked by taking the divergence of (5) in each of the regions. In cylindrical coordinates, Table I gives

The flux tubes defined as incremental volumes V[a] and V[c] in Fig. 2.7.4, in the exterior and interior regions, respectively, clearly sustain no net flux through their surfaces. That the field lines circulate in tubes without originating or disappearing in certain regions is the hallmark of the solenoidal field.

It is important to distinguish between fields "in the large" (in terms of the integral laws written for volumes, surfaces, and contours of finite size) and "in the small" (in terms of differential laws). To this end, consider some questions that might be raised.

Is it possible for a field that has no divergence at each point on a closed surface S to have a net flux through that surface? Example 2.7.1 illustrates that the answer is yes. At each point on a surface S that encloses the charged interior region, the divergence of ε[o]E is zero. Yet integration of ε[o]E · da over such a surface gives a finite value, indeed, the net charge enclosed.

Figure 2.7.5. Volume element with sides tangential to field lines is used to interpret divergence from field coordinate system.

The divergence can be viewed as a weighted derivative along the direction of the field, or along the field "hose." With a defined as the cross-sectional area of such a tube having sides parallel to the field ε[o]E, as shown in Fig. 2.7.5, it follows from (2.1.2) that the divergence is

The minus sign in the second term results because da and a are negatives on the left surface. Written in this form, the divergence is the derivative of ε[o]E a with respect to a coordinate in the direction of E. Examples of such tubes are volumes V[a] and V[c] in Fig. 2.7.3. That the divergence is zero in the exterior region of that example is equivalent to having a radial derivative of the displacement flux ε[o]E a that is zero.

A further observation returns to the distinction between fields as they are described "in the large" by means of the integral laws and as they are represented "in the small" by the differential laws. Is it possible for a field to have a circulation on some contour C and yet be irrotational at each point on C? Example 2.7.2 shows that the answer is again yes. The exterior magnetic field encircles the center current-carrying region. Therefore, it has a circulation on any contour that encloses the center region. Yet at all exterior points, the curl of H is zero.

The cross-product of two vectors is perpendicular to both vectors. Is the curl of a vector necessarily perpendicular to that vector? Example 2.7.2 would seem to say yes. There the current density is the curl of H and is in the z direction, while H is in the azimuthal direction. However, this time the answer is no. By definition we can add to H any irrotational field without altering the curl.
If that irrotational field has a component in the direction of the curl, then the curl of the combined fields is not perpendicular to the combined fields. Illustration. A Vector Field Not Perpendicular to Its Curl In the interior of the conductor shown in Fig. 2.7.4, the magnetic field intensity and its curl are Suppose that we add to this H a field that is uniform and z directed. Then the new field has a component in the z direction and yet has the same z-directed curl as given by (9). Note that the new field lines are helixes having increasingly tighter pitches as the radius is increased. The curl can also be viewed in terms of a field hose. The definition, (2.4.2), is applied to any one of the three contours and associated surfaces shown in Fig. 2.7.6. Contours C[] and C[] are perpendicular and across the hose while (C[] ) is around the hose. The former are illustrated by contours C[b] and C[d] in Fig. 2.7.4. Figure 2.7.6. Three surfaces, having orthogonal normal vectors, have geometry determined by the field hose. Thus, the curl of the field is interpreted in terms of a field coordinate system. The component of the curl in the direction is the limit in which the area 2 goes to zero of the circulation around the contour C[] divided by that area. The contributions to this line integration from the segments that are perpendicular to the axis are by definition zero. Thus, for this component of the curl, transverse to the field, (2.4.2) becomes The transverse components of the curl can be regarded as derivatives with respect to transverse directions of the vector field weighted by incremental line elements . At its center, the surface enclosed by the contour C[] has its normal in the direction of the field. It would seem that the curl in the direction would therefore have to be zero. However, the previous discussion and illustration give a warning that the contour integral around C[] is not necessarily zero. Even though, to zero order in the diameter of the hose, the field is perpendicular to the contour, to higher order it can have components parallel to the contour. This means that if the contour C [] were actually perpendicular to the field at each point, it would not close on itself. An equivalent contour, shown by the inset to Fig. 2.7.6, begins and terminates on the central field line. With the exception of the segment in the direction used to close this contour, each segment is now by definition perpendicular to . The contribution to the circulation around the contour now comes from the -directed segment. Remember that the length of this segment is determined by the shape of the field lines. Thus, it is proportional to (^2, and therefore so also is the circulation. The limit defined by (2.1.2) can result in a finite value in the direction. The "cross-product" of an operator with a vector has properties that are not identical with the cross-product of two vectors.
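The divergence and curl claims of Examples 2.7.1 and 2.7.2 are easy to verify symbolically. In the Python sketch below the field profiles are re-derived from Gauss' and Ampère's integral laws for a charge density ρ[o]r/R and an axial current density J[o]r/R (J[o] being an assumed proportionality constant), so the prefactors are re-derivations rather than quotations of the numbered equations above.

```python
import sympy as sp

r, R, rho0, J0, eps0 = sp.symbols('r R rho_0 J_0 epsilon_0', positive=True)
rp = sp.symbols('r_prime', positive=True)

# Example 2.7.1: charge density rho0*r/R inside the sphere of radius R.
# Gauss' integral law over a sphere of radius r < R gives the radial field.
enclosed_charge = sp.integrate(rho0*rp/R * 4*sp.pi*rp**2, (rp, 0, r))
E_r = sp.simplify(enclosed_charge / (eps0 * 4*sp.pi*r**2))       # rho0*r**2/(4*eps0*R)

# Divergence of a purely radial field in spherical coordinates: (1/r^2) d(r^2 * E_r)/dr.
print(sp.simplify(sp.diff(r**2 * eps0 * E_r, r) / r**2))         # rho0*r/R, the charge density
# The curl of a radial field that depends only on r vanishes, so E is irrotational.

# Example 2.7.2: axial current density J0*r/R inside a wire of radius R.
# Ampere's integral law around a circle of radius r < R gives the azimuthal field.
enclosed_current = sp.integrate(J0*rp/R * 2*sp.pi*rp, (rp, 0, r))
H_phi = sp.simplify(enclosed_current / (2*sp.pi*r))              # J0*r**2/(3*R)

# For H = H_phi(r) in the phi direction: div H = (1/r) dH_phi/dphi = 0 (solenoidal),
# and the only nonzero curl component is the z one, (1/r) d(r*H_phi)/dr.
print(sp.simplify(sp.diff(r*H_phi, r) / r))                      # J0*r/R, the current density
```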
{"url":"http://web.mit.edu/6.013_book/www/chapter2/2.7.html","timestamp":"2014-04-17T07:27:21Z","content_type":null,"content_length":"18705","record_id":"<urn:uuid:4da5c6ca-0a28-4bcc-90af-d59d4009778c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
genus-zero Gromov-Witten invariants

Let $(M, \omega)$ be a complex $n$-dimensional Hermitian symmetric space of compact type, where $\omega$ is the symplectic (Kaehler) form on $M$ normalized so that $[\omega]$ generates the integral cohomology group $H^2(M, Z)$. Let $A$ be the generator of $H_2(M, Z)$.

Problem: Find two submanifolds $X$ and $Y$ of $M$ such that:
$$\dim_{\mathbb R} X+\dim_{\mathbb R} Y=4n-2c_1(TM)(A)$$
and
$$\Phi_A([X], [Y], [p])\neq 0,$$
where $\Phi_A([X], [Y], [p])$ is the genus-zero Gromov--Witten invariant of the triple $[p], [X], [Y]$, and $[p]$, $[X]$ and $[Y]$ denote the homology classes of a point $p\in M$, $X$ and $Y$ respectively.

What kind of answer are you looking for? There is a classification of irreducible, compact Hermitian symmetric spaces. There is also a theorem of Kollár and Ruan that guarantees the existence of a nonzero, one-point Gromov-Witten invariant. In principle, you could go through the list and find nonzero Gromov-Witten invariants in each case (simplified by the many results of experts such as Buch-Kresch-Tamvakis on GW theory of homogeneous spaces). Is that what you want? – Jason Starr Dec 25 '12 at 22:21

Thank you for your answer. Yes, I think you answered my question. My problem was the following (with the same notation as in my question). Let $J$ be an almost complex structure on $M$ tamed by $\omega$ and $p$ be a point of $M$. Does there exist a $J$-holomorphic curve in the class $A$ which passes through $p$? Therefore, if I can find a non-zero one-point genus-zero Gromov-Witten invariant I believe I can give a positive answer to my question. Thank you – Andrea Loi Dec 26 '12 at 18:23

I will just consider the simplest example, maybe someone will give the answer in complete generality. So let $M=\mathbb CP^n$. Then $4n-2c_1(M)(A)=2n-2$. This means that we are in good shape. Basically we can take for $X$ and $Y$ any complex submanifolds of $M$ satisfying your condition. Indeed in this case for a generic point $p$ in $\mathbb CP^n$ there will be $\deg X\cdot \deg Y$ lines that contain $p$ and intersect both $X$ and $Y$. I assumed $X$ (or $Y$) is not zero dimensional, in which case there is only one line, and also that $X$ and $Y$ are in general position, but this does not matter for GW, of course.
{"url":"http://mathoverflow.net/questions/117173/genus-zero-gromov-witten-invariants","timestamp":"2014-04-19T22:11:03Z","content_type":null,"content_length":"52730","record_id":"<urn:uuid:3186f765-ff8f-4948-89dc-9a1fba88c389>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Compressive Sensing Resources

The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its highest frequency in order to be represented without error. However, in practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory of "compressive sensing" has begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly reduced rate.

As the compressive sensing research community continues to expand rapidly, it behooves us to heed Shannon's advice. Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.

Submitting a Resource

To submit a new or corrected paper for this listing, please complete the form at dsp.rice.edu/cs/submit. To submit a resource that isn't a paper, please email
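As a concrete toy illustration of the idea (recovering a sparse signal from far fewer samples than its length), here is a minimal Python sketch using random Gaussian measurements and orthogonal matching pursuit. It is not tied to any particular paper in this listing, and the sizes n, m, k are arbitrary choices; in practice the recovery step is more often an l1 minimization (basis pursuit) or one of its variants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, number of measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)     # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)                    # random measurement matrix
y = A @ x                                                       # the compressive samples

# Orthogonal matching pursuit: greedily pick the column most correlated with the
# residual, then least-squares refit on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    idx = int(np.argmax(np.abs(A.T @ residual)))
    if idx not in support:
        support.append(idx)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```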
{"url":"http://dsp.rice.edu/cs","timestamp":"2014-04-17T15:30:29Z","content_type":null,"content_length":"252390","record_id":"<urn:uuid:880fe3e2-4220-4986-85ea-ec215dc0069a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Date of this Version Published in The American Mathematical Monthly, Vol. 74, No. 6 (Jun. - Jul., 1967), pp. 669-673. Copyright 1967 Mathematical Association of America. Used by permission. It is well known that a real number is rational if and only if its decimal expansion is a repeating decimal. For example, 2/7 = .285714285714 . . . . Many students also know that if n/m is a rational number reduced to lowest terms (that is, n and m relatively prime), then the number of repeated digits (we call this the length of period) depends only on m. Thus all fractions with denominator 7 have length of period 6. A sharp-eyed student may also notice that when the period (that is, the repeating digits) for 2/7 is split into its two half-periods 285 and 714, then the sum 285 + 714 = 999 is a string of nines. A little experimentation makes it appear likely that this is always true for a fraction with the denominator 7, as well as for fractions with denominators 11, 13, or 17. A natural conjecture is that all primes with even length of period (note that many primes, such as 3 and 31, have odd length of period) will have a similar property. This conjecture is, in fact, true but it is unfortunately not a criterion for primeness, since many composite numbers (such as 77) also have the property. The relevant theorem appears not to be well known, although it was discovered many years ago. (L. E. Dickson [see 1, p. 163] attributes the result to E. Midy, Nantes, 1836). The proof of the theorem is simple and elegant, and since it also provides a nice example of the usefulness of the concept of the order of an element of a group, it deserves to be better known. In the following, we will develop from the beginning the theory of repeating decimals. This is to provide the necessary machinery for the proof of Midy's theorem, as well as for completeness.
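As a quick empirical companion to the theorem described above, the following Python sketch (not part of the original article) computes the repeating block of n/m by long division and adds the two half-periods for a few primes with even period length.

```python
def period_digits(n, m):
    """Digits of the repeating block of n/m, assuming gcd(m, 10) == 1."""
    digits, r = [], n % m
    while True:
        r *= 10
        digits.append(r // m)
        r %= m
        if r == n % m:               # the remainder repeats, so the period is complete
            return digits

for p, n in [(7, 2), (11, 1), (13, 1), (17, 1)]:
    d = period_digits(n, p)
    half = len(d) // 2
    total = int("".join(map(str, d[:half]))) + int("".join(map(str, d[half:])))
    print(f"{n}/{p}: period {''.join(map(str, d))}, half-periods sum to {total}")

# e.g. 2/7: period 285714, half-periods sum to 999
```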
{"url":"http://digitalcommons.unl.edu/mathfacpub/48/","timestamp":"2014-04-20T06:20:26Z","content_type":null,"content_length":"22357","record_id":"<urn:uuid:366ecd1f-8390-408d-8f42-524374cf608c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Kirchhoff's Laws and current through resistors The diagram below shows a circuit where; R1 = 5.00 Ω, R2 = 6.00 Ω, R3 = 1.00 Ω, V1 = 4.500 V, V2 = 20.00 V, and V3 = 6.00 V. (In solving the problems that follow, initially pick the current directions as shown. If the actual current turns out to be in the opposite direction, then your answer will be negative). What is the value of I1? I2? I3? So I know that I need three equations to solve it as I have three unknowns. I tried to make one using the junction where I2 meets I1 and I3, but they all come together and kind of crash, so I don't know how to put that into equation form any more :( I'm assuming I need to do loops to get the other two equations but again I run into the issue of knowing how to set them up. If I try to make points on either side of each resistor they won't 'flow' really.... In summary I am just very confused, so any help is welcome!
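Since the diagram isn't reproduced here, the signs and right-hand sides below come from an assumed two-loop topology and are only meant to show the mechanics: the junction equation plus two loop equations form a 3x3 linear system, which you can solve by elimination or numerically as in this Python sketch. Replace the rows with the equations your own loops give.

```python
import numpy as np

R1, R2, R3 = 5.00, 6.00, 1.00
V1, V2, V3 = 4.500, 20.00, 6.00

# Rows: junction rule, then two KVL loops (signs assumed; rewrite to match the diagram).
A = np.array([
    [1.0, -1.0, 1.0],      # I1 - I2 + I3 = 0          (junction)
    [R1,   R2,  0.0],      # I1*R1 + I2*R2 = V1 + V2   (assumed left loop)
    [0.0,  R2,  R3 ],      # I2*R2 + I3*R3 = V2 - V3   (assumed right loop)
])
b = np.array([0.0, V1 + V2, V2 - V3])

I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)          # a negative value means that current actually flows the other way
```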
{"url":"http://www.physicsforums.com/showthread.php?t=217978","timestamp":"2014-04-18T00:34:50Z","content_type":null,"content_length":"24174","record_id":"<urn:uuid:6fe19489-0313-4a6d-bab1-d9838e928fe5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
MSP:MiddleSchoolPortal/Measurement Sliced and Diced From Middle School Portal Measurement Sliced and Diced - Introduction Middle school teachers tell us that there are important practical skills and understanding that students need before they engage in the abstractions of algebra. These skills are found in the blurry area where measurement, basic geometry, and the arithmetic of decimals and fractions come together in the real world. To move forward mathematically, middle school students need hands-on experiences with measuring, using scale and proportionality, and estimating with benchmarks. How can online resources support the need for hands-on experiences with measurement? Read on! Students can use web resources to see the quantifying components in life, to visualize mathematics concepts, and to get instant feedback on calculations. The web is a friendly place to practice computational skills with fractions, decimals, and formulas. But perhaps most importantly, the Internet can expand the realm of possible real-world problem solving. Here we feature activities, lesson plans, and projects to help students understand how measurement and mathematical problem solving are part of life. A class measurement project can wrap together many important components of mathematics learning into a very memorable experience. Who can forget measuring their school gym to see how many pennies it could hold or finding the volume of the community swimming pool to see how many ping-pong balls it would take to fill it? A class measurement project allows students to first make choices about which tools and units to use, and then to do the measuring, use the data to find an answer, and communicate results. They apply measurement skills and concepts to solve everyday questions that can involve estimation, decimals, fractions, and proportional reasoning. A solid foundation in measurement in the middle school years enables students to think about their world in quantitative, geometric terms and see the usefulness of mathematics. Background Information for Teachers You are not alone if the idea of teaching measurement makes you uncomfortable; after all, doesn't everyone know how to measure? We selected these resources to help you refresh your approach to teaching measurement. Learning math: measurement Feel like you need a little review? This free online course examines critical concepts related to measurement. The 10 sessions feature video lessons, activities, and online demonstrations to review procedures used in conducting measurements, along with other topics such as the use of nonstandard measurement units, precision and accuracy, and the metric system. Circles, angles, volume formulas, and relationships between units of measurement are also explored. The final session presents case studies that help you examine measurement concepts from the perspective of your students. Measurement fundamentals This module, developed for the Virtual Machine Shop, offers a great deal of practical information about linear and angular measurement and the use of standard and metric units. It introduces the unique math used by engineers and tradespeople and presents a very practical focus to the study of measurement. How Many? A Dictionary of Units of Measurement This handy reference from the Center for Mathematics and Science Education at the University of North Carolina at Chapel Hill features a clickable list of defined terms related to units of measure. You'll also find FAQs, commentary, and news related to measurement units. 
Animations and Interactive Online Activities With these animations and interactive resources, students can find length, area, and perimeter at their own pace with as many repetitions as needed to create understanding. Measuring Henry's cabin When students want to know why they need to learn to measure, show them this cabin blueprint and ask them what they think a builder needs to know to start constructing a building. Students examine the cabin blueprint and find the surface area of the walls. Powers of ten What student isn't interested in very large numbers! Before your very eyes see the perspective expand from a 1-meter view of a rose bush to an expanded vision of 10 to the 26 power and then decrease to 10 to the negative 15. The site, also available in German and Italian, uses the meter as the unit of measurement. This visualization can help students see the results of increasing and decreasing scale. It is an engaging way to demonstrate scale and is a nice illustration of the meaning of exponents. Jigsaw puzzle size-up This online interactive jigsaw puzzle activity requires students to enlarge or shrink puzzle pieces before placing them in a puzzle. The choices for enlarging are 1.5, 2, and 4 times larger, while the sizes for shrinking are one-quarter, one-third, and one-half. These next two resources are from the site Figure This! that features 81 activities for middle schoolers. The activities, presented by colorful, animated characters, feature mathematics found in real-life situations. Students work with paper and pencil to answer the multiple questions posed in each activity. From the Figure This! home page you can go to a math index for a correlation of activities to important math topics. Printable versions of the activities are available in English and Spanish. Access ramp: how steep can a ramp be? The activity opens with an animation of a Figure This! character in a wheelchair using an access ramp over a three-step staircase, with steps 7 inches high and 10 inches wide. Students are challenged to think about dimensions of an access ramp to determine where the ramp should start to go up the three steps at a reasonable slope. Information about handicap accessibility is included. Windshield wipers: it's raining! Who sees more? The driver of the car or the truck? Here's something that the future drivers in middle school will relate to. Geometric shapes are used to compare the areas cleaned by different styles of windshield wipers. Open-Ended Questions and Hands-On Activities Teachers can use the printable open-ended questions and hands-on activities to get students thinking about measurement concepts. Approximating the Circumference and Area of a Circle If you have students who think they know everything about area, perimeter, circles, and pi, make a copy of this Geometry Problem of the Week and see what they can figure out. Wacky ruler Here's a good starter activity for students who lack the most fundamental understanding of measurement. They print out a "wacky ruler" and a page that features eight wiggly pink worms. The model ruler is marked with two, four, and seven units. Students can enter their measurements online to check their accuracy. Measuring (Const) These six worksheets introduce students to fractional units on a standard ruler and millimeters on a metric ruler. There is also a short, colorful PowerPoint slide show that demonstrates the fractional parts of an inch. 
Measure a picture, number 1: inch, half, quarter of an inch This student worksheet is the first in a series of five worksheets offering practical experience reading units on a ruler. Visualizing the Metric System How can you make the metric system more understandable for your students? Tell them to think of a gram as the mass of a jelly bean and a liter as one quart. This list can help students retain a visual picture to approximate various metric units. National Institute of Standards and Technology metric pyramid Take the mystery out of the metric system by having students create their own reference tool. They can use this paper cutout to make a 3-D pyramid printed with metric conversion information for length, mass, area, energy, volume, and temperature. The next four resources can be used to support a student project that explores big trees and the mathematics related to circumference and pi. A student exploration question can be "How big is the biggest tree in our neighborhood?" Big tree: have you ever seen a tree big enough to drive a car through? Even if your students have never seen a tree large enough to drive a car through, they can practice using fractions and decimals and the formula for the circumference of a circle. This activity lists the girth and height of 10 National Champion giant trees and asks students to determine which of the trees is large enough for a car to drive through. NPR: Bushwhacking with a Big-Tree Hunter Some people hunt animals and others hunt for trees. In this National Public Radio story, one in a special series called Big Trees and the Lives They've Changed, visit the Olympic Peninsula and learn about the life of a big tree hunter and the death of a giant Douglas fir. Project Shadow Here is a project idea that can be huge and interdisciplinary with the science or social studies department or that can be a one-day event where students can experience practical measurement. You may want to register as part of an online worldwide one-day event to calculate the circumference of the Earth (see first resource) or simply use all the resources to put together a class activity for replicating Eratosthenes' experiment. However you choose to approach it, you can tie measurement to real life by highlighting the historical connections and relating the activity to the modern technology of global positioning systems. The noon day project This Internet site presents the necessary mathematics and science information teachers need to re-create the measurement of the circumference of the Earth as done by the Greek librarian Eratosthenes more than 2000 years ago. Shadow measurements taken at high noon local time on a designated day in March are posted online and used to calculate the circumference of the Earth. Teachers can sign up and have their students participate in this annual spring event. Measuring the Circumference of the Earth This web page illustrates how data and mathematics were used in Eratosthenes' famous experiment. Money: Large Amounts Project Finally, how about using pennies as a unit of measure and asking big questions such as "What would the national debt look like if it were a pile of pennies—would it reach farther than the moon?" Once you start thinking in terms of using pennies, or any other size coins, to represent quantities, you may decide to start with a smaller quantity than the national debt. In any event, these web sites are a great place to begin. The megapenny project Visit this site to begin to appreciate the magnitude of large numbers. 
It shows and describes arrangements of large quantities of U.S. pennies. You'll see that a stack of 16 pennies measures one inch and a row of 16 pennies is one foot long. The site builds excitement for learning the size of the mass found in one quintillion (written as a one followed by eighteen zeroes) pennies. All pages have tables at the bottom, listing things such as the value of the pennies on the page, size of the pile, weight, and area (if laid flat). All weights and measurements are U.S. standards, not metric. The silver mile Here is a Math Forum middle school Problem of the Week that challenges students to think about the coins involved in creating a mile-long trail of silver coins. The authors include a few rules that require students to use fractions as they construct their mile using nickels, dimes, quarters, half-dollars, or silver dollars in specific proportions. MSP full record SMARTR: Virtual Learning Experiences for Students Visit our student site SMARTR to find related virtual learning experiences for your students! The SMARTR learning experiences were designed both for and by middle school aged students. Students from around the country participated in every stage of SMARTR’s development and each of the learning experiences includes multimedia content including videos, simulations, games and virtual activities. Visit the virtual learning experience on Measurement. The FunWorks Visit the FunWorks STEM career website to learn more about a variety of math-related careers (click on the Math link at the bottom of the home page). NCTM Measurement Standard In the discussion of the measurement standard, the National Council of Teachers of Mathematics states that "In the middle grades, students should build on their formal and informal experiences with measurable attributes like length, area, and volume; with units of measurement; and with systems of measurement." (Principles and Standards for School Mathematics, NCTM, 2000, p. 241) At its simplest, measurement in grades 6-8 begins with what appears to be a basic need for the student to know how to use a ruler to measure length. The reality is that even this apparently simple measurement task requires the use of multistep mathematics thinking. Many students lack the computational skills and conceptual understanding necessary to take on the more sophisticated tasks of finding surface area and volume and using units and converting units in metric and customary systems. The suggested online resources may help your students develop a conceptual understanding of area, perimeter, and volume and learn to use formulas and measurement units. The resources can also help you plan a really worthwhile class project. Check out the nine specific expectations that NCTM describes for middle school students related to the measurement standard. Author and Copyright Judy Spicer is the mathematics education resource specialist for digital library projects at Ohio State University. She has taught mathematics in grades 9-14. Please email any comments to msp@msteacher.org. Connect with colleagues at our social network for middle school math and science teachers at http://msteacher2.org. Copyright November 2004 - The Ohio State University. This material is based upon work supported by the National Science Foundation under Grant No. 0424671 and since September 1, 2009 Grant No. 0840824. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
{"url":"http://msp.ehe.osu.edu/wiki/index.php/MSP:MiddleSchoolPortal/Measurement_Sliced_and_Diced","timestamp":"2014-04-16T13:02:42Z","content_type":null,"content_length":"39012","record_id":"<urn:uuid:1cd7f639-6167-486c-a88a-2c61cf01db17>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability problem. Possibly involves combinatorics.

I'm a mature student undertaking a self-study A-Level course in order to go to university. I have been unable to figure this out by myself and have even had a private tutor stumped by it, so any help would be greatly appreciated. It's from "Understanding Statistics" by Upton & Cook, Question 5g) 1) d). The correct answer is 613/648 according to the textbook.

In Ruritania all the cars are made by a single firm and vary only in their colouring. Six different colours are available. The same numbers of cars are painted in each of the six colours. Assuming that, when travelling on the roads, the colours of the cars occur in random order, determine the probability that: "at least 8 cars pass before all 6 colours are encountered."

Re: Probability problem. Possibly involves combinatorics.

Hey Xenoji. The first thing you have to do is come up with a distribution that corresponds to the actual problem/process. In your problem this is a multivariable hyper-geometric distribution since you are sampling without replacement. However, in the case that you have a large sample size, you can approximate this well with a multinomial distribution.

In your probability, you need to find out P(C1 > 0, C2 > 0, C3 > 0, ..., C6 > 0, N > 7). Hint: With a multinomial, try finding the probability that less than 8 cars pass with all colours appearing at least once, and find 1 - that probability.

Multinomial distribution - Wikipedia, the free encyclopedia

Remember that we are using an approximation if the number of cars is extremely large.
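For what it's worth, under the simpler model the reply suggests (each passing car's colour independent and uniform over the six colours), the event "at least 8 cars pass before all 6 colours are encountered" is exactly the event that the first 7 cars miss at least one colour, and a short Python script (a sketch, not from the original thread) reproduces the textbook's 613/648:

```python
from fractions import Fraction
from itertools import product
import random

# Exact count: colour sequences of length 7 (out of 6**7) that already use all 6 colours.
total = 6**7
all_six = sum(1 for seq in product(range(6), repeat=7) if len(set(seq)) == 6)
print(1 - Fraction(all_six, total))     # 613/648

# Monte Carlo sanity check of the same probability.
trials = 200_000
hits = sum(len({random.randrange(6) for _ in range(7)}) < 6 for _ in range(trials))
print(hits / trials)                    # ~0.946, and 613/648 is about 0.9460
```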
{"url":"http://mathhelpforum.com/statistics/215785-probability-problem-possibly-involves-combinatorics.html","timestamp":"2014-04-19T08:57:53Z","content_type":null,"content_length":"33350","record_id":"<urn:uuid:de3f45d1-2a1a-4944-82df-15fcd9b2cc4a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Old books still used

It's a commonplace to state that while other sciences (like biology) may always need the newest books, we mathematicians also tend to use older books. While this is a qualitative remark, I would like to get a quantitative result. So what are "old" books still used?

Coming from (algebraic) topology, the first things which come to my mind are the works by Milnor. Frequently used (also as a topic for seminars) are his Characteristic Classes (1974, but based on lectures from 1957), his Morse Theory (1963), and other books and articles by him from the mid sixties. An older book, which is sometimes used, is Steenrod's The Topology of Fibre Bundles from 1951, but this feels a bit dated already. Books older than that in topology are usually only read for historical interest.

As I have only very limited experience in other fields (except, perhaps, in algebraic geometry), my question is: What are the oldest books regularly used in your field (and which don't feel "outdated")?

Tags: ho.history-overview, books, big-list

– I think this should be Community Wiki. (Alberto García-Raboso, Dec 28 '12)
– Please don't call "Characteristic Classes" old or I will have to call myself old, being born in the same year as the lectures :-/ (Lee Mosher, Dec 28 '12)
– @Lee Mosher: Would you prefer to call yourself "classical"? :) (user29720, Dec 29 '12)
– Timeless . . . . (Rodrigo A. Pérez, Dec 29 '12)
– E. Spanier, Algebraic Topology; Eilenberg–Steenrod, Algebraic Topology; Godement, Topologie Algébrique et Théorie des Faisceaux; Courant–Hilbert, Methods of Mathematical Physics... "the problem of contemporary authors is to be con-temporary" (Ennio Flaiano). (Buschi Sergio, Dec 29 '12)

61 Answers

I think the absolute record (excluding Euclid) belongs to E. T. Whittaker and G. N. Watson, A Course of Modern Analysis. According to the Jahrbuch database, the first edition was in 1915. Moreover, this 1915 edition was an extended version of a 1902 book by Whittaker alone. The last revision was in 1927. The book is still in print, and widely used, not only by mathematicians but by physicists and engineers. Soon we will celebrate the centenary... It has 1056 citations on MathSciNet, by the way, and 8866 on Google Scholar! Perhaps this deserves a Guinness Book of Records entry as the "textbook longest continuously in print". And I suppose this is a record not only for math but for all sciences... with the exception of Euclid and Ptolemy, of course :-)

If we include not only textbooks but research monographs, there are plenty of other examples, even older ones: H. F. Baker, Abelian Functions, was first published in 1897, reprinted in 1995, and there is a new Russian translation. Just out of curiosity, look at its current citation rate on MathSciNet :-) They also reprinted H. Schubert, Kalkül der abzählenden Geometrie (1879) in 1979, and again you can see from MathSciNet that people are using this.

EDIT: A brief inspection of the most cited (and thus most used) books on MathSciNet shows that a very large proportion of the most cited books are 30-40 years old. Which is easy to explain, by the way.
Thus, in my opinion, such books do not qualify for this list (unless we want to make it infinite).

EDIT2: Today I accidentally found that 3 of the 4 copies of G. N. Watson, A Treatise on the Theory of Bessel Functions (first edition, 1922) are checked out from my university library. MathSciNet shows 1157 citations for the last 2 editions. Another question is old papers which are still highly cited. A typical life span of a paper is much smaller than that of a book. In the list of 100 most cited papers in 2011, I found only two papers published before 1950 (one by Shannon and another by Leray).

– I have an electronic copy of the 1996 reissue of Whittaker and Watson's that details its history: first edition 1902, second edition 1915, third edition 1920, fourth edition 1927. Since then, there were 8 reprints (1935, 1940, 1946, 1950, 1952, 1958, 1962 and 1963) and the 1996 reissue. (Alberto García-Raboso, Dec 28 '12)

Bonnesen and Fenchel, Theorie der konvexen Körper, Springer, Berlin, 1934, was not available in English translation until 1987, although Eggleston's Convexity (1958) draws heavily on it.

Two old books by David Mumford are not mentioned above (unless I am wrong):
1) Introduction to Algebraic Geometry (preliminary version of the first 3 chapters), published and distributed by the Harvard math department, bound in red, and containing 444 pages. At that time (around the end of the 1960s), this book was the unique good way to be introduced to the theory of schemes; the EGAs were not helpful. In 1988 it became The Red Book of Varieties and Schemes (Springer). It is still excellent for learning, and teaching, schemes.
2) The classical and fundamental Geometric Invariant Theory (Springer, 1965), which has two enlarged editions: 1982, 1994.

Daniel Quillen's Homotopical Algebra, 1967.

If one needs to use tools from classical invariant theory or elimination theory, then some books that come to mind are the following [list of titles not preserved in this copy], and there are quite a few more. For Salmon's book, the 4th edition of 1885 might be best. Indeed, as I learned from a paper by Macaulay, it has a discussion (on p. 87) of Cayley's very general formula for the multivariate resultant as the determinant of a complex (see the book by Gelfand, Kapranov and Zelevinsky for a modern account and a reprint of Cayley's paper).

– Since you answered before this was turned into CW by the questioner, your answer stayed in normal mode. Typically moderators would take care of this, but since your answer is the only one affected in this case, I thought it could be more efficient if you turned your answer into CW manually (edit and tick the box). (quid, Dec 28 '12)
– done... (Abdelmalek Abdesselam, Dec 28 '12)
– @Abdelmalek Abdesselam: Can it really be that modern books on computational commutative algebra have not adequately replaced the need to look at a book on "modern higher algebra" from 1876 (or some of the others that you list)? This sounds very surprising. What are examples of things found in such old books that are not available in more recent references? (user30180, Dec 29 '12)
– @Ayanta: Despite the eloquence of your rhetorical question, what you said is simply wrong. For instance, anything involving the classical symbolic method in relation with specific invariants coming from elimination theory is not really accounted for, nor "adequately replaced", in the recent commutative algebra literature.
To form an accurate and informed opinion you need to have a look at the books I mentioned, especially Grace and Young if you only have time to look at one. (Abdelmalek Abdesselam, Dec 31 '12)

Probability Theory, by Feller, Volumes I and II. Oldies but goldies.

When I was an undergrad, at the turn of the millennium, I took a complex analysis class that used (an English translation of) Knopp's 1936 Funktionentheorie.

I would like to mention M. Postnikov's geometry series, Lectures on Geometry, which I always refer to when I need some coherent view in between geometry and analysis. Sometimes I may also refer to Hopf and Alexandroff's Topologie in order to gain some authority...

Rudin's Principles of Mathematical Analysis and Herstein's Topics in Algebra, if not heavily used, are the ideal that many people strive toward in teaching introductory analysis and abstract algebra to undergraduates.

Spivak's five-volume Comprehensive Introduction to Differential Geometry still gets a lot of use, particularly the first two volumes.

My field is dominated by older books, it seems. Gilmer's Multiplicative Ideal Theory came out in 1972 and it's nearly unmatched in the content it covers. We're currently using Kaplansky's Commutative Rings book for the Commutative Algebra course I'm taking at UCR; Atiyah and Macdonald's book is also considered a standard reference for those kinds of courses, and it came out in 1969. And, of course, you can't forget Bourbaki. I'm also partial to Zariski and Samuel's Commutative Algebra texts over other texts in the field, which came out in 1958 and 1961.

In numerical linear algebra, Gantmacher's The Theory of Matrices is still a widely read and cited text (see MathSciNet citations). The Russian original dates back to 1953 (thanks @Giuseppe), and the first English translation is from 1959.
– The first Russian edition is dated 1953. (Giuseppe Tortorella, Jan 7 '13)

Most of the textbooks I use are quite new; the old books are the exception. The oldest book about mathematics I use is Hajós György: Bevezetés a geometriába, a textbook on elementary geometry (in the sense of Euclid). The first edition is from 1950; I have a copy published in 1960. (Edit: it seems there's a German translation.) I'm also using Knuth's The Art of Computer Programming; does that count as old now? The translation of the first volume is based on the second edition, of which the original was published in 1973.

Nathaniel Bowditch is generally regarded as a nineteenth-century American mathematician. His American Practical Navigator has been in continuous print since 1804, and it is still in use today, judging from the comments on Amazon. But perhaps this isn't what was meant by a mathematics book, and perhaps navigation isn't to be considered applied mathematics.
– According to Wikipedia (see en.wikipedia.org/wiki/American_Practical_Navigator), the book has been continually revised since 1804 and at this point contains essentially none of the 19th-century content. (Andy Putman, Feb 6 '13)

Emil Artin's Geometric Algebra (Interscience, 1957) is definitely immortal.
My first thought was Atiyah & Macdonald's An Introduction to Commutative Algebra (which has already been mentioned) and "anything by J.-P. Serre" (that's old enough, of course!). It appears that not quite everything in this latter category has been mentioned; notably, Algèbres de Lie semi-simples complexes, first published in 1966. There is also a later English translation, Complex Semisimple Lie Algebras, published in 1987. While not quite an introduction, I find myself referring back to this text often for its streamlined, beautiful exposition (a hallmark of Serre). It also has the best exposition of root systems I've encountered. Furthermore, another classic text on semisimple Lie algebras, J. Humphreys' Introduction to Lie Algebras and Representation Theory, is a "fleshing out" of Serre's notes. Actually, Humphreys's textbook was first published in 1972, so it might squeeze onto this list too?

Keisler's Calculus: An Approach Using Infinitesimals is a very cool freshman calc book using NSA. It dates back to 1976, and is available for free online: http://www.math.wisc.edu/~keisler/calc.html . Although I'm not aware of anyone who's using Keisler in the classroom today, it's under a Creative Commons license, and there is a newer book by Guichard and Koblitz that incorporates a bunch of material from Keisler: http://www.whitman.edu/mathematics/multivariable/ . In the world of the digital commons, it's a little hard to define how old a book is. It's like asking how old a bacterium is. Bacteria are in some sense immortal. They just evolve. Another wonderful old calc book that is still in print is Calculus Made Easy, by Silvanus Thompson, 1910. I noticed that another answer to this question got heavily downvoted for referring to a book published in the 1980's. The question was: "What are the oldest books regularly used in your field (and which don't feel 'outdated')?" It didn't specify what "used" meant -- used in research, teaching, personal study, ...? The lower you get on the educational totem pole, the shorter the half-life of a book. Someone posted that they liked Disquisitiones Arithmeticae, but that doesn't mean it's being used for teaching number theory to undergrad math majors. For freshman calc, it is extremely unusual for anybody to use anything more than 5 years old. The community college where I teach has an explicit rule forbidding the use of books of more than about that age.

I still think the exposition on elliptic functions in Jacobi's Fundamenta Nova (1829) is one of the best I've encountered if you are interested in the functional relationships. A close second for me is Cayley's An Elementary Treatise on Elliptic Functions (1895), especially for the number of alternative proofs presented and the numerous relationships detailed. Modern books tend to take the algebraic approach, which is obviously extremely important for understanding the true nature of the relationships here, but for those of us who study the field because of its incidental use in combinatorics and generating functions, these older books are a wealth. Also, I have a personal love of Gauss' Disquisitiones Arithmeticae (1798) because it introduced me to number theory at a young age in a way that was very natural and elegant. Again, I appreciated its approach to forms and related topics because it was all easily understandable with middle school algebra.
And finally, more modern: for me, Goldblatt's Topoi: The Categorial Analysis of Logic (1979) is the best introduction to categories one could have, far better in my opinion than even Mac Lane's. That it is also subversive propaganda for constructivism is a huge bonus.
– A book edited in '79 is not old! (Mariano Suárez-Alvarez, Jan 10 '13)

Many systematic introductions to the foundations of the edifice of Differential Geometry appeared in the sixties, and they are useful references even today. Some of them are:
• Lang, Introduction to Differentiable Manifolds, 1962;
• Helgason, Differential Geometry and Symmetric Spaces, 1962;
• Kobayashi, Nomizu, Foundations of Differential Geometry, 1st Vol. 1963, 2nd Vol. 1969;
• Sternberg, Lectures on Differential Geometry, 1964;
• Bishop, Crittenden, Geometry of Manifolds, 1964.

That depends if you speak of research books or advanced textbooks. In the second category, I should place
• Rudin's Real and Complex Analysis (1966),
• J.-P. Serre's Cours d'Arithmétique (1970) (hope you will forgive me),
• Lang's Algebra (1st edition 1965).
In the first category, I see
• Kato's Perturbation Theory for Linear Operators (1966),
• Courant & Hilbert's Methods of Mathematical Physics (1924),
• Courant & Friedrichs' Supersonic Flow and Shock Waves (1948),
• V. I. Arnold's Mathematical Methods of Classical Mechanics (1974).
– +1 for Courant & Hilbert! (Igor Khavkine, Dec 28 '12)
– @Qfwfq: Well, we used it when I was a junior, so it had already appeared in 1975. But I don't know the original publication date offhand. (Joe Silverman, Dec 29 '12)
– +1 for the last four, in particular for Kato and Arnold. (RSG, Jan 7 '13)

No one suggests Weyl's Classical Groups? It was first published in 1939. I don't know if researchers in representation theory and invariant theory value it nowadays, but it is still frequently cited in the random matrix literature.

I'm surprised that nobody has mentioned Serre's Corps locaux (Local Fields), his Cohomologie galoisienne (Galois Cohomology) and his Représentations linéaires des groupes finis (Linear Representations of Finite Groups). Other eternal texts in number theory include Artin's Algebraic Numbers and Algebraic Functions and the Artin-Tate notes on class field theory, Hasse's Zahlentheorie and his Klassenkörperbericht, Hecke's Vorlesungen über die Theorie der algebraischen Zahlen, Weyl's Algebraic Theory of Numbers, and Hilbert's Zahlbericht.

G. N. Watson's A Treatise on the Theory of Bessel Functions (1922).

Tate's thesis, Fourier Analysis in Number Fields and Hecke's Zeta-Functions, is from 1950 and is certainly still considered a primary reference on the subject (in addition to being the original resource).

In classical invariant theory, both The Algebra of Invariants by Grace and Young and An Introduction to the Algebra of Quantics by Elliott are still much in use. The latest edition of Grace and Young is 1903 and of Elliott 1913.
– It seems these were already mentioned in Abdelmalek Abdesselam's answer. (quid, Jan 2 '13)

Barry Simon and Michael Reed's classic volume on Functional Analysis (1981) is one of my favorites.
Ayoub, An Introduction to the Analytic Theory of Numbers (1963), is out of print but one of the best books on the subject.

Introduction to Commutative Algebra by Atiyah and Macdonald is from 1969. (I learnt commutative algebra from this book at the University of Oslo just a few years ago.)

Meet the Rudins: Baby Rudin (first published in 1953), Papa Rudin (whose oldest copyright I've been able to find dates back to 1966) and Grandaddy Rudin (1973 is the oldest reference I've found).
– Also I would add his book Functional Analysis. (Vahid Shirbisheh, Dec 28 '12)
– Also known as Grandaddy Rudin. (Nate Eldredge, Dec 30 '12)
– @Robert: judging by the year it was published, I suppose it should be adolescent Rudin. (Alberto García-Raboso, Jan 7 '13)
– A rare example of a family where the granddaddy is the youngest... (Daniel McLaury, Oct 13 '13)

Dickson's History of the Theory of Numbers is not only old (1919), but it reviews material which is even older. I found it extremely useful when calculating some family Gromov-Witten invariants in a recent paper with Jarek Kedra: while performing the arithmetic manipulations in Section 8, we would have been lost without the wealth of formulae in Dickson. I've no doubt the material appears elsewhere, but Dickson has a comprehensive and carefully historical approach.

Hardy, Divergent Series (1949)
Naimark, Normed Rings (1968)
Maurin, Methods of Hilbert Spaces (1959)
Hille & Phillips, Functional Analysis and Semigroups (1957)
{"url":"http://mathoverflow.net/questions/117415/old-books-still-used?answertab=active","timestamp":"2014-04-18T19:06:15Z","content_type":null,"content_length":"167626","record_id":"<urn:uuid:ab0b856e-0f82-4668-bd18-377caa05ff8e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Fluid Limits of a Resource Sharing Algorithm with Logarithmic Weights
Seminar Room 1, Newton Institute

The properties of a class of resource allocation algorithms for communication networks are presented in this talk. The algorithm is as follows: if a node of this network has x requests to transmit, then it receives a fraction of the capacity proportional to log(x), the logarithm of its current load. A fluid scaling analysis of such a network is presented. It is shown that several different time scales play an important role in the evolution of such a system, and an interesting interaction-of-time-scales phenomenon is exhibited. It is also shown that these algorithms with logarithmic weights have remarkable, unusual fairness properties. A heavy traffic limit theorem for the invariant distribution is proved. Joint work with Amandine Veber.
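As a purely illustrative aside (not taken from the talk or the underlying paper), the allocation rule in the first sentence is easy to write down: each node's share of the capacity is its log-weight divided by the sum of all log-weights. In the toy sketch below, the +1 inside the logarithm and the example loads are my own choices, added only to keep the computation well defined:

    /* Toy illustration of "share proportional to the log of the current load". */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double capacity = 1.0;
        double load[3] = {2.0, 20.0, 200.0};   /* pending requests at three nodes */
        double weight[3], total = 0.0;
        for (int i = 0; i < 3; ++i) {
            weight[i] = log(1.0 + load[i]);    /* log-weight; the +1 avoids log(0) */
            total += weight[i];
        }
        for (int i = 0; i < 3; ++i)
            printf("node %d share = %.3f\n", i, capacity * weight[i] / total);
        return 0;
    }

With loads 2, 20 and 200 the shares come out far closer together than the loads themselves, which is one informal way to see the unusual fairness behaviour mentioned in the abstract.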
{"url":"http://www.newton.ac.uk/programmes/SCS/seminars/2013081210001.html","timestamp":"2014-04-19T13:08:53Z","content_type":null,"content_length":"6452","record_id":"<urn:uuid:9deb70b6-e5ce-447c-9fab-89a49e9026c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Call C Functions in C Charts

Call C Library Functions

You can call this subset of the C Math Library functions:

    abs*,**   acos**   asin**   atan**   atan2**   ceil**
    cos**     cosh**   exp**    fabs     floor**   fmod**
    labs      ldexp**  log**    log10**  pow**     rand
    sin**     sinh**   sqrt**   tan**    tanh**

* The Stateflow® abs function goes beyond its standard C counterpart with its own built-in functionality. See Call the abs Function.
** You can also replace calls to the C Math Library with target-specific implementations for this subset of functions. For more information, see Replacement of C Math Library Functions with Target-Specific Implementations.

When you call these math functions, double precision applies unless all the input arguments are explicitly single precision. When a type mismatch occurs, a cast of the input arguments to the expected type replaces the original arguments. For example, if you call the sin function with an integer argument, a cast of the input argument to a floating-point number of type double replaces the original argument.

If you call other C library functions not listed above, include the appropriate #include... statement in the Simulation Target > Custom Code pane of the Model Configuration Parameters dialog box.

Call the abs Function

Interpretation of the Stateflow abs function goes beyond the standard C version to include integer and floating-point arguments of all types, as follows:
● If x is an integer of type int32, the standard C function abs applies to x, or abs(x).
● If x is an integer of a type other than int32, the standard C abs function applies to a cast of x as an integer of type int32, or abs((int32)x).
● If x is a floating-point number of type double, the standard C function fabs applies to x, or fabs(x).
● If x is a floating-point number of type single, the standard C function fabs applies to a cast of x as a double, or fabs((double)x).
● If x is a fixed-point number, the standard C function fabs applies to a cast of the fixed-point number as a double, or fabs((double)V[x]), where V[x] is the real-world value of x.

If you want to use the abs function in the strict sense of standard C, cast its argument or return values to integer types. See Type Cast Operations.

Note: If you declare x in custom code, the standard C abs function applies in all cases. For instructions on inserting custom code into charts, see Share Data Using Custom C Code.

Call min and max Functions

You can call min and max; the following macros are emitted automatically at the top of generated code:

    #define min(x1,x2) ((x1) > (x2) ? (x2):(x1))
    #define max(x1,x2) ((x1) > (x2) ? (x1):(x2))

To allow compatibility with user graphical functions named min() or max(), generated code uses a mangled name of the following form: <prefix>_min. However, if you export min() or max() graphical functions to other charts in your model, the names of these functions can no longer be emitted with mangled names in generated code and a conflict occurs. To avoid this conflict, rename the min() and max() graphical functions.

Replacement of C Math Library Functions with Target-Specific Implementations

You can use the code replacement library published by Embedded Coder® code generation software to replace the default implementations of a subset of C library functions with target-specific implementations (see Supported Functions for Code Replacement).
When you specify a code replacement library, Stateflow software generates code that calls the target implementations instead of the associated C library functions. Stateflow software also uses target implementations in cases where the compiler generates calls to math functions, such as in fixed-point arithmetic utilities.

Use of Code Replacement Libraries

To learn how to create and register code replacement tables in a library, see Introduction to Code Replacement Libraries and Map Math Functions to Application-Specific Implementations in the Embedded Coder documentation.

Supported Functions for Code Replacement

You can replace the following math functions with target-specific implementations:

    Floating-point and integer:  abs, max, min
    Floating-point only:         acos, asin, atan, atan2, ceil, cos, cosh, exp,
                                 floor, fmod, ldexp, log, log10, pow, sin, sinh,
                                 sqrt, tan, tanh

Replacement of Calls to abs

Replacement of calls to abs can occur as follows:

    Type of argument for abs         Result
    Floating-point                   Replacement with target-specific implementation
    Integer                          Replacement with target-specific implementation
    Fixed-point with zero bias       Replacement with ANSI C function
    Fixed-point with nonzero bias    Error

Call Custom C Code Functions

You can specify custom code functions for use in C charts for simulation and C code generation.

Specify Custom C Functions for Simulation

To specify custom C functions for simulation:
1. Open the Model Configuration Parameters dialog box.
2. Select Simulation Target > Custom Code.
3. Specify your custom C files, as described in Integrate Custom C Code for Nonlibrary Charts for Simulation.

Specify Custom C Functions for Code Generation

To specify custom C functions for code generation:
1. Open the Model Configuration Parameters dialog box.
2. Select Code Generation > Custom Code.
3. Specify your custom C files, as described in Integrate Custom C Code for Nonlibrary Charts for Code Generation.

Guidelines for Calling Custom C Functions in Your Chart

● Define a function by its name, any arguments in parentheses, and an optional semicolon.
● Pass string parameters to user-written functions using single quotation marks. For example, func('string').
● An action can nest function calls.
● An action can invoke functions that return a scalar value (of type double in the case of MATLAB® functions, and of any type in the case of C user-written functions).

Guidelines for Writing Custom C Functions That Access Input Vectors

● Use the sizeof function to determine the length of an input vector. For example, your custom function can include a for-loop that uses sizeof as follows:

    for(i=0; i < sizeof(input); i++) {
        ...
    }

● If your custom function uses the value of the input vector length multiple times, include an input to your function that specifies the input vector length.
For example, you can use input_length as the second input to a sum function as follows:

    int sum(double *input, double input_length)

Your sum function can include a for-loop that iterates over all elements of the input vector:

    for(i=0; i < input_length; i++) {
        ...
    }

Function Call in Transition Action

Example formats of function calls using transition action notation appear in the following chart (figure not reproduced here). A function call to fcn1 with arg1, arg2, and arg3 occurs if the conditions on the transition are true. The transition action in the transition from S2 to S3 shows a function call nested within another function call.

Function Call in State Action

Example formats of function calls using state action notation appear in the following chart (figure not reproduced here). Chart execution occurs as follows:
1. When the default transition into S1 occurs, S1 becomes active.
2. The entry action, a function call to fcn1 with the specified arguments, executes.
3. After 5 seconds of simulation time, S1 becomes inactive and S2 becomes active.
4. The during action, a function call to fcn2 with the specified arguments, executes.
5. After 10 seconds of simulation time, S2 becomes inactive and S1 becomes active again.
6. Steps 2 through 5 repeat until the simulation ends.

Pass Arguments by Reference

A Stateflow action can pass arguments to a user-written function by reference rather than by value. In particular, an action can pass a pointer to a value rather than the value itself. For example, an action could contain the following call:

    f(&x);

where f is a custom-code C function that expects a pointer to x as an argument. If x is the name of a data item defined in the Stateflow hierarchy, the following rules apply:
● Do not use pointers to pass data items input from a Simulink model. If you need to pass an input item by reference, for example an array, assign the item to a local data item and pass the local item by reference.
● If x is a Simulink output data item having a data type other than double, the chart property Use Strong Data Typing with Simulink I/O must be on (see Specify Chart Properties).
● If the data type of x is boolean, you must turn off the coder option Use bitsets for storing state configuration (see How to Optimize Generated Code for Embeddable Targets).
● If x is an array with its first index property set to 0 (see Set Data Properties), then you must call the function as follows:

    f(&(x[0]));

  This passes a pointer to the first element of x to the function.
● If x is an array with its first index property set to a nonzero number (for example, 1), the function must be called in the following way:

    f(&(x[1]));

  This passes a pointer to the first element of x to the function.
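Putting the earlier pieces together, one complete version of the sum helper could look as follows. This is an illustrative implementation, not code taken from the documentation; it keeps the exact prototype shown above (int return value, length passed as a double in input_length) and simply accumulates the elements:

    /* Illustrative custom C function matching the prototype discussed above.
       Once its source file is listed under Simulation Target > Custom Code,
       a chart action might call it as, e.g., y = sum(u, 4);  (names are
       examples only). */
    int sum(double *input, double input_length)
    {
        double total = 0.0;
        int i;
        for (i = 0; i < input_length; i++) {   /* int index compared against the double length */
            total += input[i];
        }
        return (int)total;                     /* truncates, because the prototype returns int */
    }

Returning int follows the prototype given in the text; in practice one might prefer a double return value, but that would change the signature shown above.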
{"url":"http://www.mathworks.com.au/help/stateflow/ug/calling-c-functions-in-actions.html?nocookie=true","timestamp":"2014-04-25T01:42:32Z","content_type":null,"content_length":"59301","record_id":"<urn:uuid:8bc58a70-9b31-4620-b87a-650bd09d8fe6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Weighing the implications of negative-mass antimatter Antimatter is anti- a lot of things, but it’s not supposed to be antigravity. An antimatter particle does have the opposite electric charge of its ordinary matter counterpart. But the mass of both particles should be precisely equal. When an antiparticle meets its ordinary particle partner, they annihilate in a burst of energy, equal to their masses squared in accordance with Einstein’s famous formula E=mc^2. Because you get energy out by destroying antimatter, you have to put energy in to create antimatter, which implies that antimatter’s mass must be positive. That’s one reason why most physicists immediately dismiss the idea that antimatter could have negative mass. But thanks to some technical complications in the definition of mass, the case is not closed. To figure out whether such a thing as negative mass is possible in the universe, you have to know what kind of mass you’re talking about. Sometimes you’ll hear physicists say there are two kinds of mass: inertial and gravitational. But actually mass is more like Gaul — it can be divided into three sorts: inertial mass, active gravitational mass and passive gravitational mass. Inertial mass represents an object’s resistance to changes in its state of motion. (That’s the type of mass that particles acquire by interacting with the Higgs field, the source of the Higgs boson.) Active gravitational mass is the source of a gravitational field; passive gravitational mass is the mass that a field acts on. In Newton’s physics, active and passive gravitational mass must be equal, by virtue of his third law of motion (the one about equal and opposite reactions). But for Newton it seemed to be just a happy coincidence that gravitational mass equaled inertial mass (which is the mass in Newton’s second law, about force and acceleration). Einstein, though, required inertial and gravitational mass to be equal — that was the principle on which he based his theory of gravity, general relativity. If gravitational and inertial mass are equal, then antimatter can’t have negative gravitational mass, because it certainly has positive inertial mass. But suppose general relativity turns out not to be the last word on gravity. Then antimatter, despite having positive inertial mass, might still have negative gravitational mass. That’s why several teams are doing or planning experiments to find out. “In a world in which physicists have only recently discovered that we cannot account for most of the matter and energy in the universe, it would be presumptuous to categorically assert that the gravitational mass of antimatter necessarily equals its inertial mass,” physicists from the ALPHA collaboration at the European research laboratory CERN wrote in a recent paper. Physicists have debating this point for decades. Some have contended that negative-gravity antimatter would violate the law of conservation of energy. But that argument apparently doesn’t hold up to advanced analysis invoking quantum physics. And you can’t very well say it must be so because general relativity says so, because the whole point is to test general relativity to see if it’s really right. And after all, since general relativity does not reconcile itself with quantum physics very well, something has to give somewhere. 
If antimatter does possess negative gravitational mass, the gravitational force on an antiatom would point up, and so it would “fall” upward (because its inertial mass is positive, it accelerates in the direction of the force). But it’s not exactly easy to measure which way antiatoms fall. One problem is making antimatter atoms in the first place, but that has been done at CERN. Still, apparatus of extreme delicacy is required to test the effect of gravity on antimatter. One early attempt, reported this year in Nature Communications by the ALPHA team, recorded the demise of 434 antihydrogen atoms confined in a magnetic trap. When the magnetism is turned off, the antiatoms collide with the walls of the container and annihilate. If antiatoms fall up instead of down, more of the annihilations should occur near the top of the container than the bottom. That sounds simpler than it is. It’s a very complex experiment. The location of an annihilation depends on all sorts of things, such as how fast the magnetic field decays after it’s turned off and the energy possessed by the individual atoms. All those factors make the results rather imprecise. If all went well, such an experiment could determine the precise ratio of antimatter’s gravitational to inertial mass. If Einstein’s general relativity is right, inertial and gravitational mass are identical, so that ratio would be exactly equal to 1. If antimatter has negative gravitational mass (of equal magnitude), that ratio would be –1. So far, the best the ALPHA experiment can conclude is that the ratio is probably no more than 110 or less than –75, which doesn’t exactly come close to answering the question. Refinements in the technique may someday narrow that range. And there are other proposals, such as one using muonium (an antimuon plus electron) that might offer better precision. (A muonium experiment would have the additional benefit of testing whether antileptons behave the same way with respect to gravity as antiquarks, the main components of antihydrogen atoms.) If any of the proposed experiments succeed in identifying negative mass of some sort, it won’t be the first time that taking a negative idea seriously led to an important discovery. Antimatter itself was first imagined by the physicist Paul Dirac only after he decided to take the idea of negative energy seriously, not to mention the even more bizarre notion of negative probabilities. “Negative energies and probabilities should not be considered as nonsense,” Dirac declared in a lecture in 1941. “They are well-defined concepts mathematically, like a negative sum of money.” So maybe negative mass should not be considered as nonsense, either. But you shouldn’t bet on it. You’d probably end up with a negative sum of money. Note: To comment, Science News subscribing members must now establish a separate login relationship with Disqus. Click the Disqus icon below, enter your e-mail and click “forgot password” to reset your password. You may also log into Disqus using Facebook, Twitter or Google.
{"url":"https://www.sciencenews.org/blog/context/weighing-implications-negative-mass-antimatter","timestamp":"2014-04-19T15:50:57Z","content_type":null,"content_length":"79040","record_id":"<urn:uuid:3bfb7bc3-044a-4fc6-b2a8-577b408f881e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Mr. Bishara is tutoring Steph in Algebra II, Exponential Functions! I’m back after a week of programming a nonsensical multithreaded program in Java that works about every 7 out of 8 times that I run it. 931 more words Question: Five people witnessed a murder and they each gave a slightly different description of the suspect. Which description is most likely to be correct? 237 more words Today is day 18 of Ouliposting…and it was one of those more “surreal” challenges where I just had to follow the rules and let the words come as they may…there’s obviously some interesting word choices, but as an overall poem, it’s not my favorite of the month…! 259 more words Lylanne Musselman For today’s installment of the A-to-Z challenge, I offer a bit of my mathematical background. The poetic form is pleiades with origins in the constellation Taurus. 118 more words This week,I’ve learned alot from taking on my consumer math project and even though it was vacation that didn’t stop my determination. I don’t know if its the lack of writing or pure laziness that put me in this stump but i do know for sure that moping around and stressing about it isn’t the way to go. 282 more words I can’t emphasize how ironic it is to me that I’m getting an MBA at times. I’m just not a math guy. This doesn’t mean I’m weak at math, however I wouldn’t consider it a strength, but taking a class that is strictly focusing on crunching numbers and solving equations without a purpose other than solving for the correct number isn’t terribly stimulating to me. 863 more words
{"url":"http://en.wordpress.com/tag/math/","timestamp":"2014-04-19T09:32:13Z","content_type":null,"content_length":"86961","record_id":"<urn:uuid:59565f88-2ffb-4a4f-ad07-9d637e7c1eb8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
hmm: implementation of viterbi algorithm (Durbin, 1998) Part 2

The previous post presented the problem of a dishonest casino that occasionally uses a loaded die. The sequence of the real states is hidden, and we are trying to figure it out just by looking at the observations (symbols).

    # backtracking algorithm
    for (i in 2:length(symbol.sequence)) {
      # the probability vector stores the current emission together with the best
      # score at observation (i-1) for the selected state plus the transition probability;
      # the state vector (pointer), on the other hand, only stores the most probable
      # state at (i-1), which we will later use for backtracking
      tmp.path.probability <- lapply(states, function(l) {
        max.k <- unlist(lapply(states, function(k) {
          prob.history[i - 1, k] + transition.matrix[k, l]
        }))
        return(c(states[which(max.k == max(max.k))],
                 max(max.k) + emission.matrix[symbol.sequence[i], l]))
      })

      prob.history <- rbind(prob.history,
                            data.frame(F = as.numeric(tmp.path.probability[[1]][2]),
                                       L = as.numeric(tmp.path.probability[[2]][2])))
      state.history <- data.frame(F = c(as.character(state.history[, tmp.path.probability[[1]][1]]), "F"),
                                  L = c(as.character(state.history[, tmp.path.probability[[2]][1]]), "L"))
    }

    # selecting the most probable path
    viterbi.path <- as.character(state.history[, c("F", "L")[which(max(prob.history[length(symbol.sequence), ]) == prob.history[length(symbol.sequence), ])]])

If we apply our implementation to the data in the previous post, we can get an idea of how well the HMM reconstructs the real history.

    viterbi.table <- table(viterbi.path == real.path)
    cat(paste(round(viterbi.table["TRUE"] / sum(viterbi.table) * 100, 2), "% accuracy\n", sep = ""))
    # 71.33% accuracy

Cheers, mintgene.

One thought on "hmm: implementation of viterbi algorithm (Durbin, 1998) Part 2"
1. Nice intro! Thanks!
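For reference, and in the notation of Durbin et al. rather than anything written in the post itself, the backtracking loop above computes the usual log-space Viterbi recursion:

    V_l(i) = e_l(x_i) + \max_k \bigl[ V_k(i-1) + a_{kl} \bigr],
    \qquad
    \mathrm{ptr}_i(l) = \operatorname{argmax}_k \bigl[ V_k(i-1) + a_{kl} \bigr]

Here V corresponds to prob.history, a_{kl} to transition.matrix[k, l], e_l(x_i) to emission.matrix[symbol.sequence[i], l], and the pointer table to state.history; the final line of the code then reads off the column with the highest terminal score.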
{"url":"http://mintgene.wordpress.com/2012/01/29/hmm-implementation-of-viterbi-algorithm-durbin-1998-part-2/","timestamp":"2014-04-16T13:16:14Z","content_type":null,"content_length":"50199","record_id":"<urn:uuid:10943c93-bda5-4d61-a50b-97d80de7d38d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
TweetCondensed matter physics – the branch of physics responsible for discovering and describing most of these phases – has traditionally classified phases by the way their fundamental building blocks – usually atoms – are arranged. The key is something called symmetry. Using modern mathematics – specifically group cohomology theory and group super-cohomology theory – the researchers have constructed and classified the symmetry-protected phases in any number of dimensions and for any symmetries. Their new classification system will provide insight about these quantum phases of matter, which may in turn increase our ability to design states of matter for use in superconductors or quantum computers. Examples of symmetry-protected phases include some topological superconductors and topological insulators, which are of widespread immediate interest because they show promise for use in the coming first generation of quantum electronics. To understand symmetry, imagine flying through liquid water in an impossibly tiny ship: the atoms would swirl randomly around you and every direction – whether up, down, or sideways – would be the same. The technical term for this is "symmetry" – and liquids are highly symmetric. Crystal ice, another phase of water, is less symmetric. If you flew through ice in the same way, you would see the straight rows of crystalline structures passing as regularly as the girders of an unfinished skyscraper. Certain angles would give you different views. Certain paths would be blocked, others wide open. Ice has many symmetries – every "floor" and every "room" would look the same, for instance – but physicists would say that the high symmetry of liquid water is broken. Classifying the phases of matter by describing their symmetries and where and how those symmetries break is known as the Landau paradigm. More than simply a way of arranging the phases of matter into a chart, Landau’s theory is a powerful tool which both guides scientists in discovering new phases of matter and helps them grapple with the behaviours of the known phases. Physicists were so pleased with Landau’s theory that for a long time they believed that all phases of matter could be described by symmetries. That’s why it was an eye-opening experience when they discovered a handful of phases that Landau couldn’t describe. New states contain a new kind of order: topological order. Topological order is a quantum mechanical phenomenon: it is not related to the symmetry of the ground state, but instead to the global properties of the ground state’s wave function. Therefore, it transcends the Landau paradigm, which is based on classical physics concepts. String net theory of light and electrons Science - etry-Protected Topological Orders in Interacting Bosonic Systems Order Parameters, Broken Symmetry, and Topology - Online introduction to the theoretical framework used to study the bewildering variety of phases in condensed-matter physics. They emphasize the importance of the breaking of symmetries, and develop the idea of an order parameter through several examples. They discuss elementary excitations and the topological theory of Tweet Infinite Z has a virtual holographic 3D display and pen input device that goes by the name zSpace. This technology combines stereoscopic images with infrared cameras that actually track head and hand movements to construct a more realistic holographic effect. For the zSpace illusion to work, you need to wear a pair of special glasses. 
Not only do the glasses perform the required image separation for stereoscopy, but they also have embedded infrared reflectors to help the system track your head. This allows you to move your head so that you can view a hovering object from different perspectives. The screen actually changes what is being displayed based on where you’re looking at it. This innovation allows the illusion of three dimensions to work much more effectively. The system has real-world uses now in architecture and medicine. The zSpace is designed for professionals working in fields like 3D modeling, so it is priced accordingly. It’s available for $3,995 and people enrolled in Infinite Z’s developer program can buy a device for only $1,500. “Virtual Holographic 3-D,” also lets you manipulate virtual objects as if they really were floating just inches in front of you. The special stylus connected to the display also contains sensors that allow its movement to be tracked in three dimensions. You can use the stylus to “grab” parts of the virtual image in front of you and move them around in 3-D space. TweetMIT researchers have produced a new kind of photovoltaic cell based on sheets of flexible graphene coated with a layer of nanowires. The approach could lead to low-cost, transparent and flexible solar cells that could be deployed on windows, roofs or other surfaces. Illustration shows the layered structure of the new device, starting with a flexible layer of graphene, a one-atom-thick carbon material. A layer of polymer is bonded to that, and then a layer of zinc-oxide nano wires (shown in magenta), and finally a layer of a material that can extract energy from sunlight, such as quantum dots or a polymer-based material. Illustration courtesy of the research team Nanoletters - Graphene Cathode-Based ZnO Nanowire Hybrid Solar Cells TweetARPA seeks innovative approaches that ensure robust communications in such congested and contested environments. The DARPA Spectrum Challenge is a competition for teams to create software-defined radio protocols that best use communication channels in the presence of other users and interfering signals. Using a standardized radio hardware platform, the team that finds the best strategies for guaranteeing successful communication in the presence of other competing radios will win. In addition to bragging rights for the winning teams, one team could win as much as $150,000. High priority radios in the military and civilian sectors must be able to operate regardless of the ambient electromagnetic environment, to avoid disruption of communications and potential loss of life. Rapid response operations, such as disaster relief, further motivate the desire for multiple radio networks to effectively share the spectrum without requiring direct coordination or spectrum preplanning. Consequently, the need to provide robust communications in the presence of interfering signals is of great importance. TweetOrbitec has flown a radical new engine technology that promises to cut the size, weight and therefore the cost of putting a rocket – and payload – into space. Regular rocket engines get incredibly hot, reaching temperatures upwards of 3,000C (5,400F) or more, hot enough to melt the metal chamber in which the rocket fuel mixes with oxygen and burns. At these extremes, even rockets with sidewalls made of heat-resistant superalloys would fail catastrophically. Orbitec’s alternative approach keeps the hot burning gases away from the chamber surfaces altogether. 
The company’s patented designs create a cyclonic swirl, or vortex, of fuel and oxygen that holds the searing gases and fumes in the very centre of the cylindrical combustion chamber, away from the vulnerable sidewalls. “Our vortex generator eliminates the high temperatures at the inner surfaces of the engine,” says Martin Chiaverini, principal propulsion engineer at the firm. “You can touch the exterior during lab-test firings and not get burned.” The vortex, or swirl, is produced by placing the oxidiser nozzles at the base of the combustion chamber and aiming them tangentially to the inner surface of its curving walls. This produces an outer vortex of cool gases that spiral up the walls forming a protective, cooling barrier. When this meets the top of the chamber it is mixed with rocket fuel and forced inward and down, forming a second, inner, descending vortex in the centre of the chamber that is concentrated like a tornado Parabolic Arc covered the October launch. Tweet 1. Commercial operation of the Kudankulam Nuclear Power Project has been delayed to January, 2013 It is India's first 1,000MW first unit. 2. Russia has told India that Kudankulam nuclear power plants 3 and 4 would cost “double”, after New Delhi decided that the next two reactors would come under the new civil nuclear liability law, and not be covered by the agreement on Kudankulam 1 and 2. Russia extending credit lines worth $3.2 billion for Kudankulam 3 and 4 early this year, initial costs had been estimated to be between $ 6 billion and $ 7 billion. This figure could now double. Moscow had urged New Delhi to recognise Kudankulam 3 and 4 as being grandfathered under the agreement for Kudankulam 1 and 2, and argued that the inter-governmental agreement of 2008, which firms up plans for setting up four additional reactors, was done before the liability law. TweetResearchers from North Carolina State University have created conductive wires that can be stretched up to eight times their original length while still functioning. The wires can be used for everything from headphones to phone chargers, and hold potential for use in electronic textiles. To make the wires, researchers start with a thin tube made of an extremely elastic polymer and then fill the tube with a liquid metal alloy of gallium and indium, which is an efficient conductor of The tube, filled with liquid metal, can be stretched many times its original length. TweetThe World Bank’s latest East Asia and Pacific Economic Update released today, projects the Asia region will grow at 7.5 percent in 2012, lower than the 8.3 percent registered in 2011, but set to recover to 7.9 percent in 2013. With weak demand for exports from global markets, domestic demand has remained the main driver of growth for most economies of the region. The region’s economic performance in 2012, the report says, was affected by China’s economic slowdown. China’s growth is projected to reach 7.9 percent in 2012, 1.4 percentage point lower than last year’s 9.3 percent and the lowest growth rate since 1999. Weak exports and the government’s efforts to cool down the overheating housing sector slowed down China’s economy in 2012, but recovery has set in the final months of the year. In 2013, China’s economy is expected to grow at 8.4 percent, fueled by fiscal stimulus and the faster implementation of large investment projects. Asia contributed almost 40 percent of global growth in 2012, and should have a similar share in 2013. 
World Bank projections - China 2013 GDP Growth 8.4% China 2014 GDP Growth 8.0% Developing East Asia (excluding China) 2011 GDP growth 4.4% [Actual] Developing East Asia (excluding China) 2012 GDP growth 5.6% [Estimate] Developing East Asia (excluding China) 2012 GDP growth 5.7% [Projection] Developing East Asia (excluding China) 2012 GDP growth 5.8% [Projection] Continuing strong performances by Indonesia, Malaysia, and the Philippines will boost developing East Asia, excluding China. TweetPsy's Gangnam style video has made it to the one billion view mark on Youtube. Lipdubs, spoofs and commentaries of Gangnam Style are seen an estimated 20 million times a day. Here is another parody - NASA Johnson style TweetIn a December 3, 2012 Lawrencevill Plasma Physics (LPP) status report - Two shots with no arcing indicate the problem is solved, although more proof is needed. A leak held up key tests while we implement a solution. Analysis of photos taken in October confirms our understanding of plasmoid structure. Arcing appears to be solved for now, but a leak causes delay. A twitter update indicates the leak was fixed. In May, 2012, the LPP plan for the next 12 months was laid out. To gain higher yield and to attain “feasibility” the following steps are being done over the course of the next year (2013): 1) The “teeth that chew the sheath” tungsten crown to regularize the filaments - 10-100x yield 2) Full power output of Capacitors and to ‘Imitate’ the heavier mixture of pB11 by using Deuterium/Nitrogen. 3) Shorter Electrodes, slower run down, more fill gas. 4) New Raytheon switches for more Current from capacitors - 10x yield. 5) Switch to pB11 (incrementally higher percentage from the D/N mix) - 15x yield. Goal: 30 kJ* gross fusion energy per shot proves feasibility of a positive net power output Generator using aneutronic fuel! *A 5MW production reactor would have about 66 kJ gross fusion yield per shot* TweetThe metabolic characteristics of long-lived mice are described in a paper in the Frontiers in genetic aging. Mice lacking Growth Hormone (GH) or GH receptors show numerous symptoms of delayed aging, are partially protected from age-related diseases, and outlive their normal siblings by 30–65% depending on genetic background, sex, and diet composition. Importantly, many of the metabolic features of long-lived mutant mice described in this article have been associated with extended human longevity. Comparisons between centenarians and elderly individuals from the same population and between the offspring of exceptionally long-lived people and their partners indicate that reduced insulin, improved insulin sensitivity, increased adiponectin, and reduced pro-inflammatory markers consistently correlate with improved life expectancy. Genetic suppression of insulin/insulin-like growth factor signaling (IIS) can extend longevity in worms, insects, and mammals. In laboratory mice, mutations with the greatest, most consistent, and best documented positive impact on lifespan are those that disrupt growth hormone (GH) release or actions. These mutations lead to major alterations in IIS but also have a variety of effects that are not directly related to the actions of insulin or insulin-like growth factor I. 
Long-lived GH-resistant GHR-KO mice with targeted disruption of the GH receptor gene, as well as Ames dwarf (Prop1df) and Snell dwarf (Pit1dw) mice lacking GH (along with prolactin and TSH), are diminutive in size and have major alterations in body composition and metabolic parameters including increased subcutaneous adiposity, increased relative brain weight, small liver, hypoinsulinemia, mild hypoglycemia, increased adiponectin levels and insulin sensitivity, and reduced serum lipids. Body temperature is reduced in Ames, Snell, and female GHR-KO mice. Indirect calorimetry revealed that both Ames dwarf and GHR-KO mice utilize more oxygen per gram (g) of body weight than sex- and age-matched normal animals from the same strain. They also have reduced respiratory quotient, implying greater reliance on fats, as opposed to carbohydrates, as an energy source. Differences in oxygen consumption (VO2) were seen in animals fed or fasted during the measurements as well as in animals that had been exposed to 30% calorie restriction or every-other-day feeding. However, at the thermoneutral temperature of 30°C, VO2 did not differ between GHR-KO and normal mice. Thus, the increased metabolic rate of the GHR-KO mice, at a standard animal room temperature of 23°C, is apparently related to increased energy demands for thermoregulation in these diminutive animals. We suspect that increased oxidative metabolism combined with enhanced fatty acid oxidation contribute to the extended longevity of GHR-KO mice. TweetA dinosaur tooth found in Argentina that is 32% longer than the tooth for the biggest sauropod suggests either a very big dinosaur or one with unusually large teeth. The largest had been thought to be 30 meters long and weighed as much as 80 tons. The weights were recently adjusted lower to about 23 tons with new analysis. The new dinosaur might be up to 40 meters long if the body was proportional to the tooth. The tooth MML-Pv 1030 comes from the Upper Cretaceous (middle Campanian–lower Maastrichtian) strata of the Allen Formation at Salitral de Santa Rosa, Río Negro, Argentina and is the biggest titanosaur tooth yet described. The specimen is a cylindrical chisel-like tooth, its length is 75 mm, mesiodistally 15 mm and labiolingually 11 mm. The wear facet is single on the lingual side of the tooth, which has an oval outline with a low angle (10°) with respect to the axial axis of the tooth. This tooth is 32% greater in length than the longest tooth registered in a titanosaurid (Nemegtosaurus), and twice the tooth size of taxa as Tapuiasaurus, Bonitasaura and Pitekunsaurus. Detailed descriptions of the tooth morphology and a highlight of comparative relationships among known titanosaur teeth are provided. Finally, different aspects are suggested related to morphology and feeding behavior. Tweet This site has discussed the normal technology adoption cycle which takes years to decades for something new to scale up would normally provide some time for acclimation. People have coping mechanisms to prevent psychological shock and to help them prevent admitting that their world view was wrong. People will also not bother to be precise about their claims and statements. The moon landings were shocking to many people and then the effort to go to the moon was stopped. Quite a large number of people would then claim that the landings were faked. The lack of follow up also lets them dismiss space development as retro-futurism. 
There are working flying cars, some are sold in special legal categories, and a likely modest commercial success will come with the Terrafugia Transition. However, the small volume will let people dismiss this development. Even if flying cars had total units deployed of 500,000, this would not be considered the "promised future". 500,000 would still be more than the current world market for small planes. The volumes would be compared against cars, which have 1 billion deployed. They would also need to be used for regular day-to-day commuting to be perceived as approaching what people hoped for. If DARPA were successful with the development of its "flying humvee" with some robotic flight capabilities, and if there were development of pocket airports, then that could bring about a revolution in flight. The pocket airports and short-takeoff, highly fuel efficient light planes (especially with UAV robotic control) would enable a societally impacting usage of air taxis.

IEEE Spectrum publishes a ranking of company patent power. The patent benchmarking takes into account not only the size of organizations' patent portfolios but also the quality, as reflected in characteristics such as growth, impact, originality, and general applicability. D-Wave Systems (adiabatic quantum computers) ranks 4th behind IBM, HP and Fujitsu in computer system patents.

China's High Speed Rail: Morgan Stanley has a 64-page analysis of high speed rail in China. China's complete national coverage for its HSR network makes the economic case different from one-off HSR lines such as the one proposed in California. HSR is about half the cost of a comparable air ticket. There are environmental benefits. China does have very high passenger usage on its lines. This is different from rail usage in the USA. The China HSR system will span 30,000 kilometers, connect more than 250 cities and regions with a total population of about 700 million, mobilize 4 billion travelers per year, and add 1,600 billion passenger-kilometers to China's domestic passenger throughput annually (i.e., four times the total domestic passenger throughput in Japan today) by 2020. China's rail passenger numbers rose 4.6 percent to 1.7 billion through November, according to the ministry. The numbers have climbed because of the opening of new lines and the easing of safety concerns following a fatal crash last year. Rail demand in China is growing steadily and expected to more than triple to five billion passengers a year by 2020. The high-speed network is not only expected to ease congestion on conventional lines but will also have a positive impact on freight transportation, and boost productivity throughout the economy. As the China HSR system spreads out and ramps up capacity, regional economic dynamics will change with it. We believe the HSR's most significant geographical economic impact will be the creation of connected metropolitan areas, or super-city clusters (SCCs), because of its mass-transporting capacity at very high ground speeds (about 250 to 350 kilometers per hour), significantly multiplying a commuter's traveling distance within a fixed time period. Traveling to neighboring cities on HSR will take the same amount of time as traveling across a large city in a car. Eventually we believe that cities within the same cluster, connected by the HSR grid, will no longer be conventional stand-alone cities. They will become like huge business-and-life districts in a very large metropolitan area, with active economic interactions.
Will the China HSR project be sustainable? This is clearly a valid question to ask. The operating capital required for such a mega project will be substantial. Our transportation analyst Edward Xu and capital goods analyst Kate Zhu believe that on average every 1,000 kilometers of China HSR will require Rmb4.5 billion per year in operating cash to function (this includes maintenance, parts replacement, and day-to-day operational costs). On the other hand, the operating revenue, based on our ticket price estimate (using the same per-thousand-kilometer price to per-capita GDP ratio on the eastern portion of the national grid), per-thousand kilometer revenue will be around Rmb6.5 billion per year. This translates into a national operating cash surplus of Rmb2 billion per year, which should be able to cover most, if not all, of the interest expenses. We believe regional jets will have limited market potential in China and will do well only in those regions where the HSR will not reach, such as the northwest and southwest. TweetInvensysrail provides an analysis of high speed rail (HSR). HSR investment has commonly resulted in: * reduced travel times; * reduced congestion on established modes of transport; * improved access to markets and commerce; * decreased carbon footprint in comparison to road and air transport; * and creating industry growth and export opportunities Wikipedia - Very few high-speed trains consume diesel or other fossil fuels but the power stations that provide electric trains with power can consume fossil fuels. In Japan and France, with very extensive high speed rail networks, a large proportion of electricity comes from nuclear power. On the Eurostar, which primarily runs off the French grid, emissions from traveling by train from London to Paris are 90% lower than by flying. Even using electricity generated from coal or oil, high speed trains are significantly more fuel-efficient per passenger per kilometer traveled than the typical automobile because of economies of scale in generator technology. Reduced travel times. * HSR offers faster net travel times than conventional road, rail and air travel between distances of approximately 150 kilometres (km) and 800 km. * For distances shorter than 150 km, the competitive advantage of HSR over conventional rail is decreased drastically by station processing time and travel to and from stations. * For distances longer than 800 km, the higher speed of air travel compensates for slow airport processing times and long trips to and from airports. TweetThe Cleveland Clinic lists Top 10 Medical Innovations that will have a major impact on improving patient care within the next year for 2013. The list is made up of devices, including a handheld optical scan for melanoma; drugs; diagnostic tests, such as 3D mammography; and a government program that financially rewards patients for improving their health. 1. Bariatric Surgery for Control of Diabetes Exercise and diet alone are not effective for treating severe obesity or Type 2 diabetes. Once a person reaches 100 pounds or more above his or her ideal weight, losing the weight and keeping it off for many years almost never happens. While the medications we have for diabetes are good, about half of the people who take them are not able to control their disease. 
This can often lead to heart attack, blindness, stroke, and kidney Surgery for obesity, often called bariatric surgery, shrinks the stomach into a small pouch and rearranges the digestive tract so that food enters the small intestine at a later point than usual. Over the years, many doctors performing weight-loss operations found that the surgical procedure would rid patients of Type 2 diabetes, oftentimes before the patient left the hospital. Many diabetes experts now believe that weight-loss surgery should be offered much earlier as a reasonable treatment option for patients with poorly controlled diabetes —and not as a last resort. 2. Neuromodulation Device for Cluster and Migraine Headaches The sphenopalatine ganglion (SPG) nerve bundle — located behind the bridge of the nose — has been a specific target for the treatment of severe headache pain for many years. Researchers have invented an on-demand patient-controlled stimulator for the SPG nerve bundle. This miniaturized implantable neurostimulator, the size of an almond, is placed through a minimally invasive surgical incision in the upper gum above the second molar. 3. Mass Spectrometry for Bacterial Identification Even in this age of advanced medical technology, identification of bacteria growing in culture can still require days or weeks. However, clinical microbiology laboratories throughout the world are now implementing new mass spectrometry technology to provide rapid organism identification that is more accurate and less expensive than current biochemical methods. Using one of the two MALDI-TOF mass spectrometry systems currently available in the United States provides more accurate identification of bacteria in minutes — rather than days. TweetMagnonics is an exciting extension of spintronics, promising novel ways of computing and storing magnetic data. What determines a material’s magnetic state is how electron spins are arranged (not everyday spin, but quantized angular momentum). If most of the spins point in the same direction, the material is ferromagnetic, like a refrigerator magnet. If half the spins point one way and half the opposite, the material is antiferromagnetic, with no everyday magnetism. There are other kinds of magnetism. In materials where the electrons are “itinerant” – moving rapidly through the crystal lattice like a gas, so that their spins become strongly coupled to their motions – certain crystalline structures can cause the spins to precess collectively to the right or left in a helix, producing a state called helimagnetism. Helimagnetism most often occurs at low temperature; increasing the heat collectively excites the spin structure and eventually destroys the order, relaxing the magnetism. In quantum calculations, such collective excitations are treated like particles (“quasiparticles”); excitations that disrupt magnetism are called magnons, or spin waves. There is a well developed theory of helimagnons, yet little is known experimentally about how helimagnetism forms or relaxes on time scales of less than a trillionth of a second, the scale on which magnetic interactions actually occur. TweetThe Carnival of Space 280 is up at Starry Critters Revealing Hidden Black Holes A search using archival data from previous Chandra observations of a sample of 62 nearby galaxies has shown that 37 of the galaxies, including NGC 3627, contain X-ray sources in their centers. Most of these sources are likely powered by central supermassive black holes. 
The survey, which also used data from the Spitzer Infrared Nearby Galaxy Survey, found that seven of the 37 sources are new supermassive black hole candidates. TweetMIT researchers have demonstrated experimentally the existence of a fundamentally new kind of magnetic behavior, adding to the two previously known states of magnetism. Ferromagnetism — the simple magnetism of a bar magnet or compass needle — has been known for centuries. In a second type of magnetism, antiferromagnetism, the magnetic fields of the ions within a metal or alloy cancel each other out. In both cases, the materials become magnetic only when cooled below a certain critical temperature. The Quantum Spin Liquid (QSL) is a solid crystal, but its magnetic state is described as liquid: Unlike the other two kinds of magnetism, the magnetic orientations of the individual particles within it fluctuate constantly, resembling the constant motion of molecules within a true liquid. MIT physicists grew this pure crystal of herbertsmithite in their laboratory. This sample, which took 10 months to grow, is 7 mm long (just over a quarter-inch) and weighs 0.2 grams. Image: Tianheng TweetThe Legged Squad Support System (LS3) four-legged robot has been operating at an outdoors testing ground. DARPA’s LS3 program demonstrated new advances in the robot’s control, stability and maneuverability, including "Leader Follow" decision making, enhanced roll recovery, exact foot placement over rough terrain, the ability to maneuver in an urban environment, and verbal command capability. So it has many of the capabilities of a dog. Follow a person, take simple verbal commands and roll over. TweetThe V-chip is the size of a business card and can test for 50 measures (like insulin and other blood proteins, cholesterol, and even signs of viral or bacterial infection all at the same time) from one drop of blood. The V-Chip could make it possible to bring tests to the bedside, remote areas, and other types of point-of-care needs. VChip aka volumetric bar-chart chip. Photo credit: Lidong Qin and Yujun Song. TweetNIST has confirmed long-standing suspicions among physicists that electrons in a crystalline structure called a kagome (kah-go-may) lattice can form a "spin liquid," a novel quantum state of matter in which the electrons' magnetic orientation remains in a constant state of change. The research shows that a spin liquid state exists in Herbertsmithite—a mineral whose atoms form a kagome lattice, named for a simple weaving pattern of repeating triangles well-known in Japan. Kagome lattices are one of the simplest structures believed to possess a spin liquid state, and the new findings, revealed by neutron scattering, indeed show striking evidence for a fundamental prediction of spin liquid physics. This image depicts magnetic effects within Herbertsmithite crystals, where green regions represent higher scattering of neutrons from NIST's Multi-Angle Crystal Spectrometer (MACS). Scans of typical highly-ordered magnetic materials show only isolated spots of green, while disordered materials show uniform color over the entire sample. The in-between nature of this data shows some order within the disorder, implying the unusual magnetic effects within a spin liquid. Credit: NIST TweetA carbon-nanotube-coated lens that converts light to sound can focus high-pressure sound waves to finer points than ever before. Researchers say it could lead to an invisible knife for noninvasive surgery. 
Today focused sound waves blast apart kidney stones and prostate tumors. The tools work primarily by focusing sound waves tightly enough to generate heat. "A major drawback of current strongly focused ultrasound technology is a bulky focal spot, which is on the order of several millimeters," Baac said. "A few centimeters is typical. Therefore, it can be difficult to treat tissue objects in a high-precision manner, for targeting delicate vasculature, thin tissue layer and cellular texture. We can enhance the focal accuracy 100-fold." The team was able to concentrate high-amplitude sound waves to a speck just 75 by 400 micrometers (a micrometer is one-thousandth of a millimeter). Their beam can blast and cut with pressure, rather than heat. With a new technique that uses tightly-focussed sound waves for micro-surgery, University of Michigan engineering researchers drilled a 150-micrometer hole in a confetti-sized artificial kidney stone. Image credit: Hyoung Won Baac Nature Scientific Reports - Carbon-Nanotube Optoacoustic Lens for Focused Ultrasound Generation and High-Precision Targeted Therapy TweetScanning tunneling microscopy (STM) is routinely employed by physicists and chemists to capture atomic-scale images of molecules on surfaces. Now, an international team led by Christian Joachim and co-workers from the A*STAR Institute of Materials Research and Engineering has taken STM a step further: using it to identify the quantum states within ‘super benzene’ compounds using STM conductance measurements1. Their results provide a roadmap for developing new types of quantum computers based on information localized inside molecular bonds. To gain access to the quantum states of hexabenzocoronene (HBC) — a flat aromatic molecule made of interlocked benzene rings — the researchers deposited it onto a gold substrate. According to team member We-Hyo Soe, the weak electronic interaction between HBC and gold is crucial to measuring the system’s ‘differential conductance’ — an instantaneous rate of current charge with voltage that can be directly linked to electron densities within certain quantum states. After cooling to near-absolute zero temperatures, the team maneuvered its STM tip to a fixed location above the HBC target. Then, they scanned for differential conductance resonance signals at particular voltages. After detecting these voltages, they mapped out the electron density around the entire HBC framework using STM. This technique provided real-space pictures of the compound’s molecular orbitals — quantized states that control chemical bonding. High-resolution microscopy reveals that a benzene-like molecule known as HBC has a quantized electron density around its ring framework (left). Theoretical calculations show that the observed quantum states change with different tip positions (right, upper/lower images, respectively). TweetThe field of metamaterials involves augmenting materials with specially designed patterns, enabling those materials to manipulate electromagnetic waves and fields in previously impossible ways. Now, researchers from the University of Pennsylvania have come up with a theory for moving this phenomenon onto the quantum scale, laying out blueprints for materials where electrons have nearly zero effective mass. Their idea was born out of the similarities and analogies between the mathematics that govern electromagnetic waves — Maxwell’s Equations — and those that govern the quantum mechanics of electrons — Schrödinger’s Equations. 
On the electromagnetic side, inspiration came from work the two researchers had done on metamaterials that manipulate permittivity, a trait of materials related to their reaction to electric fields. They theorized that, by alternating between thin layers of materials with positive and negative permittivity, they could construct a bulk metamaterial with an effective permittivity at or near zero. Critically, this property is only achieved when an electromagnetic wave passes through the layers head on, against the grain of the stack. This directional dependence, known as anisotropy, has practical applications. The researchers saw parallels between this phenomenon and the electron transport behavior demonstrated in Leo Esaki's Nobel Prize-winning work on superlattices in the 1970s: semiconductors constructed out of alternating layers of materials, much like the permittivity-altering metamaterial. Physical Review B - Transformation electronics: Tailoring the effective mass of electrons

By the end of 2013, nearly 400,000 plug-in electric vehicles (PEVs) will be driving on roads around the world. The worldwide market for e-bicycles will increase at a compound annual growth rate (CAGR) of 7.5% between 2012 and 2018, resulting in global sales of more than 47 million vehicles in 2018. China is anticipated to account for 42 million of these e-bicycles that year, giving it 89% of the total world market. The e-bicycle market is anticipated to generate $6.9 billion in worldwide revenue in 2012, growing to $11.9 billion in 2018. Sales of electric bicycles in North America will grow by more than 50% in 2013 to more than 158,000 bikes. The world electric bicycle market will grow by 10% to more than 33.6 million units during that year. Almost all of the global electric bicycles will be in China.

Arxiv - Experimental signature of programmable quantum annealing (12 pages). This work shows that D-Wave Systems' adiabatic quantum computing system is leveraging quantum effects for up to 20 milliseconds. Different experiments are needed to calculate the speedup relative to optimal classical systems. Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Do engineered quantum annealing devices effectively perform classical thermalization when coupled to a decohering thermal environment? To address this we establish, using superconducting flux qubits with programmable spin-spin couplings, an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization, in spite of a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.

A high-speed railway between Beijing and Guangzhou will open in two weeks' time, the Chinese Railways Ministry said yesterday. It will offer the longest bullet-train ride in the world and cut the journey time between the two cities to less than eight hours.
When services begin on December 26 - the 119th anniversary of the birth of late Communist Party patriarch Mao Zedong - the 2,298-kilometre (1428 miles) line will eclipse the high-speed railway between Beijing and Shanghai. That 1,318-kilometre (819 miles) route opened in June last year. Map of planned and completed routes for China's high speed rail network. There are 4 main north-south lines and 4 main east -west lines. TweetDARPA’s 100 Gb/s RF Backbone (100G) intends to develop a fiber-optic-equivalent communications backbone that can be deployed worldwide. The goal is to create a 100 Gb/s data link that achieves a range greater than 200 kilometers between airborne assets and a range greater than 100 kilometers between an airborne asset (at 60,000 feet) and the ground. The 100G program goal is to meet the weight and power metrics of the Common Data Link (CDL) deployed by Forces today for high-capacity data streaming from platforms. TweetNorth Dakota increased its daily oil production to 747,239 barrels per day. This continues a string of 19 monthly increases since April, 2011. Recent national oil production numbers from the Energy Information Administration suggest that North Dakota and Texas have continued to increase oil production into December. TweetGenetically engineered mice that make extra BubR1 are less prone to cancer and live 15% longer. Researchers found that when they exposed normal mice to a chemical that causes lung and skin tumors, all of them got cancer. But only 33% of those overexpressing BubR1 at high levels did. They also found that these animals developed fatal cancers much later than normal mice—after about 2 years, only 15% of the engineered mice had died of cancer, compared with roughly 40% of normal mice. The animals that overexpressed BubR1 at high levels also lived 15% longer than controls, on average. And the mice looked veritably Olympian on a treadmill, running about twice as far—200 meters rather than 100 meters—as control animals. BuBR1's life-extending effects aren't due to only its ability to prevent cancer, although that's not yet certain. They may have identified a new drug target to slow aging with no identified negative consequences and possibly prevent cancer. Nature - Increased expression of protein BubR1 protects against aneuploidy and cancer and extends healthy lifespan TweetDr. Matthew Pickett and Stanley Williams have been collaborating on a project at HP Labs to explore the possibility of using "locally-active memristors" as the basis for extremely low-power transistorless computation. We first analyzed the thermally-induced first order phase transition from a Mott insulator to a highly conducting state. The current-voltage characteristic of a cross-point device that has a thin film of such a material sandwiched between two metal electrodes displays a current-controlled or 'S'-type negative differential. We derived analytical equations for the behavior these devices, and found that the resulting dynamical model was mathematically equivalent to the "memristive system" formulation of Leon Chua; we thus call these devices "Mott Memristors. We built Pearson-Anson oscillators based on a parallel circuit of one Mott memristor and one capacitor, and demonstrated subnanosecond and subpicoJoule switching time and energy. We then built a neuristor using two Mott memristors and two capacitors, which emulates the Hodgkin-Huxley model of the axon action potential of a neuron. 
Finally, through SPICE, we demonstrate that spiking neuristors are capable of Boolean logic and Turing complete computation by designing and simulating the one dimensional cellular nonlinear network based on 'Rule 137'. They have written a paper which will appear in Nature Materials - "A scalable neuristor built with Mott memristors". TweetFinding ways to diagnose cancer earlier could greatly improve the chances of survival for many patients. One way to do this is to look for specific proteins secreted by cancer cells, which circulate in the bloodstream. However, the quantity of these biomarkers is so low that detecting them has proven difficult. A new technology developed at MIT may help to make biomarker detection much easier. The researchers, led by Sangeeta Bhatia, have developed nanoparticles that can home to a tumor and interact with cancer proteins to produce thousands of biomarkers, which can then be easily detected in the patient’s urine. These nanoparticles created by MIT engineers can act as synthetic biomarkers for disease. The particles (brown) are coated with peptides (blue) that are cleaved by enzymes (green) found at the disease site. The peptides then accumulate in the urine, where they can be detected using mass spectrometry. Image: Justin H. Lo TweetThe Carnival of Nuclear Energy 135 is up at the ANS Nuclear Cafe ANS Nuclear Cafe - : Timing and Framing: How to address nuclear and climate change In the wake of superstorm Sandy, Suzy Hobbs Baker argues that “right now is the perfect time to provide a new framework for supporting nuclear as a solution to climate change.” TweetNvidia Corp. has already revealed that both of its next-generation Tegra system-on-chip for mobile devices would be taped out this year. Now, the company says that the first details about Tegra-series products code-named Wayne and Grey are set to be revealed at the consumer electronics trade-show (CES) in early 2013, whereas actual products are on schedule by Mobile World Congress in Nvidia Tegra "Grey" system-on-chip features built-in 3G and 4G/LTE communication technologies, the developer pins a lot of hopes on it as it should help it to penetrate the market of mass smartphones (that can be sold at a discount price), which will boost its market share considerably. Tegra "Wayne" is the next-generation multi-core application processor, which is presumably based on ARM Cortex-A15 general-purpose cores as well as a new GeForce-class graphics engine. TweetOn December 12th the Census Bureau said America’s projected population would rise 27% to 400m by 2050. That is 9% less than it projected for that year back in 2008. Those 65 and over will grow to 22% of the population by 2060 from 14% now, while the working-age population slips to 57% from 63%. The new projections, based on the 2010 census, are based on recent trends in fertility and immigration. The number of babies born per 1,000 women of childbearing age (also called the “general” fertility rate) fell to 63 in the 12 months that ended in June of this year, the lowest since at least 1920, and well below the recent high of 69 recorded in 2007. That is partly because the average age of women of childbearing age has increased. The “total” fertility rate adjusts for the age of the population and extrapolates how many children each woman will have over her lifetime. This, too, has fallen, and at 1.9 it is below the replacement rate of 2.1. America’s fertility rate is still higher than the average for the OECD, but has fallen sharply since 2007. 
Policymakers have yet to panic; the Social Security Commission, which manages America’s public pension system, reckons fertility and immigration will bounce back in the next few years. NBF - If fertility and immigration do not bounce back and instead get worse then the United States will rapidly start to look like the demographically weak countries like Japan and in Europe. TweetBeijing's leadership say they want to boost imports and speed the integration of rural migrants into cities as ways to boost domestic consumption, according to reports in China's state-owned news agency, Xinhua. China needed to show "more courage to reform," the statement said. TweetRay Kurzweil confirmed that he will be joining Google to work on new projects involving machine learning and language processing. “I’m excited to share that I’ll be joining Google as Director of Engineering this Monday, December 17,” said Kurzweil. “I’ve been interested in technology, and machine learning in particular, for a long time: when I was 14, I designed software that wrote original music, and later went on to invent the first print-to-speech reading machine for the blind, among other inventions. I’ve always worked to create practical systems that will make a difference in people’s lives, which is what excites me as an TweetResearchers have shown for the first time that a mechanism called tunneling control may drive chemical reactions in directions unexpected from traditional theories. The finding has the potential to change how scientists understand and devise reactions in everything from materials science to biochemistry. The discovery was a complete surprise and came following the first successful isolation of a long-elusive molecule called methylhydroxycarbene by the research team. While the team was pleased that it had "trapped" the prized compound in solid argon through an extremely low-temperature experiment, they were surprised when it vanished within a few hours. That prompted UGA theoretical chemistry professor Wesley Allen to conduct large scale, state-of-the-art computations to solve the mystery."What we found was that the change was being controlled by a process called quantum mechanical tunneling," said Allen, "and we found that tunneling can supersede the traditional chemical reactivity processes of kinetic and thermodynamic control. We weren't expecting this at all." TweetIn a large-scale project recently funded by the Defense Advanced Research Projects Administration, several MIT faculty members are working on a “human-on-a-chip” system that scientists could use to study up to 10 human tissue types at a time. The goal is to create a customizable system of interconnected tissues, grown in small wells on a plate, allowing researchers to analyze how tissues respond to different drugs. Another near-term goal for tissue engineers is developing regenerative therapies that help promote wound healing. “Healthy cells sitting adjacent to diseased tissues can influence the biology of repair and regeneration,” says MIT professor Elazer Edelman, who has developed implantable scaffolds embedded with endothelial cells, which secrete a vast array of proteins that respond to injury. Endothelial cells, normally found lining blood vessels, could help repair damage caused by angioplasty or other surgical interventions; smoke inhalation; and cancer or cardiovascular disease. The implants are now in clinical trials to treat blood-vessel injuries caused by the needles used to perform dialysis in patients with kidney failure. 
Better repair of those injuries could double the time that such patients can stay on dialysis, which is now limited to about three years, says Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology.
{"url":"http://nextbigfuture.com/2012_12_16_archive.html","timestamp":"2014-04-20T15:55:59Z","content_type":null,"content_length":"435857","record_id":"<urn:uuid:075499fe-fec9-4e71-b6c4-8fa6f7a41c5c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Attleboro Algebra 2 Tutor Find an Attleboro Algebra 2 Tutor ...I can help you through the math and the chemistry so you can succeed. Geometry is the study of points, lines, and planes. Although we learned to count with numbers first, geometry is the oldest branch of mathematics that had been systematically analyzed from a theoretical point of view. 19 Subjects: including algebra 2, chemistry, physics, calculus I have worked over 20 years as an engineer. I understand how science and math are used in industry. I like to help students understand the importance of trying to determine if answers make sense. 10 Subjects: including algebra 2, calculus, physics, geometry ...To help you learn to do this better, I would bring in sorting and matching activities. That way, you can get a lot of practice identifying the different kinds of equations, describing their graphs, and explaining how to solve them. I think some of the big ideas in algebra are solving and graphing different kinds of equations. 14 Subjects: including algebra 2, chemistry, calculus, geometry ...I ran camps for boys and girls of all ages. I was the founder of the Cranston Recreation Summer Basketball League, New Bedford and Bristol YMCA Basketball Camp, and Burriville Recreation Basketball Camp. I have played and taught basketball all my life and have an extreme passion for the sport. 31 Subjects: including algebra 2, reading, writing, English ...I am patient and knowledgeable and able to provide the insight needed to better understand macro, micro and the global economy. I have served in business leadership roles for more than 15 years in government, for profit, higher education and charitable organizations. My business acumen includes... 6 Subjects: including algebra 2, accounting, algebra 1, economics
{"url":"http://www.purplemath.com/Attleboro_algebra_2_tutors.php","timestamp":"2014-04-19T05:31:32Z","content_type":null,"content_length":"24013","record_id":"<urn:uuid:9379bfea-a513-4960-bd47-d7496c2ddad4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Bryn Mawr, PA Algebra 1 Tutor Find a Bryn Mawr, PA Algebra 1 Tutor ...I have also been a writing tutor at Lehigh University and have three years of experience working with students for a semester to improve their writing skills. I have experience with a wide variety of topics, including writing, history, English, reading, GRE and SAT Prep, international relations ... 31 Subjects: including algebra 1, English, reading, writing ...I also teach Solfege (Do, Re Mi..) and intervals to help with correct pitch. I use IPA to teach correct pronunciation of foreign language lyrics (particularly Latin). I have a portable keyboard to bring to lessons and music if the student doesn't have their own selections that they wish to learn... 58 Subjects: including algebra 1, reading, chemistry, biology ...I have experience in writing press releases, brochures, grant and project proposals, magazine and newsletter articles, reports and essays. I have done technical writing and video scripts as well. Grammar, writing essays, English literature, SAT's, etc. 51 Subjects: including algebra 1, English, reading, chemistry I am a youthful high school Latin teacher. I have been tutoring both Latin & Math to high school students for the past six years. I hold a teaching certificate for Latin, Mathematics, and English, and I am in the finishing stages of my master's program at Villanova. 7 Subjects: including algebra 1, geometry, algebra 2, Latin ...I completed math classes at the university level through advanced calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Algebra 2 with a national tutoring chain for five years. 12 Subjects: including algebra 1, calculus, writing, geometry
{"url":"http://www.purplemath.com/bryn_mawr_pa_algebra_1_tutors.php","timestamp":"2014-04-17T21:48:26Z","content_type":null,"content_length":"24111","record_id":"<urn:uuid:9d70744f-c97b-4e6c-b741-dd9dfe666c73>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Fuzzy proximity relations
Tom HOULDER (houlder@news.tuwien.ac.at)
20 Jul 1995 15:44:50 GMT

Dear all,

I've got several indications that my question about fuzzy proximity relations was not specific enough and was badly formulated. Well, I try again...

Fuzzy proximity relations are described in the book of Klir and Folger as an extension of mathematical relations where, for instance, the members of a similarity class have fuzzy membership grades. Since these extensions are done in terms of fuzzy variables, I wondered if they could be given a frequentistic interpretation. Klir and Folger give the example of a fuzzy proximity relation between three cities, A, B and C. A is ``very near'' to B to a degree 0.8 and B is ``very near'' C to a degree 0.7. This is a nontransitive relation, as A is possibly (but not necessarily) ``very near'' to C to a degree 0.5. Intuitively I understand this well. However, I would like to assign a frequentistic meaning to the numbers 0.5, 0.7 and 0.8, analogously to... What follows is how I reason. (This may well be wrong.... ;)

Let A and B be standard variables taking values on a set X. If we are given the value of A together with the equivalence relation A = B, this can be used to infer the value of B (that is, if A = A0 -> B = A0). If we are given the value of A but no relation between A and B, there is no evidence of the value taken by B. I would suppose that the same notion of inference could be extracted from fuzzy relations. If the value of A is given together with a similarity relation "A SR B, alpha", this should provide some evidence of the value of B. (SR is the similarity relation, and alpha is the degree of similarity. It is easier to consider similarity relations, as these are transitive in contrast to proximity relations.)

Following Klir and Folger, relations can be expressed by membership matrices. This should be equivalent to defining a fuzzy set for each state in X, in which every state of X is assigned a membership grade. For instance, the following membership matrix for a reflexive asymmetric relation

        A     B
   A   1.0   0.7
   B   0.4   1.0

could be seen as a fuzzy set defined on A where A has membership grade 1.0 and B membership grade 0.7, together with a fuzzy set defined on B where A has membership grade 0.4 and B membership grade 1.0. It should now be possible to interpret these fuzzy sets as fuzzy possibility distributions with an accompanying frequentistic interpretation. Said another way, I would assume that the value of A together with the degree of similarity should define a possibility distribution on X representing evidence of the value of B. For different degrees of alpha the possibility distribution should hence say that the higher the degree alpha, the more B is expected to take values that are "more and more similar" to the value taken by A.

[Does this mean that a fuzzy relation is basically just a fuzzy set together with a variable which strengthens the linguistic meaning described by the set?? (There was a name for such a variable, but I don't remember...)]

My viewpoint is that there is an aspect of inference in similarity relations and that the degree of this inference is given by the degree of similarity (the degree could for instance be related to an expectancy value). This aspect is not explicit in the formalism of mathematical relations, but the double nature of fuzzy sets seems to imply that it should be possible to interpret fuzzy relations this way.

The application where I want to apply such an interpretation of similarity relations is basically as follows.
Let A and B be RANDOM variables taking values on a set X. Let B = B0 and consider the joint probability distribution P(A, B) and the conditional probability distribution P_A(A | B=B0). If we are given a fuzzy similarity relation between A and B of degree 0, the variables would be uncorrelated, P(A, B) = P(A) * P(B). If we on the other hand are given the equivalence relation A = B (a fuzzy similarity relation of degree 1.0), the variables are perfectly correlated, P(A=B0 | B=B0) = 1. For the intermediary degrees of alpha, the correlation is given in a fuzzy way, "the value of A is probably close to B". I would suppose it was possible to specify this correlation solely on the basis of a fuzzy set (in this case a ``similar to'' or ``close to'' set) and a degree of similarity. However, I do not know how to do it properly..... ;)

I hope this has clarified my problem. Suggestions, comments, flames or article suggestions are welcome...

Tom Houlder
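Not part of Houlder's post: a minimal Python sketch of the standard max-min machinery the post alludes to, using the Klir and Folger city example with the degrees quoted above. It shows a fuzzy relation stored as a membership matrix, the max-min composition used to test transitivity, and the possibility distribution that a fixed value of A induces on B. The function names and the exact grades are illustrative assumptions, not anything from the original thread.

```python
# A fuzzy relation on a finite set X is a matrix of membership grades in [0, 1].
# Max-min composition is the standard way to propagate similarity; a similarity
# relation must satisfy R o R <= R (max-min transitivity), a proximity relation need not.

X = ["A", "B", "C"]   # the three cities

R = {  # "very near" proximity relation; grades taken from the example above
    ("A", "A"): 1.0, ("A", "B"): 0.8, ("A", "C"): 0.5,
    ("B", "A"): 0.8, ("B", "B"): 1.0, ("B", "C"): 0.7,
    ("C", "A"): 0.5, ("C", "B"): 0.7, ("C", "C"): 1.0,
}

def max_min_compose(R, X):
    """(R o R)(x, z) = max over y of min(R(x, y), R(y, z))."""
    return {(x, z): max(min(R[(x, y)], R[(y, z)]) for y in X)
            for x in X for z in X}

def is_max_min_transitive(R, X):
    RR = max_min_compose(R, X)
    return all(RR[(x, z)] <= R[(x, z)] for x in X for z in X)

def induced_possibility(R, X, a0):
    """Given A = a0, read off a possibility distribution for B: pi_B(x) = R(a0, x)."""
    return {x: R[(a0, x)] for x in X}

print(is_max_min_transitive(R, X))        # False: (R o R)(A, C) = 0.7 > 0.5
print(induced_possibility(R, X, "A"))     # {'A': 1.0, 'B': 0.8, 'C': 0.5}
```

The last line is the "evidence about B" reading asked about in the post: fixing A and reading off the corresponding row of the membership matrix gives a possibility distribution, which is where a frequentistic or betting interpretation would have to attach.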
{"url":"http://www.dbai.tuwien.ac.at/marchives/fuzzy-mail95/0875.html","timestamp":"2014-04-19T15:25:44Z","content_type":null,"content_length":"6785","record_id":"<urn:uuid:e6a8c62e-99c3-41ca-bd4b-23c242c29768>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: Calculate the maximum amount of work that can be obtained from the combustion of 1.00 moles of ethane, CH3CH3(g), at 25 °C and standard conditions.

Best response: This will be equal to the change in Gibbs free energy for the combustion reaction, which you can most easily calculate by looking up the standard formation free energies -- NOT the formation enthalpies! -- and applying Hess's Law.
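A worked version of that suggestion (not from the original thread): a quick sketch using typical tabulated standard Gibbs energies of formation. The exact values vary slightly from table to table, so the result should be read as approximate.

```python
# Combustion of ethane: C2H6(g) + 7/2 O2(g) -> 2 CO2(g) + 3 H2O(l)
# At constant T and P, the maximum (non-expansion) work equals |delta_G| of the reaction.
# Standard Gibbs energies of formation in kJ/mol (approximate textbook values):
dGf = {"C2H6(g)": -32.9, "O2(g)": 0.0, "CO2(g)": -394.4, "H2O(l)": -237.1}

dG_rxn = (2 * dGf["CO2(g)"] + 3 * dGf["H2O(l)"]) - (dGf["C2H6(g)"] + 3.5 * dGf["O2(g)"])
print(round(dG_rxn, 1))   # about -1467 kJ per mole of ethane

# So roughly 1.5 x 10^3 kJ of work is obtainable per mole of ethane burned,
# assuming liquid water as the product and reversible (ideal) operation.
```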
{"url":"http://openstudy.com/updates/50a2f6c9e4b079bc145126a7","timestamp":"2014-04-18T08:07:20Z","content_type":null,"content_length":"27897","record_id":"<urn:uuid:29458500-4e4c-488c-83dd-d050b1cea0a2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project: Optical Properties of Graphene

This Demonstration plots the observables relevant for understanding the optical properties of graphite and related materials (carbon nanotubes) over the hexagon of the 2D Brillouin zone (BZ) of a single graphitic sheet (graphene):
• The constant energy contour plot of the 2D electronic band structure, which shows a trigonal structure around the K points, where the bands cross the Fermi level (energy E = 0 eV)
• The dipole vector field for vertical transitions (a real vector); it rotates clockwise around the K points, counterclockwise around the K′ points, and is zero at the Γ point, where an optical transition is forbidden
• The oscillator strength contour plot, which is the square modulus of the dipole vector and is higher at the edges of the hexagonal 2D BZ—at the edge midpoints or M points and at the corners K and K′—and zero at the Γ point
• The optical absorption intensity, whose maxima and minima rotate with the azimuthal angle of the polarization vector of incident light; this shows that there are nodes in the optical absorption intensity around the K and K′ points, because of the 2D profile of the dipole vector field, which is a peculiarity of graphitic materials

The calculated functions are based on a tight binding model for the electronic structure of graphene, as explained in the references in the Details section. The calculations and plots given in this Demonstration were implemented and reproduced from the equations in the following references:
A. Grüneis, Resonance Raman Spectroscopy of Single Wall Carbon Nanotubes, Ph.D. thesis, Tohoku University, 2004.
A. Grüneis, R. Saito, Ge. G. Samsonidze, T. Kimura, M. A. Pimenta, A. Jorio, A. G. Souza Filho, G. Dresselhaus, and M. S. Dresselhaus, "Inhomogeneous Optical Absorption around the K Point in Graphite and Carbon Nanotubes," Phys. Rev. B 67 (16), 2003, 165402.
R. Saito, A. Grüneis, Ge. G. Samsonidze, G. Dresselhaus, M. S. Dresselhaus, A. Jorio, L. G. Cançado, M. A. Pimenta, and A. G. Souza, "Optical Absorption of Graphite and Single-Wall Carbon Nanotubes," Appl. Phys. A 78 (8), 2004, pp. 1099–1105.
R. Saito, G. Dresselhaus, and M. S. Dresselhaus, Physical Properties of Carbon Nanotubes, London: Imperial College Press, 1998.
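The contour plots described above come from the nearest-neighbor tight-binding band structure in the Saito and Dresselhaus references. As a rough illustration (my sketch, not the Demonstration's source code), the dispersion can be sampled on a grid and contoured; the hopping energy and the sign/orientation conventions vary between references, so the numbers below are only typical values.

```python
import numpy as np

GAMMA0 = 2.9   # nearest-neighbor hopping energy in eV (typical fitted value, ~2.7-3.1 eV)
A = 0.246      # graphene lattice constant in nm

def graphene_bands(kx, ky, gamma0=GAMMA0, a=A):
    """Nearest-neighbor tight-binding pi bands of graphene (overlap integral neglected).

    Standard form, e.g. Saito, Dresselhaus & Dresselhaus (1998):
        E(k) = +/- gamma0 * sqrt(1 + 4 cos(sqrt(3) kx a / 2) cos(ky a / 2)
                                   + 4 cos^2(ky a / 2))
    The two bands touch at E = 0 (the Fermi level of undoped graphene)
    at the K and K' corners of the hexagonal Brillouin zone."""
    w2 = (1.0
          + 4.0 * np.cos(np.sqrt(3.0) * kx * a / 2.0) * np.cos(ky * a / 2.0)
          + 4.0 * np.cos(ky * a / 2.0) ** 2)
    w = np.sqrt(np.maximum(w2, 0.0))   # clamp tiny negative rounding errors
    return -gamma0 * w, +gamma0 * w

# Sample the bands on a k-grid (in 1/nm) to draw constant-energy contours:
kx, ky = np.meshgrid(np.linspace(-30, 30, 401), np.linspace(-30, 30, 401))
E_valence, E_conduction = graphene_bands(kx, ky)
```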
{"url":"http://demonstrations.wolfram.com/OpticalPropertiesOfGraphene/","timestamp":"2014-04-17T16:15:55Z","content_type":null,"content_length":"46279","record_id":"<urn:uuid:bb8ee4b1-923c-435e-82ef-15696e797c44>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Mr. Bishara is tutoring Steph in Algebra II, Exponential Functions! I’m back after a week of programming a nonsensical multithreaded program in Java that works about every 7 out of 8 times that I run it. 931 more words Question: Five people witnessed a murder and they each gave a slightly different description of the suspect. Which description is most likely to be correct? 237 more words Today is day 18 of Ouliposting…and it was one of those more “surreal” challenges where I just had to follow the rules and let the words come as they may…there’s obviously some interesting word choices, but as an overall poem, it’s not my favorite of the month…! 259 more words Lylanne Musselman For today’s installment of the A-to-Z challenge, I offer a bit of my mathematical background. The poetic form is pleiades with origins in the constellation Taurus. 118 more words This week,I’ve learned alot from taking on my consumer math project and even though it was vacation that didn’t stop my determination. I don’t know if its the lack of writing or pure laziness that put me in this stump but i do know for sure that moping around and stressing about it isn’t the way to go. 282 more words I can’t emphasize how ironic it is to me that I’m getting an MBA at times. I’m just not a math guy. This doesn’t mean I’m weak at math, however I wouldn’t consider it a strength, but taking a class that is strictly focusing on crunching numbers and solving equations without a purpose other than solving for the correct number isn’t terribly stimulating to me. 863 more words
{"url":"http://en.wordpress.com/tag/math/","timestamp":"2014-04-19T09:32:13Z","content_type":null,"content_length":"86961","record_id":"<urn:uuid:59565f88-2ffb-4a4f-ad07-9d637e7c1eb8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Feb 2013 There are several building games that are offering non-grid building, including the upcoming SimCity 5. Although I love grids, I’m also happy to see these non-grid building games. What would I do if I needed to draw curved roads in a game? I’d use Bezier curves and splines. They’re the standard solution for curves. The math is straightforward. They’re rather flexible. But they have some disadvantages, and you should also consider circular arcs as a primitive. Here’s a comparison: Why circular arcs? When using bezier curves and splines for road drawing, you usually have to turn the spline into a series of short line segments. This is fine in practice but also unsatisfying, and can lead to visual artifacts in some games. I had wondered if there was something else I could use instead. After trying some things out, I mentioned this problem to my friend Dominic, who suggested I look at circular arcs; Steffen Itterhelm’s blog post also suggested arcs; and this thread on StackExchange also suggests using arcs. The more I played with arcs, the better they looked to me. Consider a road segment: What do we need to draw and use roads in a game? 1. Offsets. Given an offset and the center of the road, we need to be able to draw a path at a given offset away from the road. Why? □ The left and right edges of the road are offsets. □ The lane markers in a multilane road are offsets. □ The boundary between the road and the regions adjacent to the road are offsets. Games such as SimCity and Cities XL divide these regions into “lots” where houses, factories, stores, etc. are □ Intersections between roads are calculated with offsets. 2. Distances. Given a position on a road segment, we need to figure out how far along the road it is. Given a distance along the road, we need to figure out the position. Why? □ Converting positions to distances lets us calculate the lengths of road segments, which are used for pathfinding and movement. □ To animate a vehicle along a road, we need to convert distances to positions. □ It may also be useful to convert mouse clicks to distances. Let’s start with simple road sections between two points. The simplest connection is a line segment. To offset a line, calculate the normal N = rotateLeft(normalize(P2 - P1)), multiply by the offset amount, and add that vector to the endpoints. Notice that the offset can be negative to go in the other direction. The offsets allow us to draw the road edges and stripes. With curved paths, we’ll want to offset using a normal from every point along the path. To convert a distance into a position (interpolation), calculate the tangent T = normalize(P2 - P1), multiply by the distance, and add that vector to the first endpoint. This allows us to move a vehicle along a road. It will be more complicated for curved paths. To convert a position to a distance (measurement), we follow the steps in reverse: subtract the first endpoint, then take the dot product with the tangent T. We can get the total length of the road by converting the second endpoint to a distance, length = (P2 - P1) ⋅ T. Note that this simplifies to ‖P2 - P1‖, because T = normalize(P2 - P1) = (P2 - P1) / ‖P2 - P1‖, and (P2 - P1) ⋅ (P2 - P1) = ‖P2 - P1‖². The first thing I think of when I want to draw a curved road is a Bezier curve. In addition to the two endpoints of the curve, a Bezier curve has additional “control points” that control the shape. The two most common forms are quadratic, with one control point, and cubic, with two control points. 
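The straight-segment operations above translate almost directly into code. This is a sketch of mine rather than anything from the article, with throwaway 2D tuple helpers; a real game would use its own vector type.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1]
def length(v): return math.hypot(v[0], v[1])
def rotate_left(v): return (-v[1], v[0])   # 90 degrees counterclockwise

def normalize(v):
    d = length(v)
    return (v[0] / d, v[1] / d)

def segment_offset(p1, p2, offset):
    """Road edge or lane stripe: shift both endpoints along the segment normal.
    A negative offset gives the other side of the road."""
    n = rotate_left(normalize(sub(p2, p1)))
    return add(p1, scale(n, offset)), add(p2, scale(n, offset))

def segment_point_at(p1, p2, distance):
    """Distance along the road -> position (e.g. to animate a vehicle)."""
    return add(p1, scale(normalize(sub(p2, p1)), distance))

def segment_distance_of(p1, p2, point):
    """Position -> distance along the road (projection onto the tangent)."""
    return dot(sub(point, p1), normalize(sub(p2, p1)))
```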
Here’s a road drawn along a quadratic Bezier curve. Move the endpoints and control point around to see how it works: It turns out the offset of a Bezier curve is not itself a Bezier curve. If you offset the end and control points, you end up with a road that’s not constant width; it won’t look right. Or if you try to keep constant width, and the left/right sides of the road aren’t Bezier curves, or any easily describable curve. A workaround is to turn your curve into a series of short line segments, then calculate the offsets from each of them. This diagram moves the control points without using the workaround, and you can see it doesn’t quite have a constant width: It turns out the length of a Bezier curve is not exactly computable. A workaround is to use an approximation, or to turn the curve into a series of short line segments. It also turns out the interpolations aren’t exactly computable. The same workaround, turning the curve into short line segments, can also be used for interpolations. There are also other approaches but they’re not as simple. Workaround: a series of lines Although Bezier curves are common, supported in lots of graphics libraries, and easy to use, they don’t have nice properties for road drawing. For offsets, distances, and interpolations, you can use a series of line segments to approximate the curve, and get the properties that aren’t computable exactly with Bezier curves. How many line segments do you need? There’s a tradeoff here. Using fewer but longer line segments makes the approximation visible to the player, but polygons for adjacent regions will be simpler, and that will make intersection and other calculations faster. Cities XL and SimCity 5 support regions adjacent to roads (for farms and buildings), and both of these games use longer line segments. Here’s a screenshot from Cities XL: If you tell players that they can build “curved roads”, a series of short straight roads is probably not what they expect to get. If you make the segment short enough though, most players won’t care. I don’t yet have a screenshot from SimCity 5 but from previews it looks like their roads are a bit smoother than the ones in Cities XL. The shorter the segments, the more curved the roads will look. However, if you need to perform calculations on regions and intersections, those calculations will likely be slower. Tropico 3 and 4 roads look much more curved than those in Cities XL or SimCity 5. Here’s a screenshot from Tropico 4: An alternative to quadratic Bezier curves is circular arcs. Neither is a superset of the other — Bezier curves cannot produce circular arcs and circular arcs cannot produce Bezier curves. Notice that the symmetry of the arc adds to its pleasing shape, but it also constraints the control point. The main reason to consider circular arcs is that they don’t require the “series of line segments” workaround of Bezier curves. Note that you still have to convert them to polygons to render them on a GPU, but you can reason about them in their native form. To form an offset curve, at every point we add the offset multiplied by the normal N. With a circular arc, the normal is the radius vector, so the offset curve is simply a change in radius, resulting in another circular arc. One tricky thing we need to deal with is very large radii. A straight line is an arc with infinite radius; zero radius is something you’d never want. 
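The line-segment workaround is also easy to sketch. Again this is my own minimal version, not the article's code, and it reuses the small vector helpers from the previous sketch: sample a quadratic Bezier at a fixed number of parameter values, then fall back on the straight-segment operations. A fixed count is the simplest choice; adaptive subdivision is better in practice.

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u * u * p0[0] + 2.0 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2.0 * u * t * p1[1] + t * t * p2[1])

def flatten_quadratic(p0, p1, p2, n=16):
    """Approximate the curve by n straight segments (n + 1 points)."""
    return [quadratic_bezier(p0, p1, p2, i / n) for i in range(n + 1)]

def polyline_length(points):
    """Approximate arc length: sum of the segment lengths."""
    return sum(length(sub(points[i + 1], points[i])) for i in range(len(points) - 1))

def polyline_offset(points, offset):
    """Offset each segment by its own normal. Adjacent offset segments do not
    meet exactly at the joints; joining or mitering them is one source of the
    visual artifacts mentioned above."""
    return [segment_offset(points[i], points[i + 1], offset)
            for i in range(len(points) - 1)]
```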
But we would like to perform operations like adding or subtracting from the radius, and with large radii, we might have issues with numerical precision. One idea would be to use curvature (1/radius) or signed curvature instead of radius. A straight line can be represented as a circular arc with zero curvature; infinite curvature is something you’d never want. Another option would be to detect large radii and switch to line segments. The length of a circular arc is derived from the length of the circle. It’s the radius multiplied by the angle of the span (in radians). To interpolate along a circular arc we convert the position to an angle (divide by the radius), interpolate angles, and then convert back to position (multiply by the radius) Here too we can have problems with an infinite radius, because we’re dividing by the radius and then multiplying by the radius. Is there a better representation that avoids these issues? I don’t A downside of arcs is that they aren’t as flexible as Bezier curves, especially cubic Beziers. One way to increase the flexibility of arcs is to join two of them together into a biarc. Constructing a biarc requires two endpoints and their tangents, but it also has one additional degree of freedom. A key question for biarcs is where to join the two arcs. We want a point where the tangents of the two arcs will match. The places where they match all lie on a circle. In this diagram, move the endpoints and tangents, then move the control point to somewhere on the colored circle: Notice that control points not on the colored circle lead to the two arcs not connecting smoothly. Also look at the colors on the circle: they show how long the resulting biarc will be. Shorter biarcs are usually better but you also want to take into account curvature and connectivity with adjacent roads. The biarc UI above demonstrates how biarcs work but it’s not a great UI for players to build roads. A better UI would constrain the control point to lie on the circle, or automatically choose a control point based on some heuristics. In the biarc literature there’s no one best heuristic that everyone agrees on, so you might want to try several and see which is best for your game. Also, my guess is that you wouldn’t even want to give players a control over the tangents. You can easily choose tangents that lead to gigantic arcs. In a game it’s likely you’re connecting the road to an existing road, so the tangent is already determined. The default choice for curved paths is Bezier curves/splines. It’s a reasonable choice. They’re well understood and easy to work with. However, you need to convert them to piecewise linear curves to work with them. Cities XL, Tropico 4, and SimCity 5 all use Bezier curves. For roads and railroad tracks though, I think circular arcs are an interesting alternative. They have some nice properties when it comes to representation and simulation, with a possible issue with very large radii. Railroad Tycoon 3 and Sid Meier’s Railroads use them. Racing games use them. I think Train Fever uses them. Are there other building games that use circular arcs? Does it really matter which you use? Most players probably won’t care. Some will though. When I played Cities XL I was greatly annoyed by the roads not being curved, and the buildings not fitting into the spaces left by the weirdly shaped roads. I do think arcs are worth considering. For my own games, I think I will stick with grids. 
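For comparison, here is an equally minimal arc primitive (my sketch, not the article's): center, radius, start angle and a signed angular span. Offsetting changes only the radius, length is radius times span, and distance-to-position is plain angle interpolation. It deliberately ignores the huge-radius problem discussed above, which real code would have to address, for example by storing curvature instead of radius.

```python
import math
from dataclasses import dataclass

@dataclass
class Arc:
    center: tuple       # (x, y)
    radius: float
    start_angle: float  # radians
    span: float         # signed; positive = counterclockwise

    def length(self):
        return self.radius * abs(self.span)

    def point_at(self, distance):
        """Distance along the arc -> position: convert to an angle, then back."""
        angle = self.start_angle + math.copysign(distance / self.radius, self.span)
        return (self.center[0] + self.radius * math.cos(angle),
                self.center[1] + self.radius * math.sin(angle))

    def offset(self, amount):
        """The offset curve of an arc is another arc with the same center.
        Here a positive amount simply grows the radius; mapping that to
        'left of the direction of travel' depends on the winding direction."""
        return Arc(self.center, self.radius + amount, self.start_angle, self.span)
```

Storing the signed span rather than an end angle keeps length and interpolation trivially consistent; storing curvature instead of radius would handle nearly straight arcs more gracefully.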
Also see the discussion on reddit/r/gamedev; there are lots of good comments and alternative approaches there. Connecting primitives together It’s fine to look at the primitives in isolation but in a game you’ll want to chain them together into road networks. • We can connect line segments together into piecewise linear curves, which have G^0 continuity. • We can connect Bezier curves together into Bezier splines. With cubic Beziers we can get G^2 continuity. • We can connect circular arcs into piecewise circular curves, which have G^1 continuity. I made a demo that produced piecewise circular curves by chaining together biarcs, but I’m not quite happy with the UI. I picked the control points automatically; it may be better to allow the player to choose them. Also watch a video of the track building UI in A-Train 9. In practice, we have to convert Bezier curves into piecewise linear curves. This produces polygons, which are well understood and easy to work with. I found these pages to be useful for learning about Bezier curves: Piecewise circular curves are used in manufacturing, robotics, and highway engineering, but I haven’t found many online references for them. As with circular arcs, piecewise circular curves can handle offsets, distances, and interpolation. Here are some papers I used to learn about circular arcs, biarcs, and piecewise circular curves: How can we calculate intersections? I believe we can intersect the left/right offset curves of two roads to find four points for a quadrilateral. For Bezier curves we’d need to convert to a piecewise linear form; for circular arcs we can calculate the intersections directly. However I haven’t tried implementing intersections for curved roads, and there are probably many corner cases to work out (no pun intended!). In the real world In real life, roads and railroad tracks use circular arcs, not Bezier curves. However arcs are not G^2 continuous. This means you have to abruptly shift your steering wheel when transitioning from a straight segment to a curved segment. To solve this, they use clothoid connectors. (Clothoids are also called “Euler spirals” or “Cornu spirals”.) However, clothoids have complicated distances/ interpolations and unlikely to have clean offset curves. They might be useful in racing games but I wouldn’t use them in a city-building game. (See the next section for some references.) Other curves Elliptical arcs give you more flexibility than circular arcs, and are supported by many graphics libraries. However, Paul Bourke says elliptical arcs don’t have closed form lengths, and I think they aren’t closed on offsetting either. This gives them the same disadvantages as Bezier curves. Rational Bezier Curves add weights to the control points. This allows them to represent circular arcs and other paths that regular Bezier curves can’t. However, these aren’t widely supported in graphics libraries, and still don’t have all the nice properties we want. Catmull-Rom splines produce paths that go through all the points, unlike Bezier curves which go through only the endpoints. However, I’m fairly certain (not 100% sure!) that Catmull-Rom splines do not have the nice properties we want — they likely have the same disadvantages as Bezier curves. Clothoid curves act as transitions between circular arcs, to provide G^2 continuity. 
Here are two great resources for clothoid curves: AbouBenAdhem on Reddit suggests computing offsets of Bezier curves using Hermite curves, as they’d take far fewer segments than a piecewise linear approximation. Geometric Algebra For a few weeks I got distracted by geometric algebra, which seemed like it might be a nice way to represent lines and curves. There are a lot of cool ideas in there, and it explained things that really bothered me about the usual vector representation in 3D graphics. However, I didn’t learn enough of it to be able to say whether I could use it in my 2D setting. I should’ve probably tried implementing circular arcs with geometric algebra; that would’ve made me learn it faster.
{"url":"http://www.redblobgames.com/articles/curved-paths/","timestamp":"2014-04-20T13:19:07Z","content_type":null,"content_length":"37091","record_id":"<urn:uuid:741c8ae7-6688-4d4d-a887-3174ea0b5424>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Category theory Universal constructions Type theory In a strict sense of the term, a function is a homomorphism $f : S \to T$ of sets. We may also speak of a map or mapping, but those terms are used in other ways in other contexts. A function from a set $A$ to a set $B$ is determined by giving, for each element of $A$, a specified element of $B$. The process of passing from elements of $A$ to elements of $B$ is called function application. The set $A$ is called the domain of $f$, and $B$ is called its codomain. A function is sometimes called a total function to distinguish it from a partial function. More generally, every morphism between objects in a category may be thought of as a function in a generalized sense. This generalized use of the word is wide spread (and justified) in type theory, where for $S$ and $T$ two types, there is a function type denoted $S \to T$ and then the expression $f : S \to T$ means that $f$ is a term of function type, hence is a function. In this generalized sense, functions between sets are the morphisms in the category Set. This is cartesian closed, and the function type $S \to T$ is then the function set. For more on this more general use of “function” see at function type. The formal definition of a function depends on the foundations chosen. • In material set theory, a function $f$ is often defined to be a set of ordered pairs such that for every $x$, there is at most one $y$ such that $(x,y)\in f$. The domain of $f$ is then the set of all $x$ for which there exists some such $y$. This definition is not entirely satisfactory since it does not determine the codomain (since not every element of the codomain may be in the image); thus to be completely precise it is better to define a function to be an ordered triple $(f,A,B)$ where $A$ is the domain and $B$ the codomain. • In structural set theory, the role of functions depends on the particular axiomatization chosen. In ETCS, functions are among the undefined things, whereas in SEAR, functions are defined to be particular relations (which in turn are undefined things). • In type theory, functions are simply terms belonging to function types. See set theory and type theory for more details. As morphisms of discrete categories If we regard sets as discrete categories, then a function is a functor between sets. The functoriality structure becomes the property that a function preserves equality: (1)$x = y \Rightarrow f(x) = f(y) .$ For classes See the MathOverflow: what-are-maps-between-proper-classes
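As a concrete illustration of the material-set-theory definition quoted above (this block is my addition, not part of the entry), here is a small check that a finite relation, given as ordered pairs, determines a function; the function name, the optional totality check, and the sample data are my own.

```python
def is_function(pairs, domain=None):
    # condition from the definition: for every x there is at most one y with (x, y) in the relation
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False          # two different outputs for the same input
        seen[x] = y
    if domain is not None and set(domain) - set(seen):
        return False              # not defined on every element of the intended domain
    return True

print(is_function({(1, 'a'), (2, 'b')}, domain={1, 2}))   # True
print(is_function({(1, 'a'), (1, 'b')}))                  # False: 1 is paired with two values
```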
{"url":"http://www.ncatlab.org/nlab/show/function","timestamp":"2014-04-18T16:18:18Z","content_type":null,"content_length":"63192","record_id":"<urn:uuid:40a4c633-2319-4dbc-a8ee-28db234ff454>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
On backtracking and greatest fixpoints - Semantics and Logics of Computation , 1997 "... ..." , 1996 "... In this paper we consider a particular class of algorithms which present certain difficulties to formal verification. These are algorithms which use a single data structure for two or more purposes, which combine program control information with other data structures or which are developed as a comb ..." Cited by 36 (25 self) Add to MetaCart In this paper we consider a particular class of algorithms which present certain difficulties to formal verification. These are algorithms which use a single data structure for two or more purposes, which combine program control information with other data structures or which are developed as a combination of a basic idea with an implementation technique. Our approach is based on applying proven semantics-preserving transformation rules in a wide spectrum language. Starting with a set theoretical specification of "reachability" we are able to derive iterative and recursive graph marking algorithms using the "pointer switching" idea of Schorr and Waite. There have been several proofs of correctness of the Schorr-Waite algorithm, and a small number of transformational developments of the algorithm. The great advantage of our approach is that we can derive the algorithm from its specification using only general-purpose transformational rules: without the need for complicated induction arg... , 1994 "... A wide spectrum language is presented, which is designed to facilitate the proof of the correctness of refinements and transformations. Two different proof methods are introduced and used to prove some fundamental transformations, including a general induction rule (Lemma 3.9) which enables transfor ..." Cited by 21 (14 self) Add to MetaCart A wide spectrum language is presented, which is designed to facilitate the proof of the correctness of refinements and transformations. Two different proof methods are introduced and used to prove some fundamental transformations, including a general induction rule (Lemma 3.9) which enables transformations of recursive and iterative programs to be proved by induction on their finite truncations. A theorem for proving the correctness of recursive implementations is presented (Theorem 3.21), which provides a method for introducing a loop, without requiring the user to provide a loop invariant. A powerful, general purpose, transformation for removing or introducing recursion is described and used in a case study (Section 5) in which we take a small, but highly complex, program and apply formal transformations in order to uncover an abstract specification of the behaviour of the program. The transformation theory supports a transformation system, called FermaT, in which the applicability conditions of each transformation (and hence the correctness of the result) are mechanically verified. These results together considerably simplify the construction of viable program transformation tools; practical consequences are briefly discussed. - International Journal of Software Engineering and Knowledge Engineering , 1995 "... There is a vast collection of operational software systems which are vitally important to their users, yet are becoming increasingly difficult to maintain, enhance and keep up to date with rapidly changing requirements. For many of these so called legacy systems the option of throwing the system awa ..." 
Cited by 17 (5 self) Add to MetaCart There is a vast collection of operational software systems which are vitally important to their users, yet are becoming increasingly difficult to maintain, enhance and keep up to date with rapidly changing requirements. For many of these so called legacy systems the option of throwing the system away an re-writing it from scratch is not economically viable. Methods are therefore urgently required which enable these systems to evolve in a controlled manner. The approach described in this paper uses formal proven program transformations, which preserve or refine the semantics of a program while changing its form. These transformations are applied to restructure ans simplify the legacy systems and to extract higher-level representations. By using an appropriate sequence of transformations, the extracted representation is guaranteed to be equivalent to the code. The method is based on a formal wide spectrum language, called WSL, with accompanying formal method. Over the last ten years we h... - In ACM Sigplan Notices , 1977 "... The need to reverse a computation arises in many contexts---debugging, editor undoing, optimistic concurrency undoing, speculative computation undoing, trace scheduling, exception handling undoing, database recovery, optimistic discrete event simulations, subjunctive computing, etc. The need to anal ..." Cited by 14 (0 self) Add to MetaCart The need to reverse a computation arises in many contexts---debugging, editor undoing, optimistic concurrency undoing, speculative computation undoing, trace scheduling, exception handling undoing, database recovery, optimistic discrete event simulations, subjunctive computing, etc. The need to analyze a reversed computation arises in the context of static analysis---liveness analysis, strictness analysis, type inference, etc. Traditional means for restoring a computation to a previous state involve checkpoints; checkpoints require time to copy, as well as space to store, the copied material. Traditional reverse abstract interpretation produces relatively poor information due to its inability to guess the previous values of assigned-to variables. We propose an abstract computer model and a programming language---Y-Lisp---whose primitive operations are injective and hence reversible, thus allowing arbitrary undoing without the overheads of checkpointing. Such a computer can be built from reversible conservative logic circuits, with the serendipitous advantage of dissipating far less heat than traditional Boolean AND/OR/NOT circuits. Unlike functional languages, which have one &quot;state &quot; for all times, Y-Lisp has at all times one &quot;state&quot;, with unique predecessor and successor states. Compiling into a reversible pseudocode can have benefits even when targeting a traditional computer. Certain optimizations, e.g., update-in-place, and compile-time garbage collection may be more easily performed, because the "... Bisimulation and bisimilarity are coinductive notions, and as such are intimately related to fixed points, in particular greatest fixed points. Therefore also the appearance of coinduction and fixed points is discussed, though in this case only within Computer Science. The paper ends with some histo ..." Add to MetaCart Bisimulation and bisimilarity are coinductive notions, and as such are intimately related to fixed points, in particular greatest fixed points. 
Therefore also the appearance of coinduction and fixed points is discussed, though in this case only within Computer Science. The paper ends with some historical remarks on the main fixed-point theorems (such as Knaster-Tarski) that underpin the fixed-point theory presented.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2001370","timestamp":"2014-04-21T03:20:04Z","content_type":null,"content_length":"26534","record_id":"<urn:uuid:fe846d9d-1404-4589-bb3b-04797ade7c19>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Eaves Gutter & Downpipe Sizing Calculator Instructions: Enter the horizontal roof catchment area (ie plan area) for the section of roof you desire. Enter the roof slope. Choose a location, click the check box if the eaves gutter slope is steeper than 1:500 (eg 1:200). Then press calculate to obtain the required number of downpipes and eaves gutter cross sectional area. Roof design: If using the number of downpipes calculated above, try to have approximately equal catchment areas draining to each down pipe, with high points approx midway between downpipes, and DP's as close as possible to valley gutters. (Don't forget, if there are no stop ends in the gutter, water may flow a little between catchments. i.e. if one downpipe is overloaded, excess water may continue to the next downpipe.) Also, although not stated in the plumbing code, the Building Code requires DP spacing to be not greater than 12m. This would apply to straight runs of gutter to limit the expansion, would also require an expansion joint every 12m in this case. If it is not possible to have equal catchments, and a catchment area is much larger than the others, then run the program again for just that larger catchment, it may require an extra downpipe. Any size, or shape eaves gutter may be used, as long as the cross sectional area is equal to, or greater than, the size calculated. The number of downpipes required is the theoretical number required. This is not always a whole number. So the number used in the eaves gutter area calculation, is the theoretical number rounded up, if the fraction of a downpipe is greator than 0.1, or rounded down otherwise. When designing pipework, we must always use the internal pipe diameter in all calculations, however some suppliers quote stormwater pipes as the Outside Diameter, so a 150 dia. pipe shown in the calculator is usually equivalent to a 160 dia storm water pipe in the catalogue. Also, the Code only refers to the "Nominal Diameter". The actual Internal diameter may be more or less, depending on the material and pipe class chosen. Towns not listed : For towns not listed, you may add your own entensity; but you must select the location choice to:- "I prefer to enter a known intensity" Also the intensity required in Australia should be for a 1 in 20 year storm with a 5 minute time of concentration. and in New Zealand a 1 in 10 year storm with a 10 minute time of concentration. These figures can be obtained from the graphs in the Plumbing Code, or requested from the Hydrometeorological Advisory Services of the Bureau of Meteorology (HASBM) in Australia; or in New Zealand from the National Institute for Water and Atmosphere(NIWA). Also, your local Authority, Consulting Engineer, or Hydraulic Consultant may be able to advise. Also from here Intensity, Frequency, Duration curves.. (Civil Engineers will love this site) Some Theory : The time of concentration is the time it takes water to travel from the furtherest point in the catchment to the point under investigation. To generate peak flow from a catchment, a storm must last at least this long. Now the longer a storm lasts, the less is the average intensity. eg a storm may bucket down for 5 mins, but is not likely to keep up such an intensity for hours. The flow of water in a down pipe is restricted by the size of the entry (ie the entry diameter, throat, or orifice.) Water starts to enter a downpipe as though it was flowing over a weir into the mouth of the downpipe. The weir formula is used to calculate the downpipe size. 
As the flow builds up, the water level over this weir increases until the entire mouth of the downpipe is submerged, just like your bath tub when you pull the plug. The downpipe entry now acts like an orifice, and the orifice formula is used to calculate the downpipe size. The greator the depth of water over the down pipe, the more water can be forced through this entry orifice, or over the entry edge (weir). This is why we have a rain water head over some downpipes, to increase the depth of water over the entry and hence force more water into the downpipe. Another way to increase the downpipe capacity, is to increase the throat diameter by having a conical entrance to the downpipe. Hence increasing the length of the entry weir. However you should approach a consultant on this as it is not always applicable. A down pipe designed to the code does not flow anywhere near full, it is the entry orifice/weir that limits the flow. If you want to investigate this further, check out the notes on the pipe size Charged downpipes etc A "charged" downpipe is one that flows full, or stays full because of a "U" shape. The above figures are based on:- Storm frequency of 1:20 years (code requirement). Non Circular Down pipes From AS3500 :- a 90 dia, down pipe is equivalent to a 5175 sq.mm rectangular down pipe. (cross sectional area) a 100 dia, down pipe is equivalent to a 6409 sq.mm rectangular down pipe. a 150 dia, down pipe is equivalent to a 14,365 sq.mm rectangular down pipe. You will find that this equates to a pipe of the same perimeter. (not cross sectional area) Hence the weir formula is used as determining the capacity of the entry. The longer the weir the more water will enter the pipe. Notes: AS3500 does not take into account the location of the downpipe along the gutter, nor does it adjust the formula for bends in the gutter. This can make a big difference. The code only allows for the worst possible case. This makes it ideal for residential buildings with many turns and bends in the roof. However for projects with long straight roofs (Large industrial Sheds) it would be conservative (ie an overdesign). For these projects, the CSIRO have produced formulas that take into account the location of down pipes and bends along the gutter. I would also be interested in any modifications, or suggestions that you would like incorporated. If you need more info, or you would like other areas of Australia, or New Zealand, added to the list, please send me an email.
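For readers who want to see the arithmetic end-to-end, here is a rough sketch of the sizing logic; it is not the AS3500/Plumbing Code procedure that the calculator implements. It estimates roof runoff with the generic rational method (Q = C x i x A), treats the per-downpipe capacity as a user-supplied number (in reality it depends on the weir/orifice entry behaviour described above), and applies the rounding rule quoted on this page. All names and the example values are assumptions.

```python
import math

def required_downpipes(catchment_m2, intensity_mm_per_hr, capacity_per_dp_l_per_s,
                       runoff_coeff=1.0):
    # estimated roof runoff in litres per second (rational method, not the code formula)
    flow_l_per_s = runoff_coeff * (intensity_mm_per_hr / 1000.0) * catchment_m2 * 1000.0 / 3600.0
    theoretical = flow_l_per_s / capacity_per_dp_l_per_s
    # rounding rule quoted above: round up only if the fractional part exceeds 0.1
    frac = theoretical - math.floor(theoretical)
    n = math.ceil(theoretical) if frac > 0.1 else math.floor(theoretical)
    return max(1, n), flow_l_per_s

# e.g. 120 m^2 of roof, a 150 mm/hr design storm, an assumed 2.5 L/s per downpipe
print(required_downpipes(120, 150, 2.5))   # (2, 5.0)
```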
{"url":"http://www.roof-gutter-design.com.au/Downp/downpipe.html","timestamp":"2014-04-18T18:11:49Z","content_type":null,"content_length":"9150","record_id":"<urn:uuid:df9b69e1-fad1-4701-b9ff-bfa23e547ddf>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Bryn Mawr, PA Algebra 1 Tutor Find a Bryn Mawr, PA Algebra 1 Tutor ...I have also been a writing tutor at Lehigh University and have three years of experience working with students for a semester to improve their writing skills. I have experience with a wide variety of topics, including writing, history, English, reading, GRE and SAT Prep, international relations ... 31 Subjects: including algebra 1, English, reading, writing ...I also teach Solfege (Do, Re Mi..) and intervals to help with correct pitch. I use IPA to teach correct pronunciation of foreign language lyrics (particularly Latin). I have a portable keyboard to bring to lessons and music if the student doesn't have their own selections that they wish to learn... 58 Subjects: including algebra 1, reading, chemistry, biology ...I have experience in writing press releases, brochures, grant and project proposals, magazine and newsletter articles, reports and essays. I have done technical writing and video scripts as well. Grammar, writing essays, English literature, SAT's, etc. 51 Subjects: including algebra 1, English, reading, chemistry I am a youthful high school Latin teacher. I have been tutoring both Latin & Math to high school students for the past six years. I hold a teaching certificate for Latin, Mathematics, and English, and I am in the finishing stages of my master's program at Villanova. 7 Subjects: including algebra 1, geometry, algebra 2, Latin ...I completed math classes at the university level through advanced calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Algebra 2 with a national tutoring chain for five years. 12 Subjects: including algebra 1, calculus, writing, geometry
{"url":"http://www.purplemath.com/bryn_mawr_pa_algebra_1_tutors.php","timestamp":"2014-04-17T21:48:26Z","content_type":null,"content_length":"24111","record_id":"<urn:uuid:9d70744f-c97b-4e6c-b741-dd9dfe666c73>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalize Fourier transform to other basis than trigonometric function up vote 4 down vote favorite The Fourier transform of periodic function $f$ yields a $l^2$-series of the functions coefficients when represented as countable linear combination of $\sin$ and $\cos$ functions. • In how far can this be generalized to other countable sets of functions? For example, if we keep our inner product, can we obtain another Schauder basis by an appropiate transform? What can we say about the bases in general? • Does this generalize to other function spaces, say, periodic functions with one singularity? • What do these thoughts lead to when considering the continouos FT? fa.functional-analysis fourier-analysis add comment 5 Answers active oldest votes It is not what you want, but may be worth mentioning. There is a huge branch of abstract harmonic analysis on (abelian) locally compact groups, which generalizes Fourier transformation on reals and circle. The main point about sin and cos (or rather complex exponent $e^{i n x}$) is that it is a character (continuous homomorphism from a group to a circle) and it is not up vote 2 hard to see that those are the only characters of the circle. That what makes Fourier transform so powerful. If you generalize it along the direction which drops characters, you'll down vote probably get a much weaker theory. add comment You need the orthogonality condition to get such an integral representation for the coefficients; otherwise it would probably be more complicated. The Fourier series of any $L^2$ function converges not only in the norm (which follows from the fact that $\{e^{inx}\}$ is an orthonormal basis) but also almost everywhere (the Carleson-Hunt theorem). Both these assertions are also true in any $L^p,p>1$ but at least the first one requires different methods than Hilbert space ones. In $L^1$, by contrast, a function's Fourier series may diverge everywhere. up vote There are many conditions that describe when a function's Fourier series converges to the appropriate value at a given point (e.g. having a derivative at that point should be sufficient). 0 down Simple continuity is insufficient; one can construct continuous functions whose Fourier series diverge at a dense $G_{\delta}$. The problem arises because the Dirichlet kernels that one vote convolves with the given function to get the Fourier partial sums at each point are not bounded in $L^1$ (while by contrast, the Fejer kernels or Abel kernels related respectively to Cesaro and Abel summation are, and consequently it is much easier to show that the Fourier series of an $L^1$ function can be summed to the appropriate value using either of those methods). Zygmund's book Trigonometric Series contains plenty of such results. There is a version of the Carleson-Hunt theorem for the Fourier transform as well. add comment Any compact normal operator on a Hilbert space has an orthonormal basis of eigenvectors. If I remember correctly then the standard Fourier series comes from the second derivative operator on L^2(0,2pi) with boundary conditions f(0)=f(2pi) and f'(0)=f'(2pi). This operator is not compact, but its inverse is (and has the same eigenvectors). Using other compact normal operators up vote 0 (usually inverses of differential operators with certain boundary conditions) you obtain other orthonormal bases. down vote add comment There are certainly many other basis for spaces of functions on an interval, if we eliminate the periodicity condition. The more widely used are orthogonal polynomials. 
Given an interval $I \subset\mathbb{R}$ and a weight $w\colon I\to (0,\infty)$, there is a sequence of polynomials $\{P_n\}$ orthogonal with respect to the weight $w$: $$\int_I P_m(x)P_n(x)w(x)\,dx=0,\quad m\ne n.$$ They are a basis of $L^2(I)$. A classical reference is Gabor Szego (1939). Orthogonal Polynomials. Colloquium Publications - American Mathematical Society. If you are interested in more general Fourier transforms, then the two things which spring immediately to my mind are: 1. Titchmarsh's book Fourier Integrals contains a detailed treatment of what he calls "generalized kernels", which vaguely are pairs of functions $h(x),k(x)\in L^2(\mathbb{R})$ such that $\int_{0}^{\infty}k(xy)\int_{0}^{\infty}h(yw)f(w)dwdy=f(x)$. 2. There is a lovely theory of "wavelets" due to Daubechies et al, which are described in many places. agh, can someone make the integral signs work right? – David Hansen Nov 27 '09 at 21:54 They work for me! – Ilya Nikokoshev Nov 27 '09 at 23:47
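To make the "other basis" idea concrete (this block is my addition, not part of the thread): once an orthogonal family $\{P_n\}$ is fixed, the analogue of a Fourier series is the weighted expansion below; the notation follows the answer above.

```latex
% Generalized "Fourier" expansion of f in the orthogonal system {P_n} for the weight w:
\[
  f \;\sim\; \sum_{n=0}^{\infty} c_n P_n,
  \qquad
  c_n \;=\; \frac{\int_I f(x)\,P_n(x)\,w(x)\,dx}{\int_I P_n(x)^2\,w(x)\,dx},
\]
% with convergence in the weighted space L^2(I, w\,dx) whenever {P_n} is a complete
% orthogonal system there; completeness is an extra hypothesis rather than automatic.
```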
{"url":"http://mathoverflow.net/questions/6990/generalize-fourier-transform-to-other-basis-than-trigonometric-function?answertab=active","timestamp":"2014-04-21T13:16:30Z","content_type":null,"content_length":"68334","record_id":"<urn:uuid:27553d6d-6c01-4266-9ca6-c8c75a1b30e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Mathematical World 1991; 187 pp; softcover Volume: 1 Reprint/Revision History: third printing 1999 ISBN-10: 0-8218-0165-1 ISBN-13: 978-0-8218-0165-9 List Price: US$26 Member Price: US$20.80 Order Code: MAWRLD/1 Throughout the history of mathematics, maximum and minimum problems have played an important role in the evolution of the field. Many beautiful and important problems have appeared in a variety of branches of mathematics and physics, as well as in other fields of sciences. The greatest scientists of the past--Euclid, Archimedes, Heron, the Bernoullis, Newton, and many others--took part in seeking solutions to these concrete problems. The solutions stimulated the development of the theory, and, as a result, techniques were elaborated that made possible the solution of a tremendous variety of problems by a single method. This book presents fifteen "stories" designed to acquaint readers with the central concepts of the theory of maxima and minima, as well as with its illustrious history. This book is accessible to high school students and would likely be of interest to a wide variety of readers. In Part One, the author familiarizes readers with many concrete problems that lead to discussion of the work of some of the greatest mathematicians of all time. Part Two introduces a method for solving maximum and minimum problems that originated with Lagrange. While the content of this method has varied constantly, its basic conception has endured for over two centuries. The final story is addressed primarily to those who teach mathematics, for it impinges on the question of how and why to teach. Throughout the book, the author strives to show how the analysis of diverse facts gives rise to a general idea, how this idea is transformed, how it is enriched by new content, and how it remains the same in spite of these changes. Part One • Ancient maximum and minimum problems • Why do we solve maximum and minimum problems? • The oldest problem--Dido's problem • Maxima and minima in nature ( optics\()\) • Maxima and minima in geometry • Maxima and minima in algebra and in analysis • Kepler's problem • The brachistochrone • Newton's aerodynamical problem Part Two • Methods of solution of extremal problems • What is a function? • What is an extremal problem? • Extrema of functions of one variable • Extrema of functions of many variables. Lagrange's principle • More problem solving • What happened later in the theory of extremal problems? • More accurately, a discussion
{"url":"http://ams.org/bookstore?fn=20&arg1=mawrldseries&ikey=MAWRLD-1","timestamp":"2014-04-20T01:14:24Z","content_type":null,"content_length":"16095","record_id":"<urn:uuid:c7cee355-95f8-4c48-8d24-f4cb1144009c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
T normal operator has splitting characteristic polynomial - eigenvectors form basis? November 24th 2009, 05:57 PM
Question: $T$ is a normal operator on a finite-dimensional real inner product space $V$ with a characteristic polynomial that splits. Prove that $V$ has an orthonormal basis of eigenvectors of $T$. By Schur's theorem, because the characteristic polynomial of $T$ splits, there exists an orthonormal basis $\beta$ for $V$ such that $[T]_{\beta}$ is upper triangular. How does the fact that the inner product space is real ensure that such a basis is composed of the eigenvectors of $T$? Thanks!
Reply, November 24th 2009, 06:57 PM: Schur's theorem tells us that $U^{*}TU=A$, where $A$ is upper triangular and $U$ is unitary. As all the eigenvalues of $T$ are real (otherwise its char. pol. wouldn't split over the reals!), $T$ is self-adjoint or Hermitian (i.e., $T=T^{*}$), and then: $A^{*}=\left(U^{*}TU\right)^{*}=U^{*}T^{*}U=U^{*}TU=A\Longrightarrow\,A=A^{*}$, and since $A$ is upper triangular then $A^{*}$ is lower triangular, thus $A$ is in fact diagonal and we're done.
{"url":"http://mathhelpforum.com/advanced-algebra/116596-t-normal-operator-has-splitting-characteristic-polynomial-eigenvectors-form-basis.html","timestamp":"2014-04-17T20:28:14Z","content_type":null,"content_length":"39988","record_id":"<urn:uuid:2c00b168-e7c5-41bd-830a-6041eb029535>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Shorewood Science Tutor Find a Shorewood Science Tutor ...For the past year, I have had the opportunity to work in an ESL classroom to obtain 100 hours of observation as required by the state of Illinois. I was able to work with students and help them with their school work. I am finishing my masters in special education and I have been working with kids with ADD/ADHD in the classroom as a teacher for the past three years. 25 Subjects: including psychology, English, sociology, ESL/ESOL ...I have completed undergraduate coursework in the following math subjects - differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential equations with applications, and complex analysis. I have a PhD. in experimental nuclear physics. I hav... 10 Subjects: including physics, algebra 2, calculus, geometry ...I love to help others succeed in the subject as well using various tricks and tips that I have created along the way. I have tutored students in chemistry at the elementary, high school, and college level. The best results are obtained when I am made aware of the topics at hand. 13 Subjects: including biology, chemistry, English, ESL/ESOL ...I have taken Calculus 1,2 & 3 along with Differential Equations, Basic math algebra 1 & 2, Geometry etc. Science classes that I have taken but not limited to are Intro to Physics 1 & Electricity and Magnetism; along with quantum physics. I have taken Biology, Intro to Chemistry, Organic Chemistry and BioChemistry as well! 26 Subjects: including chemistry, Microsoft Excel, Microsoft Word, Microsoft PowerPoint ...My minor for school is Religious Studies. I have taken classes that describe different religions, and have done my own research. I also volunteered as a Religious Educator at my church. 17 Subjects: including biology, psychology, reading, English
{"url":"http://www.purplemath.com/Shorewood_Science_tutors.php","timestamp":"2014-04-21T10:36:49Z","content_type":null,"content_length":"23873","record_id":"<urn:uuid:f3cdaad5-fbd3-4493-b953-08a1883f46d2>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Risk Probability November 19th 2012, 10:50 PM
Hi everyone. Recently came across this question in an upper level class and to be honest I am stuck on it. I am not completely sure if this is the correct forum but here it is. Any guidance is appreciated. Consider a purely probabilistic game that you have the opportunity to play. Each time you play there are n potential known outcomes x1, x2, ..., xn (each of which is a specified gain or loss of dollars according to whether xi is positive or negative). These outcomes x1, x2, ..., xn occur with the known probabilities p1, p2, ..., pn respectively (where p1 + p2 + ... + pn = 1.0 and 0 <= pi <= 1 for each i). Furthermore, assume that each play of the game takes up one hour of your time, and that only you can play the game (you can't hire someone to play for you). Let E be the game's expected value and S be the game's standard deviation. 1. In the real world, should a rational player always play this game whenever the expected value E is not negative? Why or why not? 2. Does the standard deviation S do a good job of capturing how risky this game is? Why or why not? 3. If you personally had to decide whether or not to play this game, how would you decide?
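A quick numerical sketch (my addition, not part of the thread) of how E and S are computed for such a game; the example payoffs and probabilities are made up.

```python
import math

def game_stats(outcomes, probs):
    # expected value E and standard deviation S of one play of the game
    assert abs(sum(probs) - 1.0) < 1e-9
    e = sum(p * x for p, x in zip(probs, outcomes))
    var = sum(p * (x - e) ** 2 for p, x in zip(probs, outcomes))
    return e, math.sqrt(var)

# e.g. win $100 with probability 0.1, lose $10 with probability 0.9
E, S = game_stats([100, -10], [0.1, 0.9])
print(E, S)   # E = 1.0, S = 33.0
```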
{"url":"http://mathhelpforum.com/advanced-statistics/208031-risk-probability.html","timestamp":"2014-04-16T19:31:45Z","content_type":null,"content_length":"29865","record_id":"<urn:uuid:2da28a1c-9dcd-4cf8-b6ba-c8e4d3f3600e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Vertical Line Test
The vertical line test determines whether a given relation is a function when its graph is drawn with the domain along the x-axis and the range along the y-axis. Given the graph of a set of ordered pairs, draw vertical lines through it. If every vertical line intersects the graph at most once, the relation is a function; if any vertical line intersects the graph at more than one point, the relation is not a function.
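A small programmatic version of the same test (my addition): for a relation given as finitely many plotted points, the vertical line test amounts to checking that no x-value is paired with two different y-values. The function name and the sample points are illustrative only.

```python
def passes_vertical_line_test(points):
    # some vertical line meets the graph twice exactly when one x-value
    # appears with two different y-values
    ys_at = {}
    for x, y in points:
        ys_at.setdefault(x, set()).add(y)
    return all(len(ys) == 1 for ys in ys_at.values())

parabola = [(x, x**2) for x in range(-5, 6)]            # y = x^2, a function
circle   = [(3, 4), (3, -4), (0, 5), (0, -5), (5, 0)]   # points on x^2 + y^2 = 25
print(passes_vertical_line_test(parabola))   # True
print(passes_vertical_line_test(circle))     # False: x = 3 (and x = 0) meet two points
```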
{"url":"http://www.mathcaptain.com/algebra/vertical-line-test.html","timestamp":"2014-04-17T07:15:58Z","content_type":null,"content_length":"41847","record_id":"<urn:uuid:f164d28d-7f36-437a-b550-3cb579d8488d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Honors Physics Posted by Mathew on Tuesday, September 27, 2011 at 10:59pm. An automobile accelerates from rest at 1.3 m/s^2 for 19 s. The speed is then held constant for 23 s, after which there is an acceleration of −0.9 m/s^2 until the automobile stops. What total distance was traveled? Answer in units of km
• Honors Physics - bobpursley, Tuesday, September 27, 2011 at 11:18pm While accelerating: distance = 1/2 * 1.3 * 19^2. Constant speed period: distance = finalvelocityabove * 23 seconds. Decelerating period: Vf^2 = Vi^2 + 2ad, where Vf = 0 and Vi equals the finalvelocityabove; solve for distance. Time decelerating: 0 = Vi + at, solve for t. Add the total distances, and if you need, add the total times.
• Honors Physics - Mathew, Tuesday, September 27, 2011 at 11:58pm I am still confused. For finalvelocity, where did you get 18?
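Carrying bobpursley's outline through with numbers (my addition; for what it's worth, the hint never uses 18, and the speed at the end of the first phase is 1.3 x 19 = 24.7 m/s):

```python
# Phase 1: accelerate from rest
a1, t1 = 1.3, 19.0
d1 = 0.5 * a1 * t1**2          # = 234.65 m
v  = a1 * t1                   # = 24.7 m/s

# Phase 2: constant speed
t2 = 23.0
d2 = v * t2                    # = 568.1 m

# Phase 3: decelerate to rest at 0.9 m/s^2, so 0 = v^2 - 2*0.9*d3
a3 = 0.9
d3 = v**2 / (2 * a3)           # ~ 338.9 m
t3 = v / a3                    # ~ 27.4 s

total_km = (d1 + d2 + d3) / 1000.0
print(round(total_km, 3))      # ~ 1.142 km
```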
{"url":"http://www.jiskha.com/display.cgi?id=1317178792","timestamp":"2014-04-20T22:02:16Z","content_type":null,"content_length":"9049","record_id":"<urn:uuid:2c3245fe-0ac3-4b53-bf41-00e10b6fffe3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Card Games Home Page | Other Invented Games Ops is a two-player card exchange game played with open hands. The goal is to arrange one's hand so that simple arithmetic operations can be performed across the rows and columns of the layout. Similar to chess, the rules are easy to understand but game play can involve complex strategies. Ops was invented by Bogart Salzberg . Each player begins with a 3 x 3 layout of cards ace through 9 of a particular suit (usually spades and hearts) arranged like a telephone or calculator keypad, with the 3 always in a position called "the hole". This places the 7, 8, 4 and 5 in a square called "the main" and the five other cards in "the margin". Tens from both suits (representing zeroes) are also placed in a common area called the "dish". The game is won when the hand is arranged so that the cards in the main can be added, subtracted, multiplied or divided to produce the values of cards in the margin. Each of the two rows and two columns of the main must correspond to the margin card below or beside it. The hole card is therefore not used in the "proof" of a winning hand. In this example proof, the four arithmetic operations are "1 + 3 = 4", "7 - 2 = 5", "1 + 7 = 8", and "3 x 2 = 6". The hole card is not shown because its value is not used in the proof. In each turn a player may exchange one his own cards with another card in his own hand or, with exceptions, a card in his opponent's hand or the dish. A player may not retrieve one of his own cards from an opponent's hand, or the dish, and may not take an opponent's card having the same value as one taken from his own hand in the last move. As trading continues, each player tends to accumulate a greater share of his opponents cards. When a player's hand is made up entirely of cards from his opponent's hand, the opponent is unable to alter it. Such a hand is "locked". At this point, the "trading round" has ended and the players continue exchanging cards independently until one has a winning hand. In a common variation of Ops, the closing of one trading round is immediately followed by the opening of another, as the players switch their "home suits". It is possible that both hands can be made "provable" in the same move, in the event of an exchange from one hand to the other during the trading round. In this case, the player who initiated the move loses the game. 1. Pat puts the red 1 (ace of hearts in a conventional deck) on the point. A large proportion of solutions have 1 on the point. Also, the left column is immediately solved, denoted by the green margin card. 2. Sam solves the center row by exchanging two cards within his own hand. 3. Pat solves his own center row. 4. Sam trades with the dish to solves his left column. 5. Pat could win the game by trading his 4 for the 0 in the dish, but misses the move and instead swaps his 1 and 4, solving the center column. 6. Sam trades for Pat's 1 and wins the game. A simple solitaire variation consists of shuffling the ace through 9 cards of a suit and laying them out, face up, in the 3-by-3 pattern. (There are 181,440 unique starting positions). The challenge is to solve the hand in the fewest possible moves. All hands can be solved. There are more than 100 solutions for the "natural" hand (1 through 9, as in the solitaire variation) and hundreds more for hands that can be assembled during the trading round in a two-player game. For solutions to the natural hand, 1 and 2 are the most likely to appear in the main, followed by 3, 5, 7, 4, 6, 9, and finally 8. 
The 9 card is on the point in almost a quarter of solutions to the natural hand (25 times), followed by 1 (15 times) and 2 and 7 (11 times). The 2 card appears in the center of a natural hand solution 25 times, followed by 3 (23) and 1 (20). Last updated 12th May 2007
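A sketch of the win condition as I read it (my interpretation of the layout from the rules above, not code from the game's author): the 2x2 "main" must combine, via one of +, -, x, / in some order, to the margin card beside each row and below each column, and the hole card is never used. The position names and the permissive "either order" rule are assumptions.

```python
def row_or_col_proves(a, b, target):
    # True if some arithmetic operation on the pair (a, b), in either order,
    # yields the margin card's value
    candidates = {a + b, a - b, b - a, a * b}
    if b != 0: candidates.add(a / b)
    if a != 0: candidates.add(b / a)
    return target in candidates

def hand_is_provable(main, margins):
    # main    = [[m00, m01], [m10, m11]]  -- the 2x2 'main' square
    # margins = the card beside each row and below each column (the hole card is unused)
    (m00, m01), (m10, m11) = main
    return (row_or_col_proves(m00, m01, margins['row0']) and
            row_or_col_proves(m10, m11, margins['row1']) and
            row_or_col_proves(m00, m10, margins['col0']) and
            row_or_col_proves(m01, m11, margins['col1']))

# The example proof quoted above: 1 + 3 = 4, 7 - 2 = 5, 1 + 7 = 8, 3 x 2 = 6
print(hand_is_provable([[1, 3], [7, 2]],
                       {'row0': 4, 'row1': 5, 'col0': 8, 'col1': 6}))   # True
```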
{"url":"http://www.pagat.com/invented/ops.html","timestamp":"2014-04-19T14:29:26Z","content_type":null,"content_length":"9170","record_id":"<urn:uuid:37716ee9-bd8e-42bb-a80c-d3717ff78ff7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
South Colby Prealgebra Tutor Find a South Colby Prealgebra Tutor ...I am highly qualified to tutor SAT Math, and I am currently tutoring math at levels ranging from Algebra 1 through Calculus. I recently scored over 2300 out of 2400 on a full SAT test. I hold a PhD in Aeronautical and Astronautical Engineering from the University of Washington, and I have more than 40 years of project experience in science and engineering. 21 Subjects: including prealgebra, chemistry, physics, English ...I would be happy to tutor in American or US history, but would request access to the textbook or advance knowledge of specific topics to be covered in order to be most effective. I took IB Biology in high school (the IB program is similar to AP classes), and I especially enjoyed learning about M... 35 Subjects: including prealgebra, English, reading, writing ...Throughout high school I tutored pre-calculus students. Working with them and going over multiple problems until they understood the concepts they were struggling with. I have also taken a leadership program at the University of Berkeley and through it gained skills to successfully lead others through their challenges. 15 Subjects: including prealgebra, reading, Spanish, geometry ...In addition to my role as an educator, I am a marine biology enthusiast and hope to one day work myself up to be a biologist within an accredited aquarium. I am an interpretive and life science volunteer at the Seattle aquarium during the weekends. I am also a certified SCUBA diver who is a great admirer of our Puget Sound! 9 Subjects: including prealgebra, reading, writing, algebra 1 ...I specialize in helping students with the GRE, ACT, SAT, and ASVAB. I scored perfect on my first ASVAB exam and I've been able to score perfect on repeat exams of the GRE, ACT, and SAT. I get requests from all over the country, so I usually use online meeting software. 15 Subjects: including prealgebra, GRE, algebra 1, ASVAB
{"url":"http://www.purplemath.com/South_Colby_Prealgebra_tutors.php","timestamp":"2014-04-17T04:38:02Z","content_type":null,"content_length":"24161","record_id":"<urn:uuid:78a4b5f2-53e2-4f07-8045-3308dddd07a6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: TEX Question Replies: 4 Last Post: Dec 6, 1996 7:14 AM
Re: TEX Question Posted: Dec 5, 1996 7:58 AM
Favio Miranda Perea wrote:
> Hello! Could somebody please tell me if there is a command to print the
> contradiction symbol in TeX? Or how I can draw it?
> Thanks in advance and Merry Christmas!!
There is the ulsy package for LaTeX2e which provides the contradiction symbol.
Marcus Mirbach
Seminar for Theoretical Economics
Economics Department University of Munich
Ludwigstr.28 /RG, D-80539 Muenchen
Phone: ++49 89 2180-2238; Fax: ++49 89 334672
email: mirbach@lrz.uni-muenchen.de
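For completeness, a minimal LaTeX2e sketch of the suggestion (my addition; the \blitza command name is from memory of the ulsy documentation, so treat it as an assumption and check the package docs):

```latex
\documentclass{article}
\usepackage{ulsy}  % lightning-bolt contradiction symbols (\blitza ... \blitze, as far as I recall)
\begin{document}
Suppose $x \in A$ and $x \notin A$, a contradiction \blitza
% a package-free fallback is the falsum sign: $\bot$
\end{document}
```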
{"url":"http://mathforum.org/kb/message.jspa?messageID=1552455","timestamp":"2014-04-16T05:31:45Z","content_type":null,"content_length":"19928","record_id":"<urn:uuid:ffd66a50-e6ac-4234-9abf-77c05ccfd30a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Rotations in Complex Plane Your notation is still hard to follow. For instance, the letter z is usually used to express a complex number. z = x+iy. There are some basic tools you need to perform operations on complex numbers. 1 Euler's Equation. [tex]\ e^{i \theta} = cos(\theta) + i sin(\theta) [/tex] With [tex]X=Z cos(\Theta)[/tex] and [tex]Y=Z sin(\Theta)[/tex], a number in the form [tex]X+iY[/tex] can be expressed in the form [tex]\ Z e^{i \Theta}[/tex]. (In this case 'Z' is a magnitude, a real positive value--so much for conventions.) X,Y,Z, and theta are all real valued numbers, and Z is positive. 2 Complex Conjugation. The complex conjugate of [tex]\ X+iY[/tex] is [tex]\ X-iY[/tex]. You just negate the imaginary part to get the complex conjugate. 3 Division. [tex] c = a+ib [/tex] [tex] z = x+iy [/tex] What is the value of c/z expressed in the form X+iY ? [tex]\frac{c}{z} = \frac{a+ib}{x+iy} [/tex] Multiply the numerator and denominator by the complex conjugate of the denominator. [tex]\frac{c}{z} = \frac{(a+ib)(x-iy)}{(x+iy)(x-iy)}[/tex] [tex]\ \ \ \ \ \ = \frac{(a+ib)(x-iy)}{x^2 + y^2}[/tex] [tex]\ \ \ \ \ \ = \frac{(ax+by) + i(bx - ay)}{x^2 + y^2}[/tex] [tex]\ \ \ \ \ \ = \frac{ax+by}{x^2 + y^2} + i \frac{(bx - ay)}{x^2 + y^2}[/tex]
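A quick numerical check of the final division formula (my addition, not part of the original post); the sample values are arbitrary and Python's built-in complex type is used only for comparison.

```python
def divide(a, b, x, y):
    # (a+ib)/(x+iy) via the conjugate trick worked out above; returns (real, imaginary)
    denom = x*x + y*y
    return ((a*x + b*y) / denom, (b*x - a*y) / denom)

# spot-check against Python's built-in complex arithmetic
print(divide(3, 4, 1, -2))     # (-1.0, 2.0)
print((3 + 4j) / (1 - 2j))     # (-1+2j)
```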
{"url":"http://www.physicsforums.com/showthread.php?t=260703","timestamp":"2014-04-18T08:30:00Z","content_type":null,"content_length":"41372","record_id":"<urn:uuid:97bd5e31-32a0-4934-9573-8b51e3299880>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Associations between street connectivity and active transportation Past studies of associations between measures of the built environment, particularly street connectivity, and active transportation (AT) or leisure walking/bicycling have largely failed to account for spatial autocorrelation of connectivity variables and have seldom examined both the propensity for AT and its duration in a coherent fashion. Such efforts could improve our understanding of the spatial and behavioral aspects of AT. We analyzed spatially identified data from Los Angeles and San Diego Counties collected as part of the 2001 California Health Interview Survey. Principal components analysis indicated that ~85% of the variance in nine measures of street connectivity are accounted for by two components representing buffers with short blocks and dense nodes (PRIN1) or buffers with longer blocks that still maintain a grid like structure (PRIN2). PRIN1 and PRIN2 were positively associated with active transportation (AT) after adjustment for diverse demographic and health related variables. Propensity and duration of AT were correlated in both Los Angeles (r = 0.14) and San Diego (r = 0.49) at the zip code level. Multivariate analysis could account for the correlation between the two outcomes. After controlling for demography, measures of the built environment and other factors, no spatial autocorrelation remained for propensity to report AT (i.e., report of AT appeared to be independent among neighborhood residents). However, very localized correlation was evident in duration of AT, particularly in San Diego, where the variance of duration, after accounting for spatial autocorrelation, was 5% smaller within small neighborhoods (~0.01 square latitude/longitude degrees = 0.6 mile diameter) compared to within larger zip code areas. Thus a finer spatial scale of analysis seems to be more appropriate for explaining variation in connectivity and AT. Joint analysis of the propensity and duration of AT behavior and an explicitly geographic approach can strengthen studies of the built environment and physical activity (PA), specifically AT. More rigorous analytical work on cross-sectional data, such as in the present study, continues to support the need for experimental and longitudinal study designs including the analysis of natural experiments to evaluate the utility of environmental interventions aimed at increasing PA. Physical activity contributes to health through its direct effects on disease risk as well as its indirect effects via contributions to weight loss and weight maintenance. These benefits have been comprehensively reviewed in a recent report from the US Physical Activity Guidelines Advisory Committee [1]. However, there is evidence to indicate that there is an epidemic of sedentary behavior in the developed world [2]. Recent results based on objective measurement of physical activity using accelerometers in the US and Sweden suggest that the prevalence of adherence to PA guidelines is even lower than that indicated by studies based on health surveys, with only about 5% of US and Swedish adults adhering to physical activity guideline recommendations of 30+ minutes of moderate or greater intensity PA five or more days per week [3,4]. Walking and bicycling for transportation and/or leisure are a major form of physical activity worldwide [5], and such activities can meet recommendations for physical activity [6]. 
Individual interventions to increase walking/bicycling are expensive and have seldom been implemented at the population level. Furthermore, campaigns aimed at changing behavior absent environmental change may have small or poorly maintained effects [7-9]. Thus, there is considerable interest in the potential for understanding and improving the active transportation (AT) environment as a way to increase walking and bicycling for health and to alter mode share away from automobiles towards AT, a goal thought to have environmental, energy, and potentially social benefits [10,11]. Street connectivity is one major environmental or 'built environment' feature that could have direct or indirect influences on AT. Street networks that are more connected are thought to increase walkability and those that include longer blocks, fewer intersections, and more dead-ends are argued to be less conducive to walking. Direct effects of connectivity could include ease of walking from place to place and the aesthetic correlates of more connected networks. Indirect effects of connectivity are often associated with the association between destinations and connectivity. Connectivity creates more and shorter routes to such destinations [12-15]. Diverse studies have examined the association between various measures of street connectivity including block length [16], block size [17-19], intersection density [18,20], percent four way intersections [16,21]; street density [22,23]; connected intersection ratio [19,24], and link node ratio [25]. Grid block and path length characteristics and derived indices such as the alpha and gamma index (see below) have also been reported and analyzed in relation to pedestrian behavior and mode choice [14,26-29]. Many, but not all of these studies find positive associations between measures of connectivity and AT or leisure walking. Recent papers have also called attention to the fact that many of these positive associations are weak, even when statistically significant [30-33 ]. It also seems likely that such measures are correlated with one another and therefore it is not obvious what specific recommendation about street network design arise from this body of work. The first goal of this paper is to extract multiple measures of street connectivity in a single study and try to identify the underlying factors describing street networks that are associated with active transportation via walking and bicycling. A second goal of the paper is to add a geographic perspective to the analysis of associations between street connectivity and AT. Past studies of street connectivity have largely or completely ignored the fact that respondent environments are distributed spatially and likely to be correlated with one another over some (unknown) spatial scale. Sometimes this issue has been addressed by comparing specific neighborhoods selected to differ with respect to urban form and other variables and separated geographically [22]. In this paper we explicitly explore the effects of geography by including spatial random effects in our analysis of associations between street connectivity and active transportation behavior. The third goal of the paper is to examine propensity and duration of AT separately. 
Behavioral traits such as leisure time walking and bicycling, AT or other forms of physical activity have at least two components, the probability or propensity to engage in the behavior and the duration of the behavior in the people who are active (we acknowledge that other components such as intensity and affect are not included here). Many past studies of built environment and walking have analyzed propensity and duration separately; thus we aim to illustrate the use of a multivariate distribution with a binary component for walking propensity and a log normal component for walking duration. This approach should provide more statistical power to detect covariates associated with both aspects of AT. To address these goals we analyzed street connectivity and its association with AT using a large spatially identified data set collected as part of the 2001 California Health Interview Survey. Street connectivity represents a major class of environmental variables of great interest to health geographers because they are potentially correlated with multiple health behaviors and organized over diverse spatial scales. Additional detail concerning the survey and variables analyzed here are presented in Huang et al. 2009 [34]. This study is based on a subset of data from the 2001 California Health Interview Survey (CHIS). This large (N = 55,428 households) random digit dial telephone survey in California is administered in seven languages (English, Spanish, Mandarin, Cantonese, Vietnamese, Korean and Khmer) and had a response rate, based on the American Association for Public Opinion Research equation RR4 [35], of 43.3% with a cooperation rate of 63.7% (weighted to account for the sample design) and 77.1% (unweighted). We studied residents of San Diego and Los Angeles counties where over 70% of survey respondents supplied the name of the nearest intersection to their residence (In LA County, 8728/12196 = 71.5%, and in SD County 1952/2672 = 73%). These addresses were geocoded to represent the location of each respondent for purposes of this analysis. After exclusion of respondents with missing or invalid data, 8506 respondents from LA and 1883 respondents from SD were used in the analysis. These two counties were the only ones with nearest intersection data available in CHIS 2001. The paper has two sections. In the first, we characterize street connectivity based on GIS-derived measures from buffers around the nearest intersections to respondents homes. In the second section we used a combination of CHIS variables, Census data, and the street connectivity data in a model-based analysis to explore the relative contributions of street connectivity and other variables to active transportation (AT). Contextual and connectivity variables We compiled street connectivity and two density-related variables using circular buffers (areas around a point) of radius 0.5 km surrounding each respondent's location (nearest intersection to home). These buffers were defined using TIGER map files from the 2000 U.S. Census Bureau and implemented with GIS software (ArcView, ESRI, Inc.). Data concerning population and employment density and characteristics of the street network for each buffer were then calculated at the census tract or census block (administrative units that are nested within census tracts) level. Population density within a buffer was generated by downloading US Census data at the census block level. Each half-kilometer buffer usually overlapped more than one census block. 
We assumed that population density is uniform within each census block and assigned a portion of the population within the census block to the buffer based on the area of the census block within the buffer. For example, if a buffer covers half of a census block, half of the census block's population is assigned to that buffer, in addition to the population in census blocks that were completely within the buffer. The total population in the buffer was then divided by the area (0.785 square kilometers). Employment density data were generated using data from the metropolitan planning organization for each area - the Southern California Association of Governments (SCAG) for Los Angeles and the San Diego Association of Governments (SANDAG) for San Diego. Each agency provided total employment data by census tract for the year 2000. The method to calculate employment density was identical to that of population density, except that because of census data availability, we used tracts instead of blocks. Therefore, the variance associated with population and employment densities are likely to differ in this study. For our measures of street connectivity, we first extracted or calculated values for nine variables for each buffer. Later we used principal components analysis (see results) to reduce the number of variables used in our analysis of variance. Variables included: 1) Link/Node Ratio, the link/node ratio is the total number of links divided by the total number of nodes. All nodes are included, meaning intersections and the ends of cul de sacs and dead-end streets. A higher ratio = higher connectivity. Links are defined as street segments and nodes as intersections or dead ends. 2) Intersection Density, intersection density is the number of real nodes (nodes that are at 4-way or 3-way intersections, not the end of cul de sacs) divided by the buffer area (0.785 sq. km.). A higher density = higher connectivity. 3) Street Network Density, the street network density is calculated by summing the lengths of all the links within the buffer (the total network distance within the buffer, ignoring the number of lanes on a road) and dividing by the area of the buffer (0.785 sq. km.) (Note buffer size choice was based on our expert opinion, budget constraints precluded analysis of more buffer sizes). The portion of a street (link) that continued outside the buffer was not included. A higher density = higher connectivity. 4) Connected Node Ratio, connected node ratio (CNR) is the number of real nodes divided by the total number of all nodes. If all the nodes in a buffer were at 4-way or 3-way intersections, the CNR would be 1.0. A higher ratio = higher connectivity (maximum = 1.0). 5) Block Density, block density is the total number of Census blocks within a buffer divided by the area of the buffer (0.785 sq. km.). Census block boundaries generally coincide with streets and are consistent with a block defined by the area within connecting streets. If a portion of a block was outside a buffer, only the area of the block within the buffer was included. A higher density = higher connectivity. 6) Average Block Length, the average block length is the average length of the links that are completely or partially within the buffer. For links (blocks) that continue outside the buffer, the entire length of the link is included in the calculation. Truncating the link at the buffer boundary would have reduced the length of the block artificially. A higher average length = less connectivity. 
7) Median Block Length, median block length was calculated in the same manner as average block length. A higher median length = less connectivity. The eighth variable was the Gamma index, the ratio of the number of links in the network to the maximum possible number of links between nodes. The maximum possible number of links is expressed as 3 * (# nodes - 2) because the network is abstracted as a planar graph. In a planar graph, no links intersect except at nodes [28]. Values for the gamma index range from 0 to 1 and are often expressed as a percentage of connectivity, e.g., a gamma index of 0.54 means that the network is 54 percent connected. Only links that are completely within the buffer were included. This was because every link must have a node on each end. If links were truncated at the buffer, there would be no node. In addition, only the nodes that intersect with these links were included. Gamma was only calculated for buffers with three or more nodes. All the locations with the number of nodes less than 3 were treated as missing (3 points in SD and 6 points in LA). A higher value = higher connectivity (maximum = 1.0). The ninth variable was the Alpha index. The alpha index uses the concept of a circuit - a finite, closed path starting and ending at a single node. The alpha index is the ratio of the number of actual circuits (# links - # nodes + 1) to the maximum number of circuits (2 * # nodes - 5), and is therefore equal to (# links - # nodes + 1)/(2 * # nodes - 5). Values for the alpha index range from 0 to 1. As with gamma, only links that are completely within the buffer were included and only the nodes that intersect with these links were included. Alpha cannot be calculated if the number of nodes in a buffer is less than three or the number of nodes is equal to or greater than the number of links. These cases were coded as missing data (98 points in SD and 128 points in LA). The second condition was violated more often than the first, because only links completely within a buffer could be included. This was usually in more rural areas. A higher value = higher connectivity (maximum = 1.0). Several of the above measures were highly correlated; 7 of the 36 possible pairs of the 9 variables had correlation coefficients above 80% (see below). Including highly correlated covariates in a regression model leads to instability of the model, so we used principal components (orthogonal rotation) and factor analysis to identify the main components of variance in this data set. This process constructed indices that explained most of the variance of the built environment across the locations and that could be used as independent predictors in the models. Similar principal components were derived from analyses considering LA and SD separately. These analyses were carried out in SAS JMP Version 8.0 (Cary, NC); an illustrative computational sketch of these measures and the data-reduction step follows below. Active transportation, demographic, and anthropometric variables from CHIS CHIS 2001 survey data included in this study were a measure of active transportation, and multiple relevant demographic and anthropometric variables. AT was measured by asking three short questions: 1) "Over the past 30 days, have you walked or bicycled to or from work, school, or to do errands?", 2) "How many times per day, per week or per month did you do this?" and 3) "And on average, about how many minutes did you walk or ride your bike each time?". AT was analyzed either as a measure of prevalence such as yes/no (any AT or none) from the answers to the first question, or as a measure of duration such as minutes per week among walkers/bicyclists derived from the answers to the second and third questions.
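To make the connectivity measures defined above concrete, here is a minimal Python sketch of how they could be computed for a single buffer, together with the data-reduction step. The function and argument names are hypothetical, the counts and lengths are assumed to come from a prior GIS step (clipping the street network to each 0.5 km buffer), and the PCA call only approximates the SAS JMP principal components/factor analysis described in the text (no orthogonal rotation is applied here).

```python
import math
import statistics
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

BUFFER_AREA_KM2 = math.pi * 0.5 ** 2  # 0.5 km radius circular buffer, ~0.785 sq. km


def basic_measures(n_links, n_nodes, n_real_nodes, total_link_km,
                   block_lengths_km, n_census_blocks):
    """Measures 1 through 7 for one buffer, using links/nodes fully or
    partially within the buffer, as described in the text."""
    return {
        "link_node_ratio": n_links / n_nodes,
        "intersection_density": n_real_nodes / BUFFER_AREA_KM2,
        "street_network_density": total_link_km / BUFFER_AREA_KM2,
        "connected_node_ratio": n_real_nodes / n_nodes,
        "block_density": n_census_blocks / BUFFER_AREA_KM2,
        "avg_block_length": statistics.mean(block_lengths_km),
        "median_block_length": statistics.median(block_lengths_km),
    }


def gamma_alpha(n_links_within, n_nodes_within):
    """Gamma and alpha indices, computed only from links lying entirely within
    the buffer and the nodes touching those links (None = missing, as in the study)."""
    gamma = alpha = None
    if n_nodes_within >= 3:
        gamma = n_links_within / (3 * (n_nodes_within - 2))
        if n_links_within > n_nodes_within:
            alpha = (n_links_within - n_nodes_within + 1) / (2 * n_nodes_within - 5)
    return gamma, alpha


def reduce_with_pca(buffer_table: pd.DataFrame) -> pd.DataFrame:
    """Data-reduction step: standardize the nine measures (rows = buffers) and
    keep the first two principal components (analogues of PRIN1 and PRIN2)."""
    scores = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(buffer_table))
    return pd.DataFrame(scores, columns=["PRIN1", "PRIN2"], index=buffer_table.index)
```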
Demographic and socioeconomic status (SES) variables including age, gender, race, education, and income were also extracted from the CHIS survey resource for each respondent, as were self-reported health status, immigration status and employment status. For self-rated health status we chose an activity-related variable based on responses to the query "How much does your health limit you when climbing several flights of stairs?". Responses were on a three-part scale, "Limited a lot", "Limited a little", "Not limited at all". CHIS includes a variety of other variables related to diet, tobacco and alcohol use, cancer screening practices, and health care coverage; we focused on variables commonly used in past studies of active transportation. For some analyses we also used self-reported data on height and weight to obtain body mass index [BMI = weight (kg)/height (m)^2], a measure of obesity. Geographic identifiers included latitude and longitude rounded to 0.01 degrees and zip code of address. Data concerning bus stops and light rail were obtained from the Los Angeles and San Diego public transit agencies, coded as present or absent within a buffer (Thanks to R. Adamski). Presence or absence of a freeway within a buffer was obtained from TIGER/Line files. Statistical analysis Preliminary analysis showed that the distribution of the number of minutes reported in AT was skewed and had a spike at zero, representing respondents who did not report any AT. A logarithmic transformation normalized the distribution of non-zero minutes. The importance of the potential explanatory variables was tested separately by a logistic model for the AT/no AT response and a lognormal model for the number of minutes reported by those with any AT [36]. These fixed effects models included all main effects and all possible two-way interactions at first. Non-significant (p > 0.05) interactions and then main effects were removed by a stepwise procedure. Once the initial subset of variables and their interactions were determined, the data were analyzed by a multivariate regression, with a binary component for whether a person reported any AT and a lognormal component for the number of minutes of AT. This approach has the advantage of increased power to detect significant effects that indicate a common association with both responses. For example, if older respondents were less likely to report any AT and those who did report any AT spent less time in AT, then the combined model could estimate a single parameter for the age effect, increasing the power over that from two separate models. Another advantage of the multivariate model is that it can measure any correlation between the propensity to report AT and the length of time spent in AT in geographic areas with multiple respondents. A disadvantage of this approach is that the more complex model is difficult to apply, requiring larger sample sizes and greater computational effort to estimate its parameters than either model component separately. These difficulties are compounded by the need to account for the correlation of responses among neighbors. Methods have been developed to analyze data that result from a mixture of two different statistical distributions. Zero-inflated Poisson (ZIP) methods, introduced by Lambert in 1992 [37], are regression models for count data with an excess number of zero responses. These models include a model component to represent the probability that the dependent variable occurred in a subject.
More recently, these zero-inflated mixture model methods have been extended to other types of data [38]. For example, Tooze et al. proposed a mixture model that included random effects correlation among the repeated responses of individuals [39]. This method has been applied successfully to 24-hour dietary recall data, with separate regression components for whether the respondent ate a particular food during that day and for their amount consumed of that food [40]. The probability that a person ate the food is modeled by a logistic regression model and the usual amount consumed is modeled by a normal regression model, after a suitable normalizing data transformation. This model produces a direct estimate of the correlation between the two model components but does not allow estimation of spatial correlation of the respondents, an important goal of our CHIS analysis. We used SAS PROC GLIMMIX to implement a multivariate model that is a mixture of logistic and lognormal regression components for the probability that a person reported any AT and the amount of AT, respectively, similar to the model for dietary intake described above [[41] example 5]. Covariates found to be significant predictors of either outcome (any AT and amount of AT) were included and were initially allowed to vary by type of outcome. Those with non-significant effects, as measured by p-values of the Type III (partial) sums of squares F test greater than 0.05, were removed. If there was no significant difference in an effect between the two model components, the two parameters were replaced by a single common parameter for that effect. Covariates that were significant predictors for only one of the two counties were retained in both county models for comparability of effects. Covariates indicating gender, race and age were retained regardless of significance in order to compare effects across models and counties. Each of the two regression components could include correlation among persons living in the same small geographic area, i.e., AT habits could be similar in small neighborhoods. Failure to account for this correlation, if it exists, violates the assumption of independent residual errors in standard regression analyses and can lead to mis-specification of the variances and covariances of model parameters, which in turn leads to mis-specification of the corresponding statistical significance. The spatial correlation in the original data can be accounted for by model covariates that explain the spatial patterns or by use of a spatial error structure for the variance/covariance matrix of a model random effect (a hierarchical analysis) or of the model residuals. For this analysis, we attempted to include covariates that would explain most of the underlying spatial pattern in AT behavior but also included a random effect to account for any remaining spatial correlation. We did not assume that the degree of spatial correlation was identical for the two types of responses. Spatial correlation was assessed in two ways: by an exponential decay function where correlation decreased with increasing distance between respondents' addresses, and by a threshold function where responses of persons who lived within a defined neighborhood had a constant correlation but were not correlated at all with responses from outside that neighborhood. 
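The two spatial correlation structures just described can be written directly as functions of the separation between two respondents. The sketch below is purely illustrative: the decay range and the within-neighborhood correlation are placeholder values, not parameters estimated in this study.

```python
import math


def exponential_correlation(distance_km: float, range_km: float = 10.0) -> float:
    """Exponential decay: correlation falls off continuously with distance.
    `range_km` is an illustrative decay parameter, not a value from the paper."""
    return math.exp(-distance_km / range_km)


def threshold_correlation(same_neighborhood: bool, rho: float = 0.1) -> float:
    """Threshold structure: a constant correlation `rho` for respondents in the
    same neighborhood (grid cell or zip code), zero correlation otherwise."""
    return rho if same_neighborhood else 0.0
```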
Spatial correlation for each county was assessed by using a spline approximation on a 30 × 30 cell grid, corresponding to neighborhoods approximately 2.3 miles square; smaller neighborhoods had too few respondents for stable assessment of the correlation. The threshold model was repeated with neighborhood defined by the respondents' postal zip codes. No single statistic is available to assess how well mixed effects models fit because of the complexity of the likelihood in the presence of random effects. We compared values of the generalized chi-square statistic for goodness-of-fit and checked the final models by rerunning their fixed effects equivalents separately to calculate the Hosmer-Lemeshow statistic [42] for the logistic component and the likelihood ratio statistic for the lognormal component. Residuals were examined and variograms were plotted and compared for the original and residual data. Distances for the variogram calculations were Great Circle distances based on the geocoded locations. The spatial and non-spatial models cannot be compared directly because of the default likelihood approximation used by SAS PROC GLIMMIX for random (spatial) effects models. Therefore we attempted to rerun the final models on a more powerful LINUX PC to obtain exact likelihood results. The local neighborhood spatial models did not converge, required more computer memory than was available or produced an invalid variance/covariance matrix. The zip code threshold models did converge using adaptive quadrature integral approximation methods. Because of the computational difficulties in optimizing the exact likelihoods, particularly for the larger LA sample, the results in this paper are the pseudo-likelihood (Restricted Maximum Likelihood) results, unless otherwise specified. The computational difficulties involved in estimating parameters in models where the variances/covariances are unknown, as is the case for spatial models, are well documented [[43], Chapter 9]. Inclusion of random effects or the need to estimate the covariance parameters requires use of an iterative estimation procedure, i.e., there is no exact solution to the optimization equations. Assessment of convergence, as reported above, is essential for any of these models, as it gives some assurance that the results are reliable. We addressed this problem by using a well-tested commercial software program [34] for the iterative parameter estimation process and by screening covariates and their interactions carefully to develop a parsimonious model to improve model stability. Finally, we compared results for several types of models (fixed and random effects, separate and joint propensity and duration models), with several subsets of covariates and at different geographic scales, looking for consistent effects. The joint model of propensity and duration is complex but allows information about one type of outcome (propensity or duration) to aid in predicting the other, in theory providing a more robust approach than analyses treating propensity and duration separately or simply using logistic regression with zero or zero + low levels of activity as one of the categories in the dependent variable. This paper concerns the association between active transportation as measured by self-reported levels of active transportation (AT) and independent variables including street connectivity, demographic characteristics of respondents, and a set of contextual variables related to neighborhood SES and transit access. 
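The joint model itself was fit with SAS PROC GLIMMIX; as a rough, non-SAS analogue of the screening step and the two model components described above, the sketch below fits a logistic model for the propensity to report any AT and an OLS model for log minutes among those reporting AT, using statsmodels. The input file and the column names (any_at, at_minutes_per_week, zipcode, poverty_cat) are hypothetical, and this sketch does not impose the common-coefficient constraints or the full spatial random-effects structure of the multivariate model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level analysis table (one row per respondent).
df = pd.read_csv("chis_buffers.csv")

# Component 1: propensity to report any active transportation (logistic regression).
propensity = smf.logit("any_at ~ PRIN1 + PRIN2 + age + C(poverty_cat)", data=df).fit()

# Component 2: duration among those reporting any AT (lognormal via OLS on log minutes).
walkers = df[df["any_at"] == 1].copy()
walkers["log_minutes"] = np.log(walkers["at_minutes_per_week"])
duration = smf.ols("log_minutes ~ PRIN1 + PRIN2 + age + C(poverty_cat)", data=walkers).fit()

# A zip-code random intercept for the duration component (a partial analogue of the
# threshold neighborhood structure) could be added with a linear mixed model:
duration_re = smf.mixedlm("log_minutes ~ PRIN1 + PRIN2 + age + C(poverty_cat)",
                          walkers, groups=walkers["zipcode"]).fit()

print(propensity.summary())
print(duration.summary())
```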
Respondents from the study counties, LA (n ~8,500) and SD (n ~1,900), have moderately similar characteristics compared to the entire state of California [34]. There are some differences between California and the US as a whole, between California and LA/SD, and between the two counties. The LA/SD sample is more racially/ethnically diverse than California as a whole (Table 1). Compared to the United States, LA and SD combined and the entire state of California are more racially/ethnically diverse, younger, have lower income, and have more immigrants and more college graduates and residents who did not graduate from high school (see also [34]). The two counties are similar in age structure, but San Diego has a much higher percentage of non-Hispanic Whites, a lower percentage of people earning less than 100% of the poverty level, and a lower percentage of people with less than a high school education. The percent of respondents reporting any active transportation in LA was higher than in SD (42.0% vs. 36.1%), whereas the average durations of active transportation in LA and SD were similar (84 vs. 80 minutes per week). Table 1. Demographics of subject counties (based on respondents only), California (from CHIS 2001) and the entire USA (from the 2001 National Health Interview Survey [34]). Street connectivity We extracted information concerning nine measures of street connectivity. Values of these measures are typical for large urban areas in the western and southern US (Table 2). The nine measures of street connectivity show a complex pattern of correlation (Table 3). Measures of block length are positively correlated with each other but negatively correlated with intersection and street density. Not surprisingly, there were strong positive correlations between alpha and gamma and measures of node characteristics, link node ratio and connected node ratio. This correlation structure made data reduction seem desirable, but inspection does not make it obvious whether one or two of the existing variables could adequately represent the variation present in these measures of street connectivity (Tables 2, 3). Therefore we chose to perform principal components analysis to try to identify underlying axes or factors accounting for variation in the data. Two factors account for 84% of the observed variance, with the third and fourth axes accounting for only 7% and 3% of the total variance (Table 4). Principal component one (PRIN1), accounting for 55% of the variance, showed positive loadings on all the variables except for negative loadings on the two measures of block length. Thus, it represents neighborhoods with relatively short blocks and relatively higher intersection density and proportion of four-way intersections. The second axis (PRIN2), accounting for an additional 29% of the variation, had positive loadings on block length and negative loadings on intersection density, street density, and block density. Thus it represents buffers with longer block lengths. Measures of node characteristics still load positively, thus these are connected neighborhoods, but with longer blocks reducing the density of intersections and blocks. Analysis of these two variables preserves most (84%) of the variation present in our data, but removes several computational difficulties by replacing 9 highly correlated predictor variables with two independent ones. Table 2. Means and standard deviations for connectivity variables in Los Angeles (n = 8542) and San Diego (n = 1942) counties. Table 3.
Spearman correlations amongst street connectivity variables Table 4. Principal Components analysis of street connectivity variables Spatial characteristics of the data Several figures illustrate spatial characteristics of respondents in LA and San Diego. Respondent density is roughly proportional to population density (Fig 1a, b) with concentrations, for example, of respondents in the cities of San Diego, downtown Los Angeles, Santa Monica, and Long Beach. Choropleth plots of percent reporting any AT by zip code (Fig 2a, b) and average duration of AT in respondents with any AT (Fig 3a, b) illustrate regional heterogeneity in the prevalence and duration of AT. Huang et al. [34] used spatial scan statistics to identify clusters of elevated or reduced AT prevalence. In this paper we take the complementary approach of examining the impact of pre-selected candidate determinants of AT prevalence and duration simultaneously in an analysis that accounts for spatial clustering using random effects. Figure 1. a, b. Approximate locations of respondents in Los Angeles (a) and San Diego (b) counties. Figure 2. a, b. Choropleth maps of % reporting any active transportation by zip code in Los Angeles (a) and San Diego (b) counties. Figure 3. a, b. Choropleth maps of mean active transportation duration (minutes per week) by Zip Code Tabulation Areas (ZCTAs) in Los Angeles (a) and San Diego (b) counties. Semivariograms were used to determine the scale of spatial autocorrelation [43]. These graphical analyses indicated that the correlations within each county were stronger than the correlations of responses between counties, so Los Angeles and San Diego were analyzed separately. The semivariograms also suggested that the spatial correlation was limited to respondents who lived within 10 (SD) to 20 (LA) kilometers of each other (Fig 4a). Therefore spatial correlation for each county was assessed by using a spline approximation on a 30 × 30 cell grid, corresponding to neighborhoods approximately 2.3 miles square; smaller neighborhoods had too few respondents for stable assessment of the correlation. The threshold model was repeated with neighborhood defined by the respondents' postal zip codes. These larger geographic units masked very localized spatial correlation as evident in the semivariograms, but had the advantage of large numbers of respondents in most areas with which to assess the correlation between the propensity to report AT and the amount of AT. Figure 4. a, b. Semivariograms illustrating the level of spatial autocorrelation for AT duration (logarithm of number of minutes) in Los Angeles (a) and San Diego (b) counties. To explore the spatial scale of street connectivity and AT, multivariate analyses were run at two geographic levels: zip code (large) and latitude/longitude (small, rounded to 0.01 degrees); there were 277 unique zip codes and 2463 unique latitude/longitude combinations in LA, and 91 zip codes and 856 latitude/longitude combinations in SD. On average, there were 31 people/zip code and 2.5 people/latitude-longitude in LA and 21 people/zip code and 1.5 people/latitude-longitude in SD. The square latitude/longitude "neighborhood", rounded to 0.01 degrees, has a diameter of about 0.6 miles, close to the buffer size (circle radius = 0.31 miles).
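For readers who want to reproduce the type of diagnostic shown in Figure 4, the following is a minimal sketch of a classical empirical semivariogram of log AT minutes using great-circle distances. It is illustrative only: it forms all pairwise distances, so in practice it would be run on a subsample or restricted to short separations, and it is not intended to reproduce the published figures.

```python
import numpy as np


def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers (array-friendly)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))


def empirical_semivariogram(lat, lon, z, bin_edges_km):
    """Classical estimator: average of 0.5*(z_i - z_j)^2 within distance bins.
    Note: O(n^2) in the number of points; subsample for large data sets."""
    lat, lon, z = map(np.asarray, (lat, lon, z))
    i, j = np.triu_indices(len(z), k=1)                    # all unique pairs
    d = great_circle_km(lat[i], lon[i], lat[j], lon[j])
    gamma = 0.5 * (z[i] - z[j]) ** 2
    bins = np.digitize(d, bin_edges_km)
    return [gamma[bins == b].mean() if np.any(bins == b) else np.nan
            for b in range(1, len(bin_edges_km))]
```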
Model results The final sets of covariates (Additional File 1) fit the observed data well according to the logistic goodness-of-fit fixed effects test (Hosmer-Lemeshow chi-square statistic = 9.04, p = 0.33 in LA and 12.28, p = 0.14 in SD) and a residual analysis of the lognormal fixed effects model of duration. Inclusion of neighborhood characteristics (see Additional File 1) was a significant improvement over the fixed effects model with only individual characteristics in LA (likelihood ratio chi-square statistic = 101.94, df = 19, p < 0.0001) but not in SD (likelihood ratio chi-square statistic = 12.64, df = 19, p = 0.856). The fixed effects logistic model of propensity to report AT showed no over-dispersion, suggesting that the decision to use AT was made independently by people within a neighborhood. In contrast, the observed semivariogram of the logarithms of duration of AT suggested a small spatial correlation within 10 (SD) to 20 (LA) kilometers, necessitating a spatial model (Fig 4a) and a spatial resolution below the observed level of spatial correlation. Additional file 1. Regression coefficients from multivariate spatial analysis of the association between street connectivity, individual and neighborhood characteristics and active transportation. The spatial neighborhood models, i.e., random effects models with local neighborhood effects, were fit to propensity and duration of AT separately and by a combined multivariate model. Although the spatial random effect estimates were not significantly greater than 0, the multivariate (joint) local neighborhood model seems justified by a smaller sum of squared errors, particularly in SD (generalized chi-square/df in LA = 1.00 for logistic, 1.44 for lognormal, 1.45 for multivariate with a common spatial effect, 1.43 for multivariate with local neighborhood spatial effect; generalized chi-square/df in SD = 1.03 for logistic, 1.39 for lognormal, 1.43 for multivariate with a common spatial effect, 1.39 for multivariate with local neighborhood spatial effect). An additional justification for the multivariate model was that there were common covariate effects for most of the main effects, i.e., most of the main effects impacted propensity and duration of AT to approximately the same degree (Additional File 1). This was particularly true in SD, probably due to the smaller sample size there and the resulting lower power to detect differences in effects between the two model components. The use of common effects gives greater power than either of the separate models to detect a significant effect. Also, the multivariate model can account for the correlation between the percent who reported AT and the mean number of minutes walked; e.g., the observed Pearson correlations in zip codes with more than 1 respondent were 14.20% (p = 0.02) in LA and 49.1% (p < 0.0001) in SD. This reinforces the importance of our effort to model the propensity and amount of AT jointly.
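The zip-code level correlations between the percent reporting any AT and the mean minutes of AT quoted above are simple descriptive quantities; a sketch of that kind of calculation is given below. Column names and the input file are hypothetical, and the result would not exactly reproduce the reported values without the original CHIS analysis file and survey weights.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("chis_buffers.csv")  # hypothetical respondent-level analysis file

by_zip = (
    df.assign(minutes_if_any=np.where(df["any_at"] == 1,
                                      df["at_minutes_per_week"], np.nan))
      .groupby("zipcode")
      .agg(pct_any_at=("any_at", "mean"),
           mean_minutes=("minutes_if_any", "mean"),
           n=("any_at", "size"))
)
by_zip = by_zip[by_zip["n"] > 1].dropna()   # zip codes with more than one respondent
r, p = pearsonr(by_zip["pct_any_at"], by_zip["mean_minutes"])
print(f"zip-level Pearson correlation: r = {r:.3f}, p = {p:.4f}")
```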
Additional File 1 gives the joint model results for Los Angeles and San Diego counties respectively. This table, reflecting the model's complexity, requires some explanation. The magnitude of some associations were the same for both propensity and duration of AT; these regression coefficients and corresponding p values that test the statistical significance of the covariate (not just a single category of the covariate) for predicting AT are shown in the columns labeled "Common coefficients for duration and propensity". Some covariates had a different association with duration compared to propensity, so these regression coefficients were estimated separately by the model and are shown in the columns for duration and propensity, respectively. Thus, results for a covariate and its categories, if any, will be shown in either the "common coefficients" column or in the duration and propensity columns, but not both. Exceptions to this format are for the age effect by poverty level and for working status by race due to the presence of interactions of these effects in the model. We have chosen to display the stratified coefficients, e.g., a coefficient for the age effect for each category of poverty, rather than showing the main effect and interaction regression coefficients separately, requiring the reader to calculate the combined effects. As a result, there are two sets of p values for these stratified effects: the usual F test p value is shown in the duration and propensity columns, but an extra p value is shown that represents the significance of the difference between the stratified effects and the referent category effect. For example, the age effects for poverty levels no greater than 200% of the federal poverty level were highly significant compared to the referent level (300+%) but there was no difference between the age effect for people with incomes 201%-300% and over 300% of the federal poverty level. In general, we emphasize p-values rather than the values of regression coefficients. This seems appropriate because the variables considered in this study are measured on many different scales. Combined consideration of regression coefficients and statistical significance of the variables examined in Additional File 1 should allow the readers to make their own judgments concerning the relative importance of the many variables examined in our analysis. Consideration of the mean values for the connectivity variables and levels of AT can also provide information about the magnitude of the associations observed here. A variogram of the model residuals (Fig 4b) still showed some spatial autocorrelation, i.e., there was still a small association between neighborhood (within 3 km) and the duration of AT that was unexplained by the sociodemographic and built environment neighborhood measurements. Separate covariances for the logistic and lognormal components of the multivariate model could be estimated for the latitude/longitude model, but not for the zip code model. The zip code model with separate effects for the 2 model components would not converge. That is, a more complex covariance structure, i.e., one with separate spatial effects for each of the two model components, could be detected at the smaller area level compared to the larger zip code level model. This suggests that zip code areas are too large to capture the spatial variation in AT. 
Common model effects across SD/LA and latitude/longitude and zip code There were a number of common effects across the two counties and smaller spatial units, latitude/longitude and zip code (Additional File 1, zip code effects not shown). 1) Gender had no association with AT at any spatial scale. 2) Age had nearly the same association with amount of AT for all 4 models - older respondents had slightly more minutes of AT (approximately 1% more per year of age); however, older age had the reverse association with propensity to report AT for all 4 models - older ages were less likely to report AT (approximately 1% less per year of age). In LA, older residents with an income less than 200% of the federal limit were less likely to report AT and tended to have less AT than residents with a higher income. 3) There was a trend for less reported AT among those with more health limitations; a stronger association was seen for propensity to report AT than for amount of AT in LA; no significant difference could be detected in SD. 4) Hispanics were more likely to report AT than Whites, but this was not significant in SD. 5) People who were working were much less likely to report AT and tended to report less AT; this association was attenuated in Blacks in LA. Difference between SD and LA San Diego and Los Angeles differed in a number of ways. 1) There was no significant effect of BMI, except for the obese in LA and overall in LA for the zip code model. 2) Birth outside the US had a significant positive effect on propensity to report AT and amount of AT, but was stronger in SD. 3) Education had a significant effect in LA, not SD, and the LA effect varies for the binary and lognormal components (Additional File 1). 4) There was a strong, but nonlinear across categories, effect of population density on both propensity and amount of AT in LA, not SD. 5) There was a stronger effect of poverty level in LA than in SD for both outcomes (lower income associated with more AT). Only the 100-200% of poverty level has a significant effect in SD and no trend is evident across categories. 6) There was no difference between Blacks and Whites in SD, but in LA Blacks who work were more likely to report any AT and more AT. Among more highly educated residents of LA, Blacks were less likely to report AT and had less AT than other racial/ethnic groups. 7) People in SD who had lived in the US longer tended to report less AT (propensity and duration). Differences between local neighborhood and Zip Code models Comparison of the AIC statistics for the models that converged using maximum likelihood estimation methods suggested that there was no advantage to the zip code threshold model over a simple fixed effects model, i.e., one that ignores any spatial autocorrelation in the data (AIC in LA = 22352 for zip code model, 22348 for fixed effects model; AIC in SD = 4501 for zip code model, 4497 for fixed effects model; lower values are better). The latitude/longitude models would only converge using a linearizing approximation to the maximum likelihood, so that no AIC statistics are available for comparison. However, these models did converge and provided spatial autocorrelation estimates for both components of the model (propensity and duration), suggesting that any spatial correlation of AT was at a very local geographic scale. There were a few differences in covariate effects between the Latitude/Longitude and Zip Code models (Not Shown).
Employment density was not at all significant for predicting amount of AT in SD at the latitude/longitude level, but was a significant predictor of propensity to report AT at the zip code level (lower density was associated with less AT); results for LA were similar for both geographies. In places with more connected streets (PRIN1), a higher percentage of respondents reported AT in both LA and SD; in LA there was an even stronger effect for propensity to report AT than for amount of AT, but both were significant. Built environment influences on active transportation Residents of places with more connected streets and short blocks (PRIN1) were more likely to report AT in Los Angeles (p = 0.015), but the positive association of PRIN1 with duration of AT was not significant (p = 0.08). In San Diego, the association was significant for both propensity and duration (p = 0.0019). The second measure of street connectivity (PRIN2) had a small but non-significant association with AT in both Los Angeles (p = 0.0591) and San Diego (p = 0.1227). PRIN1 appeared to be normally distributed and had means and standard deviations of 0.26 (2.1) and -1.2 (2.3) for LA and SD respectively; PRIN2 had mean 0.095 (1.6) and -0.44 (1.7) for LA and SD. Log-transformed AT minutes for respondents with any AT were 4.54 (S.D. = 1.2) for LA and 4.55 (S.D. = 1.2) for SD, or 93.7 and 94.6 minutes respectively. Residents in SD latitude/longitude level neighborhoods with a bus stop were significantly more likely to report AT, but their duration was less. There was a common positive association of bus stops with AT in LA local neighborhoods for both outcomes, but this was not significant. There was no association of bus stops with AT in zip code areas. Despite the pedestrian unfriendliness of freeways, Los Angeles areas with freeways had residents who were more likely to report AT and had more AT. Conversely, the presence of bus routes was negatively associated with both outcomes in Los Angeles. Note that the SD zip code model does not include bus stops, freeways, bus routes or rail. Because of the smaller sample size in SD than LA, fewer covariates could be included in the SD model in order to obtain model convergence. These particular covariates were excluded because they were not at all significant in the initial propensity model for SD. This study has two main results. First, diverse measures of street connectivity can be summarized by two dominant axes, one representing areas with shorter, more connected blocks and the second representing areas with longer blocks, but still exhibiting a more grid-like pattern. It remains to be seen whether this observation extends beyond two large cities in Southern California. Second, mixture models accounting for spatial autocorrelation indicate significant associations between measures of street connectivity and both the propensity to report AT and the amount of AT. As in past studies of built environment characteristics including street connectivity and physical activity, particularly walking [44,45], the associations between the built environment and AT remain modest. However, even small improvements in individual behaviors can have significant population health benefits. Additionally, the methodological and analytical advances implemented here are important in that they can enhance confidence in estimates of effect sizes as well as separate influences on the propensity versus duration of health behaviors generally and walking or other forms of physical activity specifically.
This analytical approach could apply to diet variables, tobacco use, alcohol consumption, substance abuse, and any other behavior divisible into occurrence and dose in time or quantity. Street connectivity and active transportation This study identified small but significant or near statistically significant associations between two aggregate measures of street connectivity, particularly an index representative of areas with a pattern of short blocks and a grid like structure, and active transportation (AT). This measure of connectivity (PRIN1) was more strongly associated with propensity to report AT, but was still positively associated with AT duration. These results are consistent with our recent finding that PRIN1 is elevated in clusters of active transportation identified with spatial scan analysis [34]. Without attempting to reconcile different scales for the independent variables, the magnitude of coefficients estimated for the street connectivity variables are challenging to compare directly. Consideration of mean levels of AT and means and standard deviations for PRIN1 and PRIN2 in the two counties and the coefficients reported in the supplementary file should give the reader sufficient information to think about the relative magnitude of the reported associations. A number of past studies have also examined street connectivity and its association with different measures of AT or leisure time physical activity [10,30,46]. These studies are notable for the lack of standardization in their outcome variables, measures of connectivity and analysis approach. Handy's [30] review tabulates about 50 studies concerning built environment, AT and physical activity. More such studies have appeared since her review, including a review of built environment and walking [45]. Both reviews report consistent associations between transportation walking and density, destination distance, and land use mix, but a mix of results concerning connectivity, parks and parkland, and safety. Saelens and Handy (2008) report positive associations between route/network connectivity and walking in three of seven studies of transportation walking, zero of four studies of leisure walking, and three of six studies of general walking [45]. The remainder of the studies had null or unexpected associations. A few studies report interactions between measures of walkability and other variables such as safety or demographic characteristics - more work is needed systematically examining such interactions. Another recent study reports positive associations between density and travel walking and positive associations between large block sizes and leisure walking [31,32]. Adoption of standard metrics for connectivity would facilitate more specific comparisons of results and effect sizes in such studies. Demographic correlates of active transportation Demographic correlates of active transportation were somewhat different than those reported in a recent national study of transportation walking based on data from the 2005 National Health Interview Survey [5]. In the US as a whole, transportation walking is more prevalent in men than women, decreases with age, is higher in black men and Asian/Native Hawaiian/Pacific Islander women, and is highest in the highest and lowest income categories and highest education category. 
By contrast, in Los Angeles and San Diego counties we found positive associations between age and duration of AT but negative associations for propensity to report AT, higher propensity but lower duration of AT in those with higher or lower than high school education (i.e., not just a high school diploma), and less AT in working respondents. It is difficult to know if these differences are due to regional differences, the effects of including bicycling as a mode of AT, or effects of survey characteristics. NHIS is an in-person survey and CHIS is a telephone survey. Both NHIS and CHIS results are based on self-report. Accelerometer-based measures of overall physical activity [4] and step counts based on accelerometry [47] give somewhat different results as well. Overall physical activity declines with age, is higher in men than women and exhibits age by race/ethnicity interactions. Step counts estimated by accelerometer are higher in US males than females; US national level pedometry data analyzed by other demographic variables are not yet available. In Colorado, walking, as measured with a Yamax SW-200 pedometer, declined with age, was greatest in single men and women, and was highest in respondents with incomes from $25,000-$99,000 [48]. Lack of consistent study designs, measurement modalities, and reporting schemes makes it hard to generalize about walking/bicycling in different geographic areas. Comprehensive and objectively measured data addressing walking distance and duration might be required to fully describe age-related changes in propensity to walk and the characteristics of walking. Strengths and limitations Major strengths of this study include 1) our development of aggregate measures of street connectivity using principal components analysis of multiple aspects of connectivity, 2) use of a multivariate model that is a mixture of logistic and lognormal regression components for the probability that a person walked and the amount walked, respectively, and 3) explicit analysis of the spatial scale of street connectivity and AT implemented by running multivariate analyses at two geographic levels: zip code (large) and latitude/longitude (small). Together all three of these analytical approaches are advances over past studies. In particular, use of the multivariate model allows estimation of common effects of covariates on both propensity and duration and properly accounts for spatial autocorrelation. Residual analysis demonstrates that the model covariates explained all but the most local spatial effects in the original data. There are at least four major weaknesses of the current study. First, this is a cross-sectional data set and so there are several possible alternatives to a simple causal relationship between connectivity and AT. Most notably, recent work suggests that self-selection into neighborhoods with desirable features such as walkability, by people with a preference for walking, could account for as much variation in walking as causal associations between neighborhood characteristics and walking [[10] p. 112,49,50]. Second, we were unable to obtain some important data elements in this project, specifically more comprehensive measures of land use mix. Land use mix is believed to be an important correlate of transportation walking and our use of employment density as a partial proxy for land use mix is not optimal. Ideally, parcel-level data on the use of different buildings would be collected, summarized in an index of mixed use, and included in the kinds of models described here [51].
Recent examples of this approach [51,52] use square footage in three or more land use types, such as residential, commercial, and office, in indices of walkability or the built environment. Such studies have reported positive associations between walking and walkability [45,52], but do not always attempt to separate the effects of connectivity, land use mix and other aspects of the built environment. Decomposition of these effects could increase the use of such studies by policy makers and urban planners [53]. CHIS 2001 queried respondents concerning walking and bicycling for transportation. While use of both modes represents 'active transportation', we acknowledge that walking and bicycling involve different skills, equipment, rewards, and infrastructure [54]. It seems likely that most of the active transportation examined in this study was due to walking, and our examination of 'street connectivity' is arguably more relevant to walking. However, separate measures of walking and bicycling and examination of environmental features specifically related to walking vs. bicycling could strengthen and refine future studies. Later versions of CHIS have chosen to focus on walking, with separate questions concerning leisure and transportation walking as well as statewide geocoding (http://www.chis.ucla.edu/). Walking and bicycling use networks of roads, paths and sidewalks in different ways from each other and from automobiles. The present paper is entirely based on street networks. A few recent studies have contrasted the effects of pedestrian network analysis versus street network analysis on walking [55,56]. These two papers suggest that analysis of pedestrian networks can identify stronger and novel associations between network characteristics and pedestrian behavior than the analysis of street networks. CHIS data and further work to collect and analyze pedestrian network data from California could add to this promising research area. The magnitude of the associations between street connectivity and AT observed in this study and others may seem small [30-33]. However, street connectivity is a modifiable feature of the environment, and for a population with low levels of physical activity and high levels of sedentary behavior such as that of the United States [4,57], even small increases in physical activity could have significant population and individual health benefits [58]. This paper significantly advances the analysis of street connectivity and AT by, first, identifying dominant axes from multiple measures of connectivity; second, by using mixture models for the joint analysis of active transportation propensity and duration; and third, by explicitly examining spatial autocorrelation in the street connectivity variables and accounting for this variation in our analysis. Together the results indicated that aggregate measures of street connectivity are statistically significant correlates of AT independent of a number of individual and neighborhood characteristics. This result should encourage planners and policy makers interested in influencing physical activity for health, but also provide a cautionary note concerning the magnitude of expected effects. Authors' contributions JD extracted the street connectivity variables and contributed to writing the paper. LP participated in the design of the study, performed the regression analyses and contributed extensively to writing the paper.
DB conceived of the study, contributed to the statistical analyses, wrote the first draft of the manuscript, and integrated comments and text from the other authors and reviewers. All authors read and approved the final manuscript.' We thank P. Randall-Levy for bibliographic support and Jeremy Lyman for map creation. Bryce Reeve discussed principle components analysis with us and carried out a factor analysis. Rachel Ballard-Barbash and Richard Troiano made helpful comments on the MS. We also thank Rob Adamski for extracting bus, train and freeway route data and David Stinchcomb for discussions of this project. We are grateful to the CHIS respondents for providing these data. 1. Physical Activity Guidelines Advisory Committee: Physical activity guidelines advisory committee report, 2008. Washington, DC: U.S. Department of Health and Human Services; 2008. 2. Sallis JF, Bowles HR, Bauman A, Ainsworth BE, Bull FC, Craig CL, Sjostrom M, De BI, Lefevre J, Matsudo V, Matsudo S, Macfarlane DJ, Gomez LF, Inoue S, Murase N, Volbekiene V, McLean G, Carr H, Heggebo LK, Tomten H, Bergman P: Neighborhood environments and physical activity among adults in 11 countries. Am J Prev Med 2009, 36:484-490. PubMed Abstract | Publisher Full Text 3. Hagstromer M, Oja P, Sjostrom M: Physical activity and inactivity in an adult population assessed by accelerometry. Med Sci Sports Exerc 2007, 39:1502-1508. PubMed Abstract | Publisher Full Text 4. Troiano RP, Berrigan D, Dodd KW, Masse LC, Tilert T, McDowell M: Physical activity in the United States measured by accelerometer. Med Sci Sports Exerc 2008, 40:181-188. PubMed Abstract | Publisher Full Text 5. Kruger J, Ham SA, Berrigan D, Ballard-Barbash R: Prevalence of transportation and leisure walking among U.S. adults. Prev Med 2008, 47:329-334. PubMed Abstract | Publisher Full Text 6. U.S.Department of Health and Human Services: Physical activity and health: a report of the Surgeon General. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion; 1996. 7. Dunn AL, Andersen RE, Jakicic JM: Lifestyle physical activity interventions. History, short- and long-term effects, and recommendations. Am J Prev Med 1998, 15:398-412. PubMed Abstract | Publisher Full Text 8. Baier M, Calonge N, Cutter G, McClatchey M, Schoentgen S, Hines S, Marcus A, Ahnen D: Validity of self-reported colorectal cancer screening behavior. Cancer Epidemiol Biomarkers Prev 2000, 9:229-232. PubMed Abstract | Publisher Full Text 9. Beaudoin CE, Fernandez C, Wall JL, Farley TA: Promoting healthy eating and physical activity short-term effects of a mass media campaign. Am J Prev Med 2007, 32:217-223. PubMed Abstract | Publisher Full Text 10. Frank L, Engelke P, Schmid T: Health and community design: the impact of the built environment on physical activity. Washington, DC: Island Press; 2003:250. 11. Saelens BE, Sallis JF, Frank LD: Environmental correlates of walking and cycling: findings from the transportation, urban design, and planning literatures. Ann Behav Med 2003, 25:80-91. PubMed Abstract | Publisher Full Text 12. Moudon AV, Hess P, Snyder MC, Stanilov K: Effects of site design on pedestrian travel in mixed-use, medium-density environments. Transp Res Record 1997, 1578:48-55. Publisher Full Text 13. Randall TA, Baetz BW: Evaluating pedestrian connectivity for suburban sustainability. J Urban Plann Dev 2001, 127:1-15. Publisher Full Text 14. 
{"url":"http://www.ij-healthgeographics.com/content/9/1/20","timestamp":"2014-04-18T13:17:42Z","content_type":null,"content_length":"177358","record_id":"<urn:uuid:6f96b588-7aa5-43e3-9ad8-8ba6b99a6318>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
On Chebyshev-type inequalities for primes - Ann. of Math , 2002
"... We present an unconditional deterministic polynomial-time algorithm that determines whether an input number is prime or composite. 1 ..."

"... Abstract—We introduce bisimulation up to congruence as a technique for proving language equivalence of non-deterministic finite automata. Exploiting this technique, we devise an optimisation of the classical algorithm by Hopcroft and Karp [12] that, instead of computing the whole determinised automa ..."
Cited by 5 (0 self) Add to MetaCart
Abstract—We introduce bisimulation up to congruence as a technique for proving language equivalence of non-deterministic finite automata. Exploiting this technique, we devise an optimisation of the classical algorithm by Hopcroft and Karp [12] that, instead of computing the whole determinised automata, explores only a small portion of it. Although the optimised algorithm remains exponential in the worst case (the problem is PSPACE-complete), experimental results show improvements of several orders of magnitude over the standard algorithm. I.

, 903
"... Abstract. Let k ≥ 0, a ≥ 1 and b ≥ 0 be integers. We define the arithmetical function g_{k,a,b} for any positive integer n by g_{k,a,b}(n) := (b+na)(b+(n+1)a)···(b+(n+k)a) / lcm(b+na, b+(n+1)a, ..., b+(n+k)a). Letting a = 1 and b = 0, g_{k,a,b} becomes the arithmetical function introduced previously by Farh ..."
Cited by 3 (2 self) Add to MetaCart
Abstract. Let k ≥ 0, a ≥ 1 and b ≥ 0 be integers. We define the arithmetical function g_{k,a,b} for any positive integer n by g_{k,a,b}(n) := (b+na)(b+(n+1)a)···(b+(n+k)a) / lcm(b+na, b+(n+1)a, ..., b+(n+k)a). Letting a = 1 and b = 0, g_{k,a,b} becomes the arithmetical function introduced previously by Farhi. Farhi proved that g_{k,1,0} is periodic and that k! is a period. Hong and Yang improved Farhi's period k! to lcm(1, 2, ..., k) and conjectured that lcm(1, 2, ..., k, k+1)/(k+1) divides the smallest positive period of g_{k,1,0}. Recently, Farhi and Kane proved this conjecture and determined the smallest positive period of g_{k,1,0}. For general integers a ≥ 1 and b ≥ 0, it is natural to ask the interesting question: is g_{k,a,b} periodic? If the answer is affirmative, one then asks the further question: what is the smallest positive period of g_{k,a,b}? In this paper, we mainly study these questions. We first show that the arithmetical function g_{k,a,b} is periodic. Consequently, we provide a detailed p-adic analysis of the periodic function g_{k,a,b}. Finally, we determine the smallest positive period of g_{k,a,b}. So we answer the above two questions completely. Our result extends the Farhi-Kane theorem from the set of positive integers to the general arithmetic progression. 1.

- In Proc. of STACS, volume 5 of LIPIcs , 2010
"... Abstract. One-counter processes (OCPs) are pushdown processes which operate only on a unary stack alphabet. We study the computational complexity of model checking computation tree logic (CTL) over OCPs. A PSPACE upper bound is inherited from the modal µ-calculus for this problem [20]. First, we ana ..."
Cited by 2 (2 self) Add to MetaCart
Abstract. One-counter processes (OCPs) are pushdown processes which operate only on a unary stack alphabet. We study the computational complexity of model checking computation tree logic (CTL) over OCPs. A PSPACE upper bound is inherited from the modal µ-calculus for this problem [20]. First, we analyze the periodic behaviour of CTL over OCPs and derive a model checking algorithm whose running time is exponential only in the number of control locations and a syntactic notion of the formula that we call leftward until depth. In particular, model checking fixed OCPs against CTL formulas with a fixed leftward until depth is in P. This generalizes a corresponding result from [12] for the expression complexity of CTL's fragment EF. Second, we prove that already over some fixed OCP, CTL model checking is PSPACE-hard, i.e., expression complexity is PSPACE-hard. Third, we show that there already exists a fixed CTL formula for which model checking of OCPs is PSPACE-hard, i.e., data complexity is PSPACE-hard as well. To obtain the latter result, we employ two results from complexity theory: (i) converting a natural number in Chinese remainder presentation into binary presentation is in logspace-uniform NC^1 [8], and (ii) PSPACE is AC^0-serializable [14]. We demonstrate that our approach can be used to obtain further results. We show that model-checking CTL's fragment EF over OCPs is hard for P^NP, thus establishing a matching lower bound and answering an open question from [12]. We moreover show that the following problem is hard for PSPACE: given a one-counter Markov decision process, a set of target states with counter value zero each, and an initial state, to decide whether the probability that the initial state will eventually reach one of the target states is arbitrarily close to 1. This improves a previously known lower bound for every level of the Boolean hierarchy shown in [5]. 1

, 2003
"... These notes contain a description and correctness proof of the deterministic polynomial-time primality testing algorithm of Agrawal, Kayal, and Saxena. Some background from number theory and algebra is given in Section 4. 1 A polynomial identity for prime numbers Theorem 1.1 Let n ≥ 2 and a ≥ 0 be in ..."
Add to MetaCart
These notes contain a description and correctness proof of the deterministic polynomial-time primality testing algorithm of Agrawal, Kayal, and Saxena. Some background from number theory and algebra is given in Section 4. 1 A polynomial identity for prime numbers. Theorem 1.1 Let n ≥ 2 and a ≥ 0 be integers. 1. If n is a prime number, then (x + a)^n = x^n + a in the ring Z_n[x].

, 2009
"... ∀k ∈ N. Recently, Farhi proved a new identity: lcm( (k choose 0), (k choose 1), ..., (k choose k) ) = lcm(1, 2, ..., k+1)/(k+1), ∀k ∈ N. In this note, we show that Nair's and Farhi's identities are equivalent. Throughout this note, let N denote the set of nonnegative integers. Define N* := N \ {0}. There are lots of known results ab ..."
Add to MetaCart
∀k ∈ N. Recently, Farhi proved a new identity: lcm( (k choose 0), (k choose 1), ..., (k choose k) ) = lcm(1, 2, ..., k+1)/(k+1), ∀k ∈ N. In this note, we show that Nair's and Farhi's identities are equivalent. Throughout this note, let N denote the set of nonnegative integers. Define N* := N \ {0}. There are lots of known results about the least common multiple of a sequence of positive integers. The most renowned is nothing else than an equivalent of the prime number theorem; it says that log lcm(1, 2, ..., n) ∼ n as n approaches infinity (see, for instance, [6]), where lcm(1, 2, ..., n) means the least common multiple of 1, 2, ..., n. Some authors found effective bounds for lcm(1, 2, ..., n). Hanson [5] got the upper bound lcm(1, 2, ..., n) ≤ 3^n (∀n ≥ 1). Nair [12] obtained the lower bound lcm(1, 2, ..., n) ≥ 2^n (∀n ≥ 9). Nair [12] also gave a new nice proof of the well-known estimate lcm(1, 2, ..., n) ≥ 2^(n−1) (∀n ≥ 1). Hong and Feng [7] extended this inequality to the general arithmetic progression, which confirmed Farhi's conjecture [2]. Regarding many other related questions and generalizations of the above results investigated by several authors, we refer the interested reader to [1], [4], [8]-[10]. By exploiting the integral ∫_0^1 x^(m−1) (1 − x)^(n−m) dx, Nair [12] showed the following identity involving the binomial coefficients: Theorem 1 (Nair [12]). For any n ∈ N*, we have lcm( (n choose 1), 2 (n choose 2), ..., n (n choose n) ) = lcm(1, 2, ..., n). Recently, by using Kummer's theorem on the p-adic valuation of binomial coefficients ([11]), Farhi [3] provided an elegant p-adic proof of the following new interesting identity involving the binomial coefficients: Theorem 2 (Farhi [3]). For any n ∈ N, we have lcm( (n choose 0), (n choose 1), ..., (n choose n) ) = lcm(1, 2, ..., n+1)/(n+1).

, 907
"... coefficients ..."

"... Abstract. In this paper, we prove that for positive integers k and n, the cardinality of the symmetric differences of {1, 2, ..., k}, {2, 4, ..., 2k}, {3, 6, ..., 3k}, ..., {n, 2n, ..., kn} is at least k or n, whichever is larger. This solved a problem raised by Pilz in which binary composition codes were stu ..."
Add to MetaCart
Abstract. In this paper, we prove that for positive integers k and n, the cardinality of the symmetric differences of {1, 2, ..., k}, {2, 4, ..., 2k}, {3, 6, ..., 3k}, ..., {n, 2n, ..., kn} is at least k or n, whichever is larger. This solved a problem raised by Pilz in which binary composition codes were studied. 1.

, 1993
"... (n,k) n-k + l (ra + U) ..."
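Several of the displayed identities above had to be reconstructed from a garbled rendering, so here is a short standalone numerical check of Theorems 1 and 2 as restated (the lcm_list helper is ours, not from any of the cited papers; requires Python 3.8+ for math.comb):

    from math import comb, gcd
    from functools import reduce

    def lcm_list(values):
        # least common multiple of an iterable of positive integers
        return reduce(lambda a, b: a * b // gcd(a, b), values, 1)

    for n in range(1, 30):
        # Theorem 1 (Nair): lcm(1*C(n,1), 2*C(n,2), ..., n*C(n,n)) = lcm(1, 2, ..., n)
        assert lcm_list(k * comb(n, k) for k in range(1, n + 1)) == lcm_list(range(1, n + 1))
        # Theorem 2 (Farhi): lcm(C(n,0), ..., C(n,n)) = lcm(1, 2, ..., n+1) / (n+1)
        assert lcm_list(comb(n, k) for k in range(n + 1)) * (n + 1) == lcm_list(range(1, n + 2))
    print("both identities verified for n = 1 .. 29")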
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=844239","timestamp":"2014-04-24T20:13:45Z","content_type":null,"content_length":"33366","record_id":"<urn:uuid:ec6237cf-1e6c-498d-b3fe-64e3a631ad92>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Functions as Processes or Rules
Discussion: "Function Machines"

Mentor: We are going to study functions by using a model. This particular model closely resembles the way scientists think about functions. Let me draw a machine, which can be anything with two openings. Arrows show "In" and "Out" directions.
Student: What are the X and Y for?
Mentor: X and Y are traditional names for input and output, that is, for the numbers that we put into the machine and the number that the machine puts out. A number goes in, the machine does something to it, and another number comes out. The function machine does the same thing to every number. Sometimes we just call it "function" for short. This is how the machine works: the directions are written in a special code people use for functions. Let's use this special code for an example: X + 2 = Y.
Mentor: So, what is your number?
Student: 5.
Mentor (points to the equation): The function machine does something to 5, and out comes 7.
Student: 12.
Mentor: In comes 12, out comes 14.
Student: It adds 2!
Mentor: You have revealed the secret of this function. See if you can read the mathematical language.
Student: Sure, X plus 2 is equal to Y.
Mentor: Here is another machine. Try to find out what this one does! Give me some numbers.
Student: 3.
Mentor: In comes 3, out goes 6.
Student: It adds 3!
Mentor: Try another number just to check.
Student: 10.
Mentor: In comes 10, out goes 20.
Student (puzzled): 5.
Mentor: 10.
Student: Aha! I know this one. It is multiplying by 2. I would like to construct the next machine!
Mentor: Can you make a machine that describes something from your life? Maybe one about a recent price increase. If every price was raised 10%, what do I pay extra?
Student: Sorry, it is my turn! Can you think of some other function machines?
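To make the "machine" metaphor concrete, here is one way to sketch these machines in code (a toy illustration of ours, not part of the original discussion):

    def add_two_machine(x):
        # the first machine in the dialogue: X + 2 = Y
        return x + 2

    def double_machine(x):
        # the second machine: in comes 3, out goes 6; in comes 10, out goes 20
        return 2 * x

    def price_increase_machine(price):
        # the student's real-life machine: with a 10% price increase,
        # the extra amount you pay is 0.10 times the old price
        return 0.10 * price

    for x in (5, 12):
        print(x, "->", add_two_machine(x))   # prints 5 -> 7 and 12 -> 14

Each "machine" is just a rule that turns every input into exactly one output, which is the whole point of the model.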
{"url":"http://www.shodor.org/interactivate1.0/discussions/fd1.html","timestamp":"2014-04-19T07:01:43Z","content_type":null,"content_length":"3391","record_id":"<urn:uuid:30e1eecb-9e8e-4081-a49c-df9b23e20b70>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://www.coursehero.com/file/6245931/MATH-214-Chapter-13/","timestamp":"2014-04-20T19:09:09Z","content_type":null,"content_length":"55172","record_id":"<urn:uuid:a3f109e6-0f7f-4dbd-95f3-10afc1fb5b67>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Jacobs Physics

Most textbooks define work and kinetic energy, then show the connection: the net work done on an object is equal to the change in that object's kinetic energy. My students have always had difficulty with this principle. I spend a LOT of time explaining how to calculate the net work on an object, either by taking the scalar sum of the work done by each of several forces, or by calculating the work done by the net force. It doesn't matter – students will continue to set the work done by any old force equal to a change in kinetic energy (or sometimes to just kinetic energy, without the "change" part).

The most confusing type of problem asks something like, "How much work must you do to raise a 1 kg block 1 m high?" Students don't know whether the answer should be +10 J, -10 J, or zero. And don't get me started on "How much work must be done by an external force to move a charge from point P to infinity." These problems are tricky because of the implicit assumption of zero kinetic energy at each position. Using the standard textbook definitions, I can get students to understand that since there's no change in kinetic energy in these cases, zero net work was done. But that doesn't answer the question! This idea of work done by a non-conservative force is difficult in the context of the work-energy theorem.

My answer this year: I didn't technically teach the work-energy theorem, as described in textbooks, at all. I defined work properly, including how to find the sign of work done by a force. (If the force has a component parallel to displacement, work done by the force is positive; if the force component is antiparallel to the displacement, work done by the force is negative.) I defined kinetic energy, and gravitational potential energy. Next I jumped straight into the conservation of energy, but I wrote it in a weird way.

Consider an object moving between two positions A and B. We know the total mechanical energy at position A must equal the total mechanical energy at position B, less the work done by any non-conservative force that dissipates mechanical energy to thermal or other energy. Instead of using one side of an equation for position A and one side for position B, I wrote the following:

W[NC] = (KE[B] – KE[A]) + (PE[B] – PE[A])

How did I define this W[NC] term? I called that the work done by a non-conservative force. (How did I define a "non-conservative force?" At this stage, I said a non-conservative force was anything aside from the force of gravity. Bear with me.) In the standard conservation of energy without friction problem, W[NC] goes to zero. If a block slides with some friction or air resistance, then W[NC] becomes a negative value; if we're talking, say, an airplane with a propeller, then W[NC] is the (positive) work done by the propeller.

The class had little trouble with this formulation, especially as they quickly recognized that most of these terms will usually go to zero. As we discussed other forms of potential energy – elastic potential energy and electrical potential energy – we merely changed our equation for PE, and we called W[NC] the work done by any force that isn't gravity, a spring, or electricity. This definition is entirely correct and consistent as long as we stick to problems involving a single form of potential energy. (At higher levels of physics we'd have to discuss the deeper meaning of a "conservative" force and how the potential energy is defined… but we're not at a higher level of physics right now.)
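Here is the formulation applied to the block question above, as a quick numerical sketch (the numbers are mine, using the rounded g = 10 N/kg that the post uses):

    g = 10.0           # N/kg, rounded gravitational field
    m, h = 1.0, 1.0    # the 1 kg block raised 1 m, starting and ending at rest

    KE_A, KE_B = 0.0, 0.0            # no change in kinetic energy
    PE_A, PE_B = 0.0, m * g * h      # gravitational PE measured from the starting height

    # W_NC = (KE_B - KE_A) + (PE_B - PE_A)
    W_NC = (KE_B - KE_A) + (PE_B - PE_A)
    print(W_NC)   # +10 J: the work you (the non-conservative "lifting force") must do

With both kinetic energy terms zero, all of W[NC] shows up as potential energy, which is exactly why the +10 J answer falls out with no hand-wringing about "net work."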
The main advantages of this formulation of what I now call the "work energy theorem":

• I spend essentially no time explaining how to find the NET work on an object. The concept of net work is not particularly important in this formulation. Good – that used to be confusing to my students.

• We only have to learn and use one overriding equation for all energy conservation-type problems. I used to teach Wnet = ΔKE, plus conservation of energy without friction, plus energy conservation with friction. That's all accounted for in my newly formulated work-energy theorem.

• The problem in green in the third paragraph above no longer causes trouble! Because I've explicitly included how work done on an object can change both the object's kinetic AND potential energy, there's no confusion. Both KE terms go to zero, one of the PE terms goes to zero, and we end up with W[NC] = PE[B]. When we lift the object, we have to do work equal to mgh; when we move the charge from point P to infinity, since potential energy at infinity is zero, we do work equal to qV at point P.

My colleague teaching honors freshman physics tried this formulation, and he loved it. I'm going to do this again next year. Try it! Tell me what you think.

Darren Tarshis, a physics teacher in Hayward, CA, has some physical optics questions: Imagine that red light with a wavelength of 600 nm passes from air to a chunk of, say, diamond. In the diamond, I know the speed slows, which causes the wavelength to shorten (because the frequency remains constant). In the diamond, would the light have a different color because of its new wavelength? I always teach my students that for a sound wave, pitch is determined by the wavelength/frequency, and for a light wave, color is determined by the wavelength/frequency. But I'm starting to think this may be incorrect, and the pitch is actually determined by frequency only, not wavelength, and color is determined by wavelength only, not frequency.

Yup. Frequency determines color and pitch. The red light stays red even in diamond. As a quick example: My voice is baritone. Imagine that you are in the pool with your ears just under water, and I am standing on deck talking to you. When the sound waves from my voice enter the water, they start moving about 4 times faster. The frequency doesn't change -- frequency of a wave NEVER changes when the wave changes materials -- so the wavelength increases by a factor of four as well. If pitch were determined by wavelength, then my voice would sound not only soprano, but squeaky soprano. Similarly, have you ever stood underwater and looked up at the trees overhanging the pool? The leaves of the trees still look green, even though the light speed (and thus the wavelength) has decreased by 25%.

I also propose a fanciful biological rationale for pitch being related to frequency only. The eardrum vibrates in response to incoming sound waves. It is the rate of vibration -- the frequency -- that can be measured by the ear and converted to a pitch. But how would an ear measure wavelength? With a meterstick? With a teeny weeny tape measure that an invisible goblin sticks out of the ear to measure the peak-to-peak distance of the incoming sound wave?

As long as the sound wave is in room temperature air, or as long as the light wave is in a vacuum (or air), then wavelength and frequency can be used to describe color and pitch interchangeably. That's why it's perfectly okay to say that red light is about 700 nm, and violet is about 400 nm.
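For the record, here is the arithmetic for Darren's 600 nm red light entering diamond (the index of refraction n ≈ 2.4 is a standard textbook value, my addition rather than part of his question):

    c = 3.0e8                  # m/s, speed of light in vacuum
    n_diamond = 2.4            # approximate index of refraction of diamond
    wavelength_air = 600e-9    # m, the red light in the question

    frequency = c / wavelength_air                        # ~5.0e14 Hz; this is what sets the color
    wavelength_in_diamond = wavelength_air / n_diamond    # ~250 nm inside the diamond

    print(frequency, wavelength_in_diamond)
    # The frequency is unchanged inside the diamond, so the light still looks red,
    # even though its wavelength there is shorter than violet light's wavelength in air.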
Those wavelength values must change when the light enters diamond, but the frequency of a given color will never change.

Although I spent three straight days working on class and demo setup at the end of spring break, I don't have anything I'm ready to post about these. Instead, I recommend two links:

1. This music video by the Illinois rock band OK Go reminds me of the halcyon days when MTV actually played clever music videos rather than girls dancing without their vests on. The 4-minute video chronicles the world's most awesome Rube Goldberg device -- it puts that Honda commercial from a few years back to shame. Physics content galore! Thanks to Carrie Russell for making me aware.

2. Randall Munroe, author of the xkcd comic strip that I linked a couple of weeks ago, frequently produces content that I want plastered on the walls of my classroom. I've already ordered his visual presentation of character interactions in Star Wars and Lord of the Rings (click on the image to enlarge; go to his store to order the poster). This chart about metric units and orders of magnitude will be coming next, once I get around to ordering again. But today's special awesome poster isn't yet available in Munroe's store. A friend of his who works in nuclear physics was becoming frustrated at fielding so many calls from ignorant journalists in the wake of the issues at the Japanese nuclear plant. At the friend's request, Munroe created this Radiation Dose Chart. I've linked here to the "blag"* post about the chart because he summarizes the chart's history and rationale there. Click on the small image for a large image. Look for the bananaphone reference! This chart is public domain, so you can get it printed and hung yourself; I'm just waiting until it shows up in the xkcd store.

* Munroe writes a "blag," not a "blog." He just likes being technologically different.

I'm generally not a fan of "link dump" posts, but (a) these were too good to pass up, and (b) I don't have anything else today. Time for class -- I'll have lots more by the weekend.
Not even the most advanced freshman physics student will develop a deep enough knowledge of electrical attraction or quantum rules to assist in understanding chemistry. Okay, sure, a first-year physics course can deliberately do some introduction to atomic structure as a prelude to chemistry. So what? Most everyone was supposed to have learned the structure of the atom in 7th or 8th grade. Who says we can better teach that under the auspices of physics rather than physical science or chemistry? #2) A deep understanding of physics requires mathematics beyond the freshman level. Not true. It is easily possible to teach conceptual physics with no explicit algebra required. Such a course can still teach the “Big Three” skills. Yeah, I’ll grant you that alumni of a freshman conceptual physics course are not ready to do the advanced waves/optics/quantum sophomore undergraduate sequence, but that’s not the point of high school physics. #3) Physics involves more equipment that students can see, feel, and touch… more concepts within the realm of students’ prior experiences. And, so, physics is a better stepping stone into more abstract sciences. True, assuming physics is being taught correctly. A freshman physics course that consists of a teacher solving math problems on the board is doomed to failure. However, a successful freshman physics class allows students to test their predictions with actual carts, springs, balls, boats, lenses, resistors, lasers… and that is a perfect introduction to rigorous #4) Physics is simply too difficult for general population freshmen. A descriptive biology course is more likely to promote success in students who don’t have a future in the quantitative sciences. Possibly true, possibly defeatist. If a school does not have teachers who are willing and able to teach freshmen physics enthusiastically and appropriately, then students are better served by a descriptive biology course, taking a physics course in college or as a more intellectually mature senior. However, with appropriate teaching, even general freshmen can be successful in physics; and some of these folks might see that they’re NOT actually shut off from a future in the quantitative sciences. I know my school has seen a non-negligible number of initially pessimistic students discover an interest in physics through our freshman course. Two arguments that I rarely hear, but that are the most important reasons that we teach 9th grade physics: Many freshmen will at first struggle with or rebel against the “rigorous” problem solving required in physics. By the end of the year, though, the class is comfortable with the problem solving process. They thus have little trouble with the more abstract problems in sophomore chemistry. YES! Something like 20% of our sophomores are new, having not taken freshman physics. Except for the very top-end students, the new sophomores exhibit the same initial struggles with chemistry that our freshmen did with physics. Our conclusion: the first rigorous, quantitative high school science course, whatever the content, presents intellectual obstacles to the general population of students. Those obstacles can be overcome just as effectively by freshmen as sophomores. The question is, do we want to teach chemistry or physics to these freshmen; then we’re back to argument #3 above. It is politically easier to sell a required physics course to bright, enthusiastic new freshmen than to grade- and college- focused upperclassmen. The most important truth of all. 
Because so many of their parents fear physics, so do our students often fear physics. Our freshmen, and their parents, just accept that physics is the required science course that everyone takes. There’s little drama – students try to do what the teacher asks, they see their grades improve if they work hard; it’s just another class. However, an upperclass physics course takes on heightened significance in students’ minds. A poor start to the year in 12th grade physics causes anger and resentment – “This course is keeping me from getting into a good college!” Even before the course starts, the gossip amongst the fear mongerers will prejudice students. The initial attitude of freshmen toward physics is generally open-minded neutrality – they’re so busy adapting to everything high school entails that they don’t think about physics beyond the next day’s homework. But to teach a class of general-level seniors who arrive with their jaws set, their eyes narrowed, prepared to game their grade with every trick they know – that’s a setup for disaster. Please understand that I don’t believe that physics-first is the correct approach in all situations. A successful physics-first program requires top-rate, dedicated 9th grade physics teachers, as well as a supportive department and higher administration. If any of those pieces is not in place, 9th grade physics might not work, and the bio-chem-physics sequence might well be more appropriate. In today’s and yesterday’s post I merely deconstruct arguments so that the potential of a 9th grade program (or the necessity to maintain a physics last sequence) is not dismissed out of hand. No, not the FIRST robotics competition, PHYSICS-FIRST. I’m in the middle of a two-week spring break right now. Since I’m incapable of relaxing and ignoring physics teaching for that long – just ask Burrito Girl, my wife and sidekick – I’ve been working on a major redesign of my school’s overall physics curriculum. In the process of this revision, I’ve had occasion to reflect on all of the physics-first conversations I’ve had over the years. In tomorrow’s post, I will address some of the major arguments for and against a physics-first approach. One of the most important realizations I’ve come to about a physics-first discussion is that the discussion itself is pointless until agendas are revealed and acknowledged. The decision about an overall school curriculum should be made on the merits of the proposed courses, and how well a curriculum does or doesn’t suit the school’s particular constituency of students and teachers. Too often, participants in the conversation steer consensus toward their own predetermined outcomes without listening to or acknowledging reasonable arguments. The physics first supporters with agendas whom I’ve encountered generally fall into two categories: (1) Evangelical types, generally disciples of Leon Lederman, who are on a mission to grant physics its rightful place as the first, best, and most important of the three major sciences.* (2) Administrators who want change for the sake of self-promotion, so they can say “Look what I did, I ushered in a physics first curriculum! Now give me a promotion.” * A line about physics first from Lederman’s Wikipedia entry, emphasis mine: “Also known as “Right-Side Up Science” and “Biology Last,” this movement seeks to rearrange the current high school science curriculum…” That’s, without question, evangelism. 
The physics first detractors with an agenda whom I’ve known generally have a very simple point of view: “I don’t want to change, because I’m happy teaching whatever I’m teaching. Doing something different would require a lot of work, and I might lose my monopoly on my special course or special students.” Is a physics-first approach right? There can be no general yes-or-no answer. The question is ill-posed without substantial context. The better questions are, how can a physics-first program be successful at our school, what about physics first could define its success for us, and why use physics-first rather than a standard alternative. Answering these questions neutrally, without promoting an agenda, is the way toward designing physics course offerings appropriate for your school. Tomorrow, I’ll address some common physics-first pro- and con- arguments. My first reaction when my colleague El Mole showed me the cartoon at the right was that 2 cm is way too significant a difference. [Pause while you read the comic. Good, ain't it?] The fundamental principle is correct. Because the linear speed of a point on the earth's surface is larger at the equator than at the poles, the "apparent weight" of a person, and thus the "apparent gravitational field g," will be smaller. In advanced mechanics classes, Newton's Second Law is formulated in the rotating reference frame of the earth, and the effective g is reduced by a centrifugal acceleration term equal to v^2/r. Why centrifugal and not centripetal? In an INERTIAL reference frame, acceleration in circular motion is toward the center, i.e. centripetal. "Inertial reference frame" means, in a sense, imagine that we observe the universe from a stationary camera placed above the rotating object. Then the net force on the object is continually changing direction so as to push the object toward the circle's center. However, if we instead observe the world from the eyes of the rotating object itself, then it seems like we are being pushed away from the center of the circle, i.e. in a centrifugal direction. And if we consider a person rotating at the equator, it makes sense to consider the rotating reference frame; it's more interesting and useful to figure out what the rotating person feels than to figure out what would be observed by a stationary flying saucer over the north pole.. But to have the earth's rotation make a difference of nearly an inch? An inch is significant in pole vaulting! The last time the pole vaulting world record was broken, it was by Ukrainian Sergey Bubka over a ten year stretch from 1984-1994. Each time he broke the previous record, he did so by just one or two centimeters. The question that the comic begs is, should someone aspire to break records, should he compete exclusively in Ecuador rather than in, say, London? Would the location make any difference at all? I made my own order-of-magnitude estimate to check the comic.* The gravitational field due to earth, without reference to rotation, is about 10 m/s^2. That term will be lessened by the "centrifugal" acceleration** v^2/r. First, find v. The radius of the earth is about 6000 km. Multiply by 2π to get the circumference at the equator to be about 40,000 km, which is 40 million m. We go around this circumference in 24 hrs = 80,000 s or so. This gives a speed in the neighborhood of 500 m/s. The "centrifugal" acceleration is then (500 m/s)^2/(6,000,000 m) = 0.04 m/s^2. Compared to the gravitational g of 10 m/s^2, the centrifugal term is, say, four tenths of a percent. 
Now, in the absolutlely simplest model, we might consider a pole vaulter as running at a fast horizontal speed, then launching himself as a projectile with that same speed. The maximum vertical height the vaulter obtains is governed by vertical kinematics, with a known vertical launch velocity v[oy], final vertical velocity of zero, and acceleration of g downward. This max height can be shown to be v[oy]^2 / 2g. Point is, the maximum height depends inversely on the first power of g. So now we reach the end of the story: what happens when we reduce g by a few tenths of a percent? We increase the pole vaulter's maximum height by a few tenths of a percent as well. Bubka's record vault is 6.15 m. Increasing that jump by four tenths of a percent would increase his vault height by... a couple of centimeters. The comic is right. In practice, could Bubka have just gone to Indonesia to add two centimeters to his record? Not exactly. Four tenths of a percent is the difference in the apparent g between the pole and the equator. If Bubka set his record at the 1994 Santa Claus's Merry Elves Invitational, then our analysis is sound. But the farthest north city I can envision holding a major international track meet is, say, Oslo, Norway, at 60 degrees north latitude. In Oslo, the effective g will be less than at the pole, but not an entire 0.4% less. Since the linear speed of someone rotating on the globe drops off from equator to pole as the cosine of the latitude***, in Oslo the effective g is reduced by only 0.1%. The Oslo-Jakarta pole vault differential is more like 1.5 cm, not a full 2 cm. Close enough. Having read all this, my question for you is, who is the more complete nerd? The xkcd author for carrying out this calculation and basing a comic strip on it, or me for checking the accuracy of the *This particular comic is generally quite good about its physics. In fact, I'd be far more comfortable asking the xkcd writers to check me than vice versa. ** xkcd can explain this better than I ever could: http://xkcd.com/123/ *** At the equator, cos (0) = 1, so his speed relative to Earth's center would be 500 m/s; at the pole, cos (90) = 0, so his speed is 0 m/s. In Oslo, his speed is 500 m/s cos (60), or half his speed at the equator, and by the calculation above, the correction to g is one-fourth of the correction at the equator. Q and U from a They Might Be Giants wiki W represents work done on a gas. W cannot increase or decrease. W can be positive or negative; negative W means work is done BY a gas. Q represents heat added to a gas. Q cannot increase or decrease. Q can be positive or negative; negative Q means heat is REMOVED from a gas. U represents internal energy of a gas. U can increase or decrease. ΔU represents the change in internal energy. ΔU cannot increase or decrease. ΔU can be positive or negative; negative ΔU means the internal energy DECREASES. This was all posted to a class folder yesterday, after I graded a problem about a PV diagram and the first law of thermodynamics. The 4 students who used illegal phrasology* such as "W decreases" lost credit. I thought it was worth a classwide reminder with our exam coming up today... * penalty: 5 yards and loss of down A trimester exam -- or in your case, probably, a semester exam -- should be an authentic evaluation of what each student has learned or not learned in your course. I think of the exam much like a playoff football game, or a state track meet. 
It's the culminating experience for the season, showing in black-and-white how well your students have done, and how well you've done teaching them. My friend the football coach points out to me why his job is more stressful than mine. "Put yourself in my shoes," he asks. "Have someone else, not you, administer your exam. Have everyone in the community -- students, parents, alumni, administrators, EVERYONE -- watching, so they see every right answer and every ridiculous answer. And, put up a real-time scoreboard so all can see how your students are doing compared to our rival schools' students. That's my reality, every game, every season." Although I would accept his challenge at the drop of a hat, I'm not recommending competitive examinations for all. Don't worry. I'm putting forth this coach's point of view because for him and his team, every game is a test -- a fair, objective measure of his team's skill and preparation. No one ever complains "that's not fair, the other team threw a pass!" Everyone knows the rules up front. The referee and his crew enforce the rules, but do not give advice on strategy. In football in particular, a bit of scouting by watching film will even let the team know what "topics" will be on their "test" -- will it be "Their quarterback is fast and can throw on the run. How do you defend him?" Or could it be "Their offensive line is huge and mean, but their running back is small. What do you do?" I encourage introductory physics teachers to make the conditions of their exams clear and consistent. The format should not be fundamentally different from the other tests that have been assigned throughout the year. Just as a football team would be thrown for a loop if they were asked to play the playoffs under Canadian rules with a 120 yard field, your students will be less successful on an exam if they see question types that are brand new. My AP tests and exams are all authentically in AP format -- 1.3-minute, no-calculator multiple choice, followed by free response questions of 10-15 minutes each. In general physics this year, all tests and exams include an equal mix of 2-minute, calculator-okay multiple choice; 4-minute "justify your answer" items; and open response items of 2-4 minutes each. No one will have to read the directions on the exam -- they will be able to dive into the physics without worrying about the format. How would a football team react if the official were ambiguous with his decisions and instructions? If he didn't tell anyone he'd started the clock? If he called a penalty, but marked off yardage without an announcement? If he didn't bother to tell anyone what down it was? Perhaps he could say, with some justification, that a high school football team ought to know when they break the rules, and they should be able to keep track of downs. But that official still puts the teams in a situation in which they cannot show off their football prowess. The game becomes an argument about nebulous rules rather than a contest of skill. Similarly, exam questions must be crystal clear. It's not acceptable to ask, "In an ideal gas, what happens to temperature when P goes up?" Sure, perhaps every problem you did in class involved a closed, rigid container, so perhaps you expected the class to assume a constant volume and to understand that P means pressure. But why not write the question clearly? Why make the students "That's the problem, Greg," you might say. 
"I wrote what I thought was a perfectly clear and fair question, but still the students had questions about the question. How am I to know what's clear and what's not? How am I supposed to get my students to develop a sense for the level and difficulty of exam items, and then ensure that the exam matches their expectations?" Here I strongly suggest using some sort of EXTERNAL evaluation. Aim your class from day one toward some sort of test available from someone else. The most common example of such a test is associated with the Advanced Placement program -- the AP exam's topics and difficulty are carefully controlled from year to year. I have hundreds upon hundreds of authentic test items from which I can populate my tests and exams. These questions have already been vetted for clarity, difficulty, and correctness. You don't have to be teaching an AP course to use AP test items! For years I taught a general physics course that covered only about 1/3 of the AP curriculum, but still used authentic AP items on tests and exams. If you want a lower level than AP physics, there are other vetted test banks available. The New York Regents test has been given since the 1930s, and pretty much all published exams are available online. The SAT II physics likewise has released exams, prep books, and topic outlines available publicly. Some colleges release their freshman exams year after year; pick one and follow their lead. No matter which publicly available exam you choose to follow, your testing becomes much more authentic through using that external source. You become less the "bad guy" for writing questions that are unclear, or too hard. Rather, you become like the beloved coach, the one who carefully prepares his team to meet the challenge of the playoffs. "That was a hard exam," they might say. "But it was exactly the kind of exam you had prepared us for. I think I did well." That's what I want to hear.
{"url":"http://jacobsphysics.blogspot.com/2011_03_01_archive.html","timestamp":"2014-04-16T19:05:20Z","content_type":null,"content_length":"147231","record_id":"<urn:uuid:eabccbbf-2497-4e0f-9ac7-16b897f22426>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}