Daly City Calculus Tutor

Find a Daly City Calculus Tutor

...Wish you good luck in your education, since the above quote by Benjamin Franklin is true for all times. Best, Marine

I took 2 Econometrics courses as an undergraduate student at UCI, and received my bachelor's degree upon completing the courses. I have been tutoring the subject since then, for about 3 years.
29 Subjects: including calculus, reading, statistics, geometry

...I am a capable teacher who can intuitively understand both the math problems and the ways in which a student is struggling to comprehend the math. I took AP Calculus BC in 2007 and earned a 5 on the AP test. I have used calculus in my physics courses in high school and college.
27 Subjects: including calculus, chemistry, English, reading

...I am a trained engineer, with an M.S. from UC Berkeley and a B.S. from the University of Illinois at Urbana-Champaign. At UC Berkeley I taught CE100, an introductory fluid mechanics course, for which I received outstanding student reviews. In the past I have also independently tutored engineering graduate students in physics, water chemistry, calculus, and fluid mechanics.
15 Subjects: including calculus, Spanish, geometry, ESL/ESOL

...As an undergrad at Harvey Mudd, I helped design and teach a class on the software and hardware co-design of a GPS system, which was both a challenging and rewarding experience. I offer tutoring for all levels of math and science as well as test preparation. I will also proofread and help with technical writing, as I believe good communication skills are very important.
27 Subjects: including calculus, chemistry, physics, geometry

...My recent four students in SAT Physics and Math Subject tests received 770, 800, 800, and 800 (out of 800)! My very first student needed help to get a B in physics and, with my help, got an A! My very first SAT Physics Subject test student improved his score from 610 to 790 (out of 800) in his ...
8 Subjects: including calculus, physics, geometry, algebra 2
{"url":"http://www.purplemath.com/Daly_City_Calculus_tutors.php","timestamp":"2014-04-19T07:05:17Z","content_type":null,"content_length":"24114","record_id":"<urn:uuid:fe31110b-84a2-4ea8-b1d2-22e49abd73a8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectrum gap of large random weighted semiregular bipartite graph

1. I need a bound for the spectrum gap of a random semiregular ($\ell$, $r$)-bipartite graph. This paper (http://arxiv.org/abs/1212.5216) gives a bound for $\ell$-regular bipartite graphs (with $\ell = r$). But are there similar results for $\ell \leq r$?

2. Assume each edge $(ij)$ in the graph is assigned a random variable $w_{ij}$, with finite mean and finite variance. What is the spectrum gap of the weighted adjacency matrix $W = [w_{ij}]$? Again assume the graph is a random semiregular ($\ell$, $r$)-bipartite graph.

Thank you very much!

Tags: random-matrices graph-theory spectral-graph-theory
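The question is unanswered on the page, but the objects involved are easy to experiment with numerically. Below is a minimal sketch (not an answer): it samples an ($\ell$, $r$)-biregular bipartite graph with a configuration model (which may create multi-edges) and inspects the top two singular values of the biadjacency matrix; the eigenvalues of the full bipartite adjacency matrix are the $\pm$ pairs of these singular values, and $\sqrt{\ell-1}+\sqrt{r-1}$ is printed only as the natural biregular analogue of the Ramanujan-type bound for regular graphs. All parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_biregular(n_left, l, r):
    """Configuration-model (l, r)-biregular bipartite graph.

    Returns the n_left x n_right biadjacency matrix; stub matching
    may create multi-edges, which is fine for a rough experiment.
    """
    assert (n_left * l) % r == 0
    n_right = n_left * l // r
    left_stubs = np.repeat(np.arange(n_left), l)
    right_stubs = np.repeat(np.arange(n_right), r)
    rng.shuffle(right_stubs)
    B = np.zeros((n_left, n_right))
    for i, j in zip(left_stubs, right_stubs):
        B[i, j] += 1.0
    return B

l, r = 3, 6
B = sample_biregular(600, l, r)

# Unweighted case (part 1): the top singular value is exactly sqrt(l*r);
# the "gap" is the distance down to the second singular value.
s = np.linalg.svd(B, compute_uv=False)
print("sqrt(l*r):", np.sqrt(l * r), " top two singular values:", s[0], s[1])
print("Ramanujan-type benchmark sqrt(l-1)+sqrt(r-1):",
      np.sqrt(l - 1) + np.sqrt(r - 1))

# Weighted case (part 2): i.i.d. edge weights with finite mean/variance.
W = B * rng.lognormal(mean=0.0, sigma=0.5, size=B.shape)
sw = np.linalg.svd(W, compute_uv=False)
print("weighted top two singular values:", sw[0], sw[1])
```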
{"url":"http://mathoverflow.net/questions/131701/spectrum-gap-of-large-random-weighted-semiregular-bipartite-graph","timestamp":"2014-04-20T03:37:25Z","content_type":null,"content_length":"46264","record_id":"<urn:uuid:a6cce684-1b07-4912-8e74-fc63bbc8cb06>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Mapping electron density in the ionosphere: a principal component MCMC algorithm

Khorsheed, E., Hurn, M. and Jennison, C., 2011. Mapping electron density in the ionosphere: a principal component MCMC algorithm. Computational Statistics & Data Analysis, 55 (1), pp. 338-352.

Abstract: The outer layers of the Earth's atmosphere are known as the ionosphere, a plasma of free electrons and positively charged atomic ions. The electron density of the ionosphere varies considerably with time of day, season, geographical location and the sun's activity. Maps of electron density are required because local changes in this density can produce inaccuracies in the Navy Navigation Satellite System (NNSS) and Global Positioning System (GPS). Satellite-to-ground-receiver measurements produce tomographic information about the density in the form of path-integrated snapshots of the total electron content, which must be inverted to generate electron density maps. A Bayesian approach is proposed for solving the inversion problem, using spatial priors in a parsimonious model for the variation of electron density with height. The Bayesian approach to modelling and inference provides estimates of electron density along with a measure of uncertainty for these estimates, leading to credible intervals for all quantities of interest. The standard parameterisation does not lend itself well to standard Metropolis-Hastings algorithms, so a much more efficient form of Markov chain Monte Carlo sampler is developed using a transformation of variables based on a principal components analysis of initial output.

Item Type: Articles
Creators: Khorsheed, E., Hurn, M. and Jennison, C.
DOI: 10.1016/j.csda.2010.04.029
Uncontrolled Keywords: tomography, inversion, Bayesian modelling, principal components, Markov chain Monte Carlo, ionospheric mapping
Departments: Faculty of Science > Mathematical Sciences
Refereed: Yes
Status: Published
ID Code: 21435
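The reparameterisation trick the abstract describes — tuning Metropolis-Hastings proposals along principal components estimated from an initial pilot run — can be illustrated on any correlated target. The sketch below is not the paper's algorithm: the 2-D Gaussian target, step sizes, and run lengths are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.95], [0.95, 1.0]])   # strongly correlated toy target
cov_inv = np.linalg.inv(cov)

def log_target(x):
    return -0.5 * x @ cov_inv @ x

def metropolis(n, directions, step_sizes, x0):
    """Random-walk Metropolis proposing along the given directions."""
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    out = np.empty((n, x.size))
    for i in range(n):
        k = rng.integers(len(step_sizes))                 # pick a direction
        prop = x + step_sizes[k] * rng.normal() * directions[:, k]
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:           # accept / reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Pilot run with naive axis-aligned proposals (inefficient here).
pilot = metropolis(5000, np.eye(2), np.array([0.3, 0.3]), [0.0, 0.0])

# Principal components of the pilot output give better proposal
# directions and scales for the main run, as in the abstract.
evals, evecs = np.linalg.eigh(np.cov(pilot.T))
main = metropolis(5000, evecs, 2.4 * np.sqrt(evals), [0.0, 0.0])
print("pilot sample covariance:\n", np.cov(pilot.T))
print("main sample covariance:\n", np.cov(main.T))
```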
{"url":"http://opus.bath.ac.uk/21435/","timestamp":"2014-04-21T10:08:41Z","content_type":null,"content_length":"29816","record_id":"<urn:uuid:50a9bda5-31a0-4497-89bd-a1f1212086db>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
No math gene: learning mathematics takes practice What makes someone good at math? A love of numbers, perhaps, but a willingness to practice, too. And even if you are good at one specific type of math, you can’t trust your innate abilities enough to skip practicing other types if you want to be good. New research at the Norwegian University of Science and Technology (NTNU) in Trondheim could have an effect on how math is taught. Not born with it If you want to be really good at all types of math, you need to practice them all. You can’t trust your innate natural talent to do most of the job for you. This might seem obvious to some, but it goes against the traditional view that if you are good at math, it is a skill that you are simply born with. Professor Hermundur Sigmundsson at Department of Psychology is one of three researchers involved in the project. The results have been published in Psychological Reports. The numbers
The researchers tested the math skills of 70 Norwegian fifth graders, aged 10.5 years on average. Their results suggest that it is important to practice every single kind of math subject to be good at all of them, and that these skills aren't something you are born with.

"We found support for a task specificity hypothesis. You become good at exactly what you practice," Sigmundsson says. Nine types of math tasks were tested, from normal addition and subtraction, both orally and in writing, to oral multiplication and understanding the clock and the calendar.

"Our study shows little correlation between (being good at) the nine different mathematical skills," says the researcher. "For instance there is little correlation between being able to solve a normal addition in the form of '23 + 67' and addition in the form of a word problem."

This example might raise a few eyebrows. Perhaps basic math is not a problem for the student, but the reading itself is. Up to 20 percent of Norwegian boys in secondary school have problems with reading.

Struggles with algebra

Sigmundsson also finds support in everyday examples. "Some students will be good at geometry, but not so good at algebra," he says. If that is the case, they have to practice more algebra, which is the area where most students in secondary school have problems. "At the same time this means there is hope for some students. Some just can't be good at all types of math, but at least they can be good at geometry, for example," he says. It is this finding that might in the end help change the way math is taught.

Support in neurology

That you become good at precisely what you practice is probably because different kinds of practice activate different neural connections. The results can also be transferred to other areas. The football player who practices hitting the goal from 25 yards with a perfectly placed shot will become good at exactly this. But she is not necessarily good at tackling or reading the game. "This is also supported by new insights in neurology. With practice you develop specific neural connections," says Sigmundsson.

The research has been carried out in cooperation with Professor Remco C. J. Polmand at Victoria University, Melbourne, Australia and PhD candidate Håvard Lorås at the Faculty of Health Education and Social Work, Division of Physiotherapy, Sør-Trøndelag University College, Trondheim.
{"url":"http://sciencenordic.com/no-math-gene-learning-mathematics-takes-practice","timestamp":"2014-04-19T22:28:29Z","content_type":null,"content_length":"49024","record_id":"<urn:uuid:49738dd2-c491-48cc-9cac-44862b421e97>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Lorentz group

Lorentz group - Wikipedia, the free encyclopedia
Thus, the Lorentz group is an isotropy subgroup of the isometry group of Minkowski spacetime. The Lorentz group is a 6-dimensional noncompact Lie group which is not connected, and whose connected components are not simply connected. The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts (which can be thought of as hyperbolic rotations in a plane that includes a time-like direction).
en.wikipedia.org/wiki/Lorentz_group (3220 words)

Lorentz transformation - Wikipedia, the free encyclopedia
The Lorentz transformation is a group transformation that is used to transform the space and time coordinates (or in general any four-vector) of one inertial reference frame, S, into those of another one, S′, with S′ traveling at a relative speed of v to S along the x-axis. Lorentz believed the luminiferous aether hypothesis; it was Albert Einstein who developed the theory of relativity to provide a proper foundation for its application. The Lorentz transformations were first published in 1904, but their formalism was at the time imperfect.
www.newlenox.us/project/wikipedia/index.php/Lorentz_transformation_equations (792 words)

Lorentz group -- Facts, Info, and Encyclopedia article
The Lorentz group is the group of all Lorentz transformations of Minkowski spacetime. Mathematically, the Lorentz group is the generalized orthogonal group O(1, 3) (or O(3, 1), depending on the sign convention). The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts (which can be thought of as hyperbolic rotations in a plane that includes a time-like direction).
www.absoluteastronomy.com/encyclopedia/L/Lo/Lorentz_group.htm (844 words)

Lorentz group
The Lorentz group is the group of all Lorentz transformations of Minkowski spacetime. It is the subgroup of the Poincaré group consisting of all isometries that leave the origin fixed. The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts, which can be thought of as rotations in a plane that includes a time-like direction.
www.sciencedaily.com/encyclopedia/lorentz_group (547 words)

Lorentz group
The Lorentz group is a subgroup of the Poincaré group, the group of all isometries of Minkowski spacetime. The Lorentz group is a 6-dimensional noncompact Lie group which is not connected, and whose connected components are not simply connected. The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts (which can be thought of as hyperbolic rotations in a plane that includes a time-like direction).
www.witwib.com/Lorentz_group (3757 words)

Special relativity - Encyclopedia.WorldSearch
Lorentz suggested an aether theory in which objects and observers travelling with respect to a stationary aether underwent a physical shortening (Lorentz-Fitzgerald contraction) and a change in temporal rate (time dilation). While Lorentz suggested the Lorentz transformation equations, Einstein's contribution was, inter alia, to derive these equations from a more fundamental principle without assuming the presence of an aether. It is also worth noting that Maxwell's equations, combined with the Lorentz force law, can also be used to mathematically demonstrate several consequences of special relativity such as Lorentz contraction and time dilation, at least for rulers and clocks which operate via electromagnetic forces.
encyclopedia.worldsearch.com/special_relativity.htm (5220 words)

Henri Poincaré, great mathematician
Lorentz group: the "Poincaré group", a terminology favoured by Wightman, in honour of his contributions. The group thus generated is now known as the conformal group. This, of course, is not an invariance group for particles of non-zero mass. Pais says that Lorentz and Poincaré did not regard Einstein's paper as the final word, since it just postulates what we would want to be true (the constancy of the speed of light) but did not construct a theory in which it was true.
www.mth.kcl.ac.uk/~streater/poincare.html (630 words)

Special relativity
The theory, known as Lorentz Ether Theory (LET), was criticized, even by Lorentz himself, because of its ad hoc nature. While Lorentz suggested the Lorentz transformation equations, Einstein's contribution was, inter alia, to derive these equations from a more fundamental theory, which did not require the presence of an aether. Under special relativity, the seemingly complex transformations of Lorentz and Fitzgerald derived cleanly from simple geometry and the Pythagorean theorem.
www.sciencedaily.com/encyclopedia/special_relativity (2286 words)

Lorentz group
The Lorentz group is a subgroup of the Poincaré group. The Lorentz group is generated by rotations and Lorentz boosts. The text of this article is licensed under the GFDL.
www.ebroadcast.com.au/lookup/encyclopedia/lo/Lorentz_group.html (52 words)

Phase Space Picture
The Lorentz group is known to be a difficult subject to mathematicians, because it is a non-compact group. The groups Sp(2) and Sp(4) are locally isomorphic to the (2 + 1)-dimensional and (3 + 2)-dimensional Lorentz groups. Since we are combining the Wigner function with group theory, we have reprinted in the Appendix Wigner's 1932 paper on the Wigner function as well as his 1939 paper on the representations of the inhomogeneous Lorentz group.
www2.physics.umd.edu/~yskim/home/book91.html (2134 words)

Group Theory & Rubik's Cube
Group theory is the study of the algebra of transformations and symmetry. Given a group G with a subgroup H={h1,h2,...}, the "left coset" of H corresponding to an element x of G is defined as the set {x h1, x h2, x h3, ...}. A representation of a group G is a set of matrices M which are homomorphic to the group.
akbar.marlboro.edu/~mahoney/courses/Spr00/rubik.html (3602 words)

CMS group: Lorentz angle of silicon detectors
The Lorentz angle has to be determined for silicon detectors in high energy physics. The Lorentz angle is the angle under which charge carriers are deflected in a magnetic field perpendicular to the electric field.
In Karlsruhe a comprehensive study of the Lorentz angle of non-irradiated and irradiated silicon detectors was performed.
www-ekp.physik.uni-karlsruhe.de/cms/lorentz/index.html (486 words)

Symmetry and Symmetry Breaking
The use of the mathematics of group theory to study physical theories was central to the work, early in the twentieth century in Göttingen, of the group whose central figures were F. Klein (who earlier collaborated with Lie) and D. Hilbert. The extension of the concept of continuous symmetry from "global" symmetries (such as the Galilean group of spacetime transformations) to "local" symmetries is one of the important developments in the concept of symmetry in physics that took place in the twentieth century. It is therefore possible to describe symmetry breaking in terms of relations between transformation groups, in particular between a group (the unbroken symmetry group) and its subgroup(s).
plato.stanford.edu/entries/symmetry-breaking (9818 words)

[Untitled Mathematica package]
(* :Title: Lorentz Group *)
(* :Author: Jeff Olson *)
(* :Summary: Extends the definitions in ClassicalGroups to include the Lorentz group. *)
"... LorentzGroup[omega, zeta]."
LorentzGroupQ::usage = "LorentzGroupQ[lambda] determines whether lambda is an element of the Lorentz group."
LorentzInverse::usage = "LorentzInverse[lambda] gives the inverse transformation for lambda. LorentzInverse is more efficient than Inverse for Lorentz transformations."
LorentzRotation::usage = "LorentzRotation gives an element of the rotation subgroup of the proper, orthochronous Lorentz group."
www.ph.utexas.edu/~jdolson/math/LorentzGroup.m (136 words)

Biblioteca Virtual Leite Lopes
The study of the finite-dimensional representations (irreducible) of the proper Lorentz group yields the result that the field variables can only be scalars, spinors, four-vectors, and tensors and spinors of higher rank. We remark that while the transformations U(L), which constitute an infinite-dimensional representation of the Lorentz group, can be taken as unitary, the transformations such as L, D, which transform the (finite-dimensional) 4-vector and spinor space into themselves respectively, cannot be unitary. The latter are seen to be the infinitesimal operators which determine the infinite-dimensional representations (unitary) of the inhomogeneous Lorentz group.
www4.prossiga.br/lopes/prodcien/inversionoperations/inver1-1.html (489 words)

Can You See the Lorentz-Fitzgerald Contraction?
The appearance of "half angles" p/2 is characteristic of the two-fold "spinorial covering" of the Lorentz group by SL(2,C). For instance, matrices of the second form above yield a "one parameter subgroup" of SL(2,C) as we let p vary, which "covers" the one parameter subgroup SO(2) of PSL(2,C). PSL(2,C), the Lorentz group, and the Moebius group are all isomorphic as abstract groups. Lorentz transformations may be classified into four types according to their geometric effect on the night sky:
math.ucr.edu/home/baez/physics/Relativity/SR/penrose.html (1653 words)

Jones-matrix formalism as a representation of the Lorentz group
It is shown that the two-by-two Jones-matrix formalism for polarization optics is a six-parameter two-by-two representation of the Lorentz group.
The attenuation and phase-shift filters are represented, respectively, by the three-parameter rotation subgroup and the three-parameter Lorentz group for two spatial dimensions and one time dimension.
imaqs.uh.edu/kasa/paper/1997/sci/han.d.1997.2.html (106 words)

The Lorentz Group Using Rotations and Dilations
The approach is too general, and must be restricted to graft the results to the Lorentz group. This is the general form of the Lorentz transformation presented by Möller. Real quaternions are used in a rotation and a dilation to perform the work of the Lorentz group.
world.std.com/~sweetser/quaternions/relativity/rotationdilation/rotationdilation.html (685 words)

Quantum field theory - Wikibooks
3.1 Relativity principle and the group of coordinate transformations. Any quantity which transforms like the space-time coordinates under Lorentz transformation is defined as a four-vector. Relativity principle and the group of coordinate transformations.
en.wikibooks.org/wiki/Quantum_field_theory (487 words)

ABSTRACTS OF PAPERS: ERICH W. ELLERS
Hermitian presentations of Chevalley groups I: We give a presentation for a Chevalley group arising from a Hermitian Lie algebra whose roots all have the same length. The Lorentz group Omega(V) is bireflectional and all involutions in Omega(V) are conjugate. Let G be a simple and simply-connected algebraic group that is defined and quasi-split over a field K. We investigate properties of intersections of Bruhat cells of G with conjugacy classes C of G; in particular, we consider the question, when is such an intersection not empty?
www.math.toronto.edu/ellers/abstracts.html (817 words)

[Untitled newsgroup discussion]
This doesn't happen in the relativistic case because elements of D (spinors) are not left unchanged by the action of the Lorentz group. When I said "unitary action of the Lorentz group," I meant the (cover of the) Poincare group, which is the semi-direct product of the Lorentz group with spacetime translations. A Lorentz transformation takes this to 4-momentum (p^0, p^1, p^2, p^3) subject to the constraint (p^0)^2 - [(p^1)^2 + (p^2)^2 + (p^3)^2] = m^2, i.e., the 4-momentum of the electron must stay on the mass hyperboloid.
www.math.niu.edu/~rusin/known-math/01_incoming/QFT (2052 words)

Seeing all Six Dimensions of the Lorentz Group of Special Relativity, in the Planetarium Sky
The transformation is the exponential of an infinitesimal Lorentz transformation, times a steadily increasing parameter; so the figure is actually a stage for a movie of a 1-parameter group of Lorentz transformations. Four of the six dimensions of the Lorentz group are used up in telling where in the sky to place the two poles, as each place is worth two parameters. So you are indeed "Seeing all Six Dimensions of the Lorentz Group of Special Relativity, in the Planetarium Sky": four dimensions directly in the figure, and two more in your mind's eye by imagining a constant flow on the Mercator map suggested by the figure's latitudes and longitudes.
www.uwm.edu/~eli/thisfigure.html (713 words)

Groups in GR - Physics Help and Math Help - Physics Forums
Is the gauge group for gravity defined as the group of all possible Weyl tensors on a general 4D Riemann manifold? The equivalent group is the group of continuous transformations that locally look like Lorentz transformations (essentially a Lorentz transformation for each point in space-time, such that the mapping from space-time to Lorentz transformations is continuous).
I've been told elsewhere that the gauge group for gravity is the general diffeomorphism group (i.e. ...).
www.physicsforums.com/showthread.php?t=12518 (710 words)

In the Mind's Eye
Now there are a number of mathematical ways to represent the Lorentz group. Those representations that cannot be decomposed into subgroups that also represent the Lorentz group are called irreducible representations of the Lorentz group. What it means is that the irreducible representations of the basic symmetries of our universe give rise to the possibilities of all the sub-atomic particles, and thus to all the matter, in the ...
www.dogchurch.com/tract/lorentzgroup.html (675 words)

This is the only book on the subject of group theory and Einstein's theory of gravitation. The first six chapters are devoted to rotation and Lorentz groups, and their representations. The entire book is self-contained in both group theory and general relativity theory, and no prior knowledge of either is assumed.
www.worldscibooks.com/physics/p199.html (338 words)

Quantum Gravity Concept Map - Symmetries
When the angular momentum of any of these fermionic quanta is parallel to their momentum they are "left-handed" and transform as an "isospin doublet" (the up and down quarks form a doublet, as do the electron and neutrino). The use of gauge bosons as intermediaries in nonlocal interactions serves to restore the local nature of the theory. The parameter spaces of the "unitary" groups (U(1), SU(2) and SU(3)) are "isomorphic to" (have a one-to-one correspondence with) the circle, the sphere (a surface!) and the "three-sphere" (not a ...
www.rwc.uc.edu/koehler/qg/sym.html (593 words)

Lorentz Group Lie Algebra Map of Ultra-Relativistic Radiating Electron
The well-known high-accuracy integrator scheme, the symplectic integrator, cannot guarantee its calculation accuracy for the radiating electron orbit, because the symplectic integrator is valid for Hamiltonian systems and the radiating electron motions are not Hamiltonian. In this paper, it is shown that one can obtain highly accurate numerical solutions to the radiating ultra-relativistic electron motion, based on the Lorentz group Lie algebra concept, and the concrete numerical calculation scheme is constructed. After the theoretical discussions of the Lorentz group Lie algebra and some practical numerical calculation techniques, the accuracy of the presented method is checked using some numerical ...
flux.aps.org/meetings/YR97/BAPSPAC97/abs/S510086.html (219 words)
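Several of the excerpts above describe boosts as "hyperbolic rotations" that preserve the Minkowski metric; that property is easy to verify numerically. A minimal sketch (the rapidity value is arbitrary):

```python
import numpy as np

# A boost is a "hyperbolic rotation": it satisfies L^T eta L = eta for
# the Minkowski metric eta, just as a rotation R satisfies R^T I R = I.
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (1, 3)

phi = 0.7                                # rapidity (arbitrary value)
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)         # boost along x: hyperbolic
L[0, 1] = L[1, 0] = -np.sinh(phi)        # "rotation" in the t-x plane

print(np.allclose(L.T @ eta @ L, eta))   # True: L is in O(1, 3)
print(np.linalg.det(L))                  # 1.0: in the restricted group
```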
{"url":"http://www.factbites.com/topics/Lorentz-group","timestamp":"2014-04-20T14:06:57Z","content_type":null,"content_length":"47011","record_id":"<urn:uuid:2626c341-f9cf-45a6-aef3-74e3a380ea33>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Ratio involving 2nd order numerical derivatives
Replies: 0

Posted by Paul on Oct 1, 2012 6:20 AM (Posts: 385, Registered: 7/12/10)

I would like to understand the maths behind the mathworks implementation of the Hurst exponent. That is perhaps more involved than it sounds, because my main problem is to interpret the meaning behind the process of taking a ratio involving different 2nd order numerical derivatives.

The mathworks wfbmesti function seems to involve the following: Let y be a function of t, where t represents time. Consider numerical computations of the 2nd derivative of y. Let s2 be the second derivative of y computed by central differences with a timestep of 2*delta_t. Let s1 be the second derivative of y computed by central differences with a timestep of delta_t. s2 and s1 are both evaluated at every timestep (except for some points at or near t = 0 where the data is not available). The Hurst exponent is 0.5 * log to base 2 of ( 16 * mean(s2 ^ 2) / mean(s1 ^ 2) ).

Could anyone explain the maths behind this methodology? A good starting point would be to understand the idea behind the ratio for which the log is taken. I understand all the definitions involved and I also understand what the end result (Hurst exponent) indicates. I don't understand why the above algorithm leads to a computation of the Hurst exponent, or the physical intuition behind the above algorithm.

Thank you very much for your help,

Paul Epstein
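Not an explanation of the "why", but the recipe described above is short enough to reproduce and sanity-check. The sketch below tests it on ordinary Brownian motion, whose true Hurst exponent is 0.5, since that is trivial to simulate (path length and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
y = np.cumsum(rng.normal(size=100_000) * np.sqrt(dt))  # Brownian path

def second_derivative(y, k, dt):
    """Central-difference 2nd derivative of y with timestep k*dt."""
    return (y[2 * k:] - 2 * y[k:-k] + y[:-2 * k]) / (k * dt) ** 2

s1 = second_derivative(y, 1, dt)   # timestep delta_t
s2 = second_derivative(y, 2, dt)   # timestep 2*delta_t

H = 0.5 * np.log2(16 * np.mean(s2**2) / np.mean(s1**2))
print(H)   # approximately 0.5 for Brownian motion
```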
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2405903","timestamp":"2014-04-19T12:15:16Z","content_type":null,"content_length":"14853","record_id":"<urn:uuid:df35bf44-7beb-4b30-9eeb-84a9aec904d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
2009; 420 pp; softcover
Number: 328
ISBN-10: 2-85629-289-5
ISBN-13: 978-2-85629-289-1
List Price: US$135
Member Price: US$108
Order Code: AST/328

This is the second of two volumes that contain original research articles submitted by colleagues and friends to celebrate the 60th birthday of Jean-Michel Bismut. These articles cover a wide range of subjects in probability theory, global analysis, and arithmetic geometry to which Jean-Michel Bismut has made fundamental contributions.

A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico. Orders from other countries should be sent to the SMF. Members of the SMF receive a 30% discount from list.

Readership: Graduate students and research mathematicians interested in geometry.

Table of Contents:
• J. Brüning -- The signature operator on manifolds with a conical singular stratum
• U. Bunke and T. Schick -- Smooth $K$-theory
• H. Gillet and F. M. Ünlü -- An explicit proof of the generalized Gauss-Bonnet formula
• S. Goette -- Torsion invariants for families
• F. R. Harvey and H. B. Lawson, Jr. -- Boundaries of positive holomorphic chains and the relative Hodge question
• K. Liu and H. Xu -- Mirzakhani's recursion formula is equivalent to the Witten-Kontsevich theorem
• V. Maillot and D. Rössler -- Formes automorphes et théorèmes de Riemann-Roch arithmétiques
• V. Mathai, R. B. Melrose, and I. M. Singer -- The index of projective families of elliptic operators: the decomposable case
• P.-É. Paradan and M. Vergne -- Index of transversally elliptic operators
• S. T. Paul and G. Tian -- CM stability and the generalized Futaki invariant II
• K.-i. Yoshikawa -- Calabi-Yau threefolds of Borcea-Voisin, analytic torsion, and Borcherds products
{"url":"http://www.ams.org/bookstore?fn=20&arg1=astseries&ikey=AST-328","timestamp":"2014-04-16T19:23:16Z","content_type":null,"content_length":"15933","record_id":"<urn:uuid:57575b58-b9be-4bc3-8349-95e8a349f523>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
First Grade Math: Number and Number Sense

Here are some books that can assist with the instruction of numbers and number sense. All of them are easy to read and provide excellent images to help students obtain a better understanding of numbers and their relation to place value.

One Hundred Hungry Ants, written by Elinor J. Pinczes and illustrated by Bonnie Mackan, is a story about one hundred ants on their way to eat the food at a picnic. While traveling to the picnic, one ant decides they will get there much faster if they divide into two rows of fifty. After walking for a short time the ant decides they should divide into four rows of twenty-five, then five rows of twenty, and finally ten rows of ten. This book is a good resource for a lesson introducing base-10 blocks.

Greater Estimations by Bruce Goldstone is a picture book that asks students to estimate how many items are in each picture. The beginning of the book shows rubber ducks in groups of ten and all the ducks lined up in a row of one hundred. This book also uses popcorn kernels and groups of sky divers to give students a better understanding of number sense.

Millions of Cats by Wanda Gag is a story about a man who ventures out into the countryside to find a cat for his wife. Once there he finds hundreds and thousands of cats and decides to bring them all home. The cats get to his house and start to fight over which one of them is the prettiest, and after all the fighting only one cat is left. This book is perfect for showing students how big and small numbers can be, and the language is very easy to read.

More M&M's Math, written by Barbara Barbieri McGrath and illustrated by Roger Glass, asks students to drop the candies out of their bag and use a graph to count the number of each color they have. This book is excellent for teaching ordinal numbers.

Sir Cumference and All The King's Tens, written by Cindy Neuschwander and illustrated by Wayne Geehan, is a story about Sir Cumference and his wife Lady Di preparing a surprise birthday party for King Arthur. But things get out of hand when so many guests show up that they have trouble counting them all. They decide to make the guests stand in rows and columns to make them easier to count. The guests are then placed in tents, with each tent representing a place value. This is a great book to read to students before a lesson on place value.

Websites for Kids

Hacker's Numbers is an interactive online game that challenges students to make larger numbers than Hacker. The student must place a number in the hundreds, tens and ones. This game allows students to practice place value.

The Cats in Line is an online activity that asks students to identify the ordinal number of the orange cat in a line of gray cats. This site is good for helping students gain an understanding of ordinal numbers and their relation to a set of objects.

The "Less Than" Lake Maze is a game that challenges students to help a monster cross a lake by jumping from one numbered stone to another stone with a lower number on it. If the students move to a larger number, the monster falls in the lake.

Guess the Number is a game where the students can pick a number range (i.e., one to fifty) and then guess which number the computer has selected. With each turn the computer tells the student higher or lower and then provides a smaller range. The object of the game is to see how many turns it takes the student to guess the right number.

Enter Your Number is an interactive online math activity that allows students to enter any number they want and then have the computer tell them the place values of the digits within the number. This site also has the option of generating a number for the student, to challenge them.

Additional Resources

Bring It is an awesome online resource to support instruction for teaching one-to-one correspondence and other early elementary math skills such as addition, subtraction and even skip counting. This activity also offers a two-player mode for students working in teams.

Estimation Exploration is an offline activity that asks students to estimate the number of items in a jar or other container. This activity assists students with gaining number sense and uses physical objects such as shells, jelly beans or foam balls.

Counting Votes is another offline activity where the teacher asks students to help him/her create a list of vegetables on a large piece of paper. Then, using small cups and counter chips, the students get to vote for which vegetables are their favorites. The teacher writes the number of votes next to each vegetable, and the students get to count the total number of votes for each vegetable.
{"url":"http://blog.richmond.edu/openwidelookinside/archives/2683","timestamp":"2014-04-19T14:46:35Z","content_type":null,"content_length":"35393","record_id":"<urn:uuid:c818ca0b-ecba-4851-ad38-983ff70ca06a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
The encyclopedic entry of rhombus

In geometry, a rhombus (from Ancient Greek ῥόμβος - rhombos, "rhombus, spinning top") (plural rhombi or rhombuses), or rhomb (plural rhombs), is an equilateral quadrilateral. In other words, it is a four-sided polygon in which every side has the same length. The rhombus is often casually called a diamond, after the diamonds suit in playing cards, or a lozenge, because those shapes are rhombi (though not all rhombi are actually diamonds or lozenges).

The area of any rhombus is the product of the lengths of its diagonals divided by two:

$\mathrm{Area} = \frac{D_1 \times D_2}{2}$

Because the rhombus is a parallelogram, the area also equals the length of a side ($B$) multiplied by the perpendicular distance between two opposite sides ($H$):

$\mathrm{Area} = B \times H$

The area also equals the square of the side multiplied by the sine of any of the exterior angles:

$\mathrm{Area} = a^2 \sin\theta$

where $a$ is the length of the side and $\theta$ is the angle between two sides.

One of the five 2D lattice types is the rhombic lattice, also called the centered rectangular lattice.

A proof that the diagonals are perpendicular

Let A, B, C and D be the vertices of the rhombus, labelled in order around it. Using $\overrightarrow{AB}$ to represent the vector from A to B, one notices

$\overrightarrow{AC} = \overrightarrow{AB} + \overrightarrow{BC}$

$\overrightarrow{BD} = \overrightarrow{BC} + \overrightarrow{CD} = \overrightarrow{BC} - \overrightarrow{AB}$.

The last equality comes from the parallelism of CD and AB. Taking the inner product of the diagonals,

$\overrightarrow{AC} \cdot \overrightarrow{BD} = (\overrightarrow{AB} + \overrightarrow{BC}) \cdot (\overrightarrow{BC} - \overrightarrow{AB}) = \overrightarrow{BC} \cdot \overrightarrow{BC} - \overrightarrow{AB} \cdot \overrightarrow{AB} = 0$

since the norms of AB and BC are equal and the inner product is bilinear and symmetric. The inner product of the diagonals is zero if and only if they are perpendicular.

The word rhombus is from the Greek word for something that spins. Euclid used ρόμβος (rhombos), from the verb ρέμβω (rhembo), meaning "to turn round and round". Archimedes used the term "solid rhombus" for two right circular cones sharing a common base. A tiling of the plane by rhombi is also called a tessellation.
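Since the three area formulas must agree, a quick numeric check makes the entry concrete. The side length and angle below are chosen arbitrarily.

```python
import math

# Rhombus with side a = 5 and interior angle theta = 60 degrees
# (arbitrary example values).
a = 5.0
theta = math.radians(60)

# The diagonals bisect the vertex angles:
# D1 = 2a*cos(theta/2), D2 = 2a*sin(theta/2).
D1 = 2 * a * math.cos(theta / 2)
D2 = 2 * a * math.sin(theta / 2)

# Height between two opposite sides: H = a*sin(theta); base B = a.
B, H = a, a * math.sin(theta)

print(D1 * D2 / 2)             # diagonal formula
print(B * H)                   # base times height
print(a**2 * math.sin(theta))  # a^2 sin(theta) -- all three agree
```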
{"url":"http://www.reference.com/browse/rhombus","timestamp":"2014-04-24T09:01:16Z","content_type":null,"content_length":"82464","record_id":"<urn:uuid:ba2a9b51-7557-407e-ae0e-80d950bd03bf>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate Acreage

Original post by Lee Grayson of Demand Media

The most accurate, and most expensive, way to determine the acreage of a parcel is to hire a surveyor to make the formal determination. The easiest way to find the parcel acreage is to look at the tax assessment or property title report; both provide a fairly accurate report of the acreage of a parcel. If this information isn't available, or you suspect that the reports have major errors, determining the acreage using an online calculator is fast and easy. People without access to the Internet need to do some "old school"-style calculations using paper and pencil to figure out the total acreage of a parcel.

Online Calculator Approach for Regular Parcels

Step 1: Locate an online acreage calculator on the Internet by using the search terms "acreage calculator."
Step 2: Type the width in feet of the parcel in the blank space for "width."
Step 3: Insert the length in feet of the parcel, from your property tax or title reports, in the blank space for "length."
Step 4: Click on the webpage icon labeled "submit," "calculate" or "find acreage" to return the amount of total acreage.

Online Calculator Approach for Irregular Parcels

Step 1: Locate an online acreage calculator allowing the use of a diagonal distance or directional degree on the form by conducting an Internet search using the search terms "acreage calculator."
Step 2: Locate at least one of the diagonal distances or directional degrees for one angle from the information listed on the title report or county property tax records describing the parcel. The county property description typically includes a narrative with a starting direction for the first parcel measurement, then the number of degrees and a statement of the next measurement. Use any of the parcel degrees with the online calculator.
Step 3: Insert the distance measurements for the sides of the property parcel into the online form, using meters, feet or yards.
Step 4: Enter one of the property degree angles or a diagonal distance on the online form.
Step 5: Click on the page icon labeled "recalculate," "calculate" or "area total" to receive the total acreage for the parcel.

Old-School Approach

Step 1: Locate the width and length of the parcel from the property tax or title report.
Step 2: Multiply the parcel's width in feet by its length in feet. This calculation provides the net square footage.
Step 3: Divide the total net square footage by 43,560 -- the square footage of an acre -- to get the total acreage of the parcel.

Graph Paper Approach

Step 1: Locate a copy of the parcel map made by an engineer or surveyor. County or city assessor offices and title reports typically provide maps for use in this graph-paper approach to calculating acreage.
Step 2: Place the graph paper over the map and trace the outside boundaries of the parcel.
Step 3: Find the scale of the map by looking at the scale listed. The scale typically uses 1 inch to represent 100 feet, but the map may use another representation.
Step 4: Determine the real-world size of each grid square on your graph paper. Measure the side length of a grid square and multiply this figure by the scale of the map; the result is the real-world length represented by one side of the square (measured length of the grid square x scale of map = scaled length of square). Squaring this scaled length gives the area represented by each grid square.
Step 5: Fill in the grid squares covering the parcel. Fill in a grid square completely where the property covers the entire square; the grid squares in the center of the property typically get full marks. The grid squares around the parcel's edges require partial fill marks: if the property covers about one-quarter of a grid square, mark the square "1/4"; mark squares covered about one-half with "1/2"; label squares filled about three-quarters with "3/4."
Step 6: Complete a chart for the grid squares. Label the top row "full," "3/4," "1/2" and "1/4," and count the number of squares on your graph paper matching each label. A paper with 30 complete squares, for example, shows this number under the column labeled "full."
Step 7: Weight the counts in your chart. Multiply the number under the column labeled "full" by 1, the number under "3/4" by 0.75, the number under "1/2" by 0.50, and the number under "1/4" by 0.25. These figures represent the amount of land in the grid squares.
Step 8: Add the column totals from Step 7. This figure represents the total (equivalent full) grid squares filled on the graph paper.
Step 9: Convert the grid information from Step 8 into acreage. Multiply the number of filled grid squares calculated in Step 8 by the area of a grid square from Step 4 to find the total area of the parcel ((number of squares) x (area of a grid square) = total area), then divide by 43,560 square feet per acre; see the worked sketch at the end of this article.

Tips & Warnings
• Several states, including Michigan, offer information for land parcels online. Most link this information from the city or county assessor's webpage.

Things Needed
• Parcel dimensions
• Internet access
• Calculator
• Graph paper

About the Author
Lee Grayson has worked as a freelance writer since 2000. Her articles have appeared in publications for Oxford and Harvard University presses and research publishers, including Facts On File and ABC-CLIO. Grayson holds certificates from the University of California campuses at Irvine and San Diego.

Photo Credits
• Hemera Technologies/Photos.com/Getty Images
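The arithmetic in the old-school and graph-paper approaches is easy to script; here is a worked sketch (all dimensions, scale values, and grid counts are made-up examples):

```python
# Worked sketch of the two offline acreage methods described above.
# All example numbers are invented.

SQFT_PER_ACRE = 43_560

# Old-school approach: width x length in feet, divided by 43,560.
width_ft, length_ft = 330.0, 660.0
print(width_ft * length_ft / SQFT_PER_ACRE)       # -> 5.0 acres

# Graph-paper approach: weight full and partial grid squares, then
# multiply by the real-world area one grid square represents.
counts = {1.0: 30, 0.75: 8, 0.5: 6, 0.25: 4}      # fill fraction -> squares
filled = sum(frac * n for frac, n in counts.items())

map_scale_ft_per_in = 100.0   # map scale: 1 inch = 100 feet
grid_side_in = 0.25           # measured side of one grid square, inches
grid_area_sqft = (grid_side_in * map_scale_ft_per_in) ** 2

print(filled * grid_area_sqft / SQFT_PER_ACRE)    # -> about 0.57 acres
```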
{"url":"http://wiki.fool.com/How_to_Calculate_Acreage","timestamp":"2014-04-19T22:09:32Z","content_type":null,"content_length":"57486","record_id":"<urn:uuid:7dde12cf-1824-413a-9eb0-972b4055da99>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Tewksbury Algebra Tutor

Find a Tewksbury Algebra Tutor

I am a great tutor. Proof is that all three of my boys -- now 40, 38, and 30 -- did very well through my tutoring in high school and beyond. I strongly believe in life-long learning; that's why I applied as an Engineering Master with my company VTech in 2004, when I was 63, to study at Suffolk University for my MBA, where I graduated in 2006 as BEST IN CLASS. -Rolf S.
28 Subjects: including algebra 1, reading, English, physics

...Through this work I assist them in determining the best path for their ultimate career and academic options. I also assist students in writing highly literate, targeted personal statements, ensuring that they are presenting an engaged and authentic self to admissions officers. I am a consulting...
41 Subjects: including algebra 1, chemistry, English, reading

...I find real-life examples and a crystal-clear explanation are crucial for success. My schedule is flexible, as I am a part-time graduate student. I am new to Wyzant but very experienced in tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that. I was a swim teacher for 8 years at swim facilities and summer camps.
19 Subjects: including algebra 1, algebra 2, chemistry, Spanish

...I also worked as a teaching assistant and ran a series of classes during MIT's Independent Activities Period (IAP) for other students, faculty and staff. I can meet near Alewife, Harvard or MIT, and at your house by previous arrangement. I have presented before groups as large as 4,500, led course...
63 Subjects: including algebra 2, GRE, English, writing

...I was a GMAT instructor for Princeton Review and Kaplan. As someone who has had to juggle many subjects, papers, projects, tests, and deadlines in my undergraduate and graduate studies, I am delighted to help and have helped numerous students throughout the years to improve their study effectivene...
67 Subjects: including algebra 1, algebra 2, English, calculus
{"url":"http://www.purplemath.com/Tewksbury_Algebra_tutors.php","timestamp":"2014-04-18T21:35:39Z","content_type":null,"content_length":"23956","record_id":"<urn:uuid:86bbd4d6-f0d1-47b8-b716-29322368a6e7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Category and Measure

James Hirschorn James.Hirschorn at univie.ac.at
Mon Sep 17 15:36:31 EDT 2007

On Monday 17 September 2007 03:32, joeshipman at aol.com wrote:
> But, can someone explain what's so useful about meager sets when
> working with a measure space like the real numbers?
> In other words, what kinds of results of (ordinary real) analysis can
> be proven with arguments about category but not with arguments about
> measure? (The more elementary the statement of the *result*, the better
> -- the *proofs* don't have to be elementary.)

I'm not sure exactly what counts as a "result of analysis", since you are clearly not interested in results specifically about category, e.g. the Baire Category Theorem. But perhaps the following example is relevant:

Let a_n and b_n be sequences of real numbers, indexed by the natural numbers. Call them *similar* if one can be obtained from the other by translation and dilation, i.e. a_n = r * b_n + s for all n, for some fixed reals r and s. Consider the following statement:

"Given a bounded sequence a_n of reals, every Borel set of reals that is not 'small' contains a sequence similar to a_n."

This statement with 'small' interpreted as meager is a theorem of Erdos. The last I heard (several years ago) it is an open problem for 'small' = Lebesgue measure zero.

James Hirschorn
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-September/011938.html","timestamp":"2014-04-18T23:52:23Z","content_type":null,"content_length":"3717","record_id":"<urn:uuid:7039113b-5773-4a72-aa38-0a911f8f0d51>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Athena is a grid-based code for astrophysical magnetohydrodynamics (MHD). It was developed primarily for studies of the interstellar medium, star formation, and accretion flows. Athena has been made freely available to the community in the hope that others may find it useful.

The current version (v4.0) implements algorithms for the following physics:
• compressible hydrodynamics and MHD in 1D, 2D, and 3D,
• ideal gas equation of state with arbitrary γ (including γ = 1, an isothermal EOS),
• an arbitrary number of passive scalars advected with the flow,
• self-gravity, and/or a static gravitational potential,
• Ohmic resistivity, ambipolar diffusion, and the Hall effect,
• both Navier-Stokes and anisotropic (Braginskii) viscosity,
• both isotropic and anisotropic thermal conduction,
• optically-thin radiative cooling.

In addition, Athena allows for the following grid and parallelization options:
• Cartesian or cylindrical coordinates,
• static (fixed) mesh refinement,
• shearing-box source terms, and an orbital advection algorithm for MHD,
• parallelization using domain decomposition and MPI.

A variety of choices are also available for the numerical algorithms, such as different Riemann solvers and spatial reconstruction methods. The code has been developed using the GNU development tools, maintaining strict adherence to ANSI standards, thus it should be possible to configure, compile, and run the code on any platform that supports these standards. Athena has been run on everything from a Mac laptop to a 25,000-processor Cray XT4.

Learn More
Equations Solved: What system of equations does the code actually solve?
Documentation: Tutorial, User Guide, Programmer Guide, and research papers describing the numerical algorithms in Athena.
Gallery: Images and links to research papers that include results from Athena.
Tests: Suite of 1D, 2D, and 3D hydrodynamic and MHD problems used to validate Athena. Potentially useful to anyone developing algorithms for MHD.
Downloads: Get the complete source code distribution for the latest public version of Athena.
People: The list of collaborators developing Athena.
Is Athena Galilean invariant? Well, for resolved solutions, yes.
Athena with Trac: How Trac is being used to support the development of Athena.
{"url":"https://trac.princeton.edu/Athena/wiki/WikiStart?version=24","timestamp":"2014-04-19T04:20:00Z","content_type":null,"content_length":"11763","record_id":"<urn:uuid:4357a0f2-4f66-45a9-a8b2-6bdcc8ea53b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
History of computer, calculating machines, modern electronic calculator

The history of computers can be traced back to the efforts of man to count large numbers. This process of counting large numbers generated various systems of numeration, like the Babylonian, Greek, Roman and Indian systems of numeration. Of these, the Indian system of numeration has been accepted universally. It is the basis of the modern decimal system of numeration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Later you will see how the computer solves all calculations based on the decimal system. But you may be surprised to know that the computer does not understand the decimal system, and instead uses the binary system of numeration for processing. We will briefly discuss some of the path-breaking inventions in the field of computing devices.

1. Calculating Machines
It took generations for early man to build mechanical devices for counting large numbers. The first calculating device, called the ABACUS, was developed by the Egyptian and Chinese people. The word ABACUS means calculating board. It consisted of sticks in horizontal positions on which were inserted sets of pebbles. A modern form of ABACUS is given in Fig. 1.2. It has a number of horizontal bars, each having ten beads. Horizontal bars represent units, tens, hundreds, etc.

2. Napier's Bones
English mathematician John Napier built a mechanical device for the purpose of multiplication in 1617 A.D. The device was known as Napier's bones.

3. Slide Rule
English mathematician Edmund Gunter developed the slide rule. This machine could perform operations like addition, subtraction, multiplication, and division. It was widely used in Europe in the 17th century.

4. Pascal's Adding and Subtracting Machine
You might have heard the name of Blaise Pascal. He developed a machine at the age of 19 that could add and subtract. The machine consisted of wheels, gears and cylinders.

5. Leibniz's Multiplication and Dividing Machine
The German philosopher and mathematician Gottfried Leibniz built a mechanical device around 1673 that could both multiply and divide.

6. Babbage's Analytical Engine
It was in the year 1823 that a famous Englishman, Charles Babbage, built a mechanical machine to do complex mathematical calculations. It was called the Difference Engine. Later he developed a general-purpose calculating machine called the Analytical Engine. You should know that Charles Babbage is called the father of the computer.

7. Mechanical and Electrical Calculators
In the beginning of the 19th century the mechanical calculator was developed to perform all sorts of mathematical calculations, and it was widely used up to the 1960s. Later, the rotating parts of the mechanical calculator were replaced by an electric motor, and it was called the electrical calculator.

8. Modern Electronic Calculator
The electronic calculator used in the 1960s was run with electron tubes, which were quite bulky. Later these were replaced with transistors, and as a result the size of calculators became much smaller. The modern electronic calculator can compute all kinds of mathematical computations and mathematical functions. It can also be used to store some data permanently. Some calculators have in-built programs to perform some complicated calculations.

Fig. 2: Vacuum tube, transistor, IC
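The decimal-versus-binary point above is easy to make concrete; a tiny illustration (the sample values are arbitrary):

```python
# The same quantities written in the base-10 system people use and the
# base-2 system the machine actually processes (sample values arbitrary).
for n in (5, 13, 100):
    print(n, "->", bin(n), "->", int(bin(n), 2))   # decimal -> binary -> back
```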
{"url":"http://www.itsavvy.in/history-computer","timestamp":"2014-04-21T04:32:21Z","content_type":null,"content_length":"30483","record_id":"<urn:uuid:f85f595a-0a85-4afb-a9b3-2682a3d2bde9>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
The Machine Learning Track is intended for students who wish to develop their knowledge of machine learning techniques and applications. Machine learning is a rapidly expanding field with many applications in diverse areas such as bioinformatics, fraud detection, intelligent systems, perception, finance, and information retrieval.

1. Summary of Requirements

Machine Learning track students must complete a total of 30 points and must maintain at least a 2.7 overall GPA in order to be eligible for the MS degree in Computer Science.

1. The Machine Learning track requires:
   - Breadth courses
   - Required Track courses (6pts)
   - Track Electives (6pts)
   - General Electives (6pts)

2. Students must take at least 6 points of technical courses at the 6000 level overall. One of the Track Electives courses has to be a 3pt 6000-level course from the Track Electives list (Section 4).

3. If the number of points used to fulfill the above requirements is less than 30, then General Elective graduate courses at the 4000 level or above must be taken so that the total number of credits taken is 30.

4. Students using previous courses to fulfill track requirements may complete the 30 graduate points by expanding their electives selected from (a) the list of required track courses; (b) the list of Track Elective courses; or (c) other graduate courses.

Please use the Degree Progress Check to keep track of your requirements.

2. Breadth Requirement

Students are required to satisfy the Breadth Requirement by taking 1 course from Group 1, 1 course from Group 2, 1 course from Group 3, and 1 more course from any of the three groups. Track courses taken at Columbia can also satisfy the breadth requirement.

│ Group                │ Courses                                          │
│ Group 1 (Systems)    │ All CS 41xx courses except CS 416x and CS 417x;  │
│                      │ all CS 48xx courses, and CS 4340, 4444, and 4460 │
│ Group 2 (Theory)     │ All CS 42xx courses including CSOR W4231         │
│ Group 3 (AI and Apps)│ All CS 47xx courses, and CS 416x and CS 417x     │

3. Required Track Courses

Students are required to complete two (2) of the following courses. Students who have taken equivalent courses in the past and received grades of at least a B may apply for waivers and take other CS courses instead.

│ Course ID  │ Title                                         │
│ COMS W4252 │ Introduction to Computational Learning Theory │
│ COMS W4771 │ Machine Learning                              │
│ COMS W4772 │ Advanced Machine Learning                     │

4. Elective Track Courses

Students are required to take two courses from the following list, at least one of which must be a 6000-level course. Other courses on this list may be used as General Electives or to replace required track courses when the student has received a waiver.
│ Course ID │ Title │ │COMS W4111 │Introduction to Databases │ │COMS W4252 │Introduction to Computational Learning Theory │ │COMS W4705 │Intro to Natural Language Processing │ │COMS W4731 │Computer Vision │ │COMS W4737 │Biometrics │ │COMS W4761 │Computational Genomics │ │COMS W4771 │Machine Learning │ │COMS W4772 │Advanced Machine Learning │ │COMS W4995 │Intro Social Networks │ │COMS E6111 │Advanced Database Systems │ │COMS E6253 │Advanced Topics in Computational Learning Theory │ │COMS E6717 (ELEN E6717)│Information Theory │ │COMS E6735 │Visual Databases │ │COMS E6737 │Biometrics │ │COMS E6901 │Projects in Computer Science │ │COMS E6998 │Search Engine Technology │ │COMS E6998 │Network Theory │ │COMS E6998 │Algorithmic Game Theory │ │COMS E6998 │Statistical Methods for NLP │ │COMS E6998 │NLP for the Web │ │COMS E6998 │Advanced Topic in Machine Learning │ │COMS E6998 │Machine Translation │ │COMS E6998 │Machine Learning for NLP │ │COMS E6998 │Intro/Distributed Data Mining │ │COMS E6998 │Analysis of social Info. Nets │ │COMS E6998 │Algorithms/Deal/Massive Data │ │COMS E6998 │Econ of Social Networks │ │COMS E6998 │CV and ML an Mobile Platforms │ │COMS E6998 │Data Science & Entrepreneurship │ │COMS E6998 │Fund of Speaker Recognition │ │COMS E6998 │Bayesian Analysis for NLP │ │COMS E6998 │Sublinear Time Algos Learning │ │COMS E6998 │Semantic Tech in IBM Watson │ │COMS E6998 │Cloud and Big Data │ │COMS E6998 │Digitally Mediated Storytelling │ │CSEE E6892 │Bayesian Models in Machine Learning │ │CSEE E6898 │Large-Scale Machine Learning │ │CSEE E6898 │Sparse Signal Modeling │ │IEOR E6613 │Optimization I │ │IEOR E8100 │Optimization Methods in Machine Learning │ │SIEO 4150 or STAT W4201│Probability and Statistics/Advanced Data Analysis │ │STAT W4240 │Data Mining │ │STAT W4242 │Introduction to Data Science │ │STAT W4249 │Applied Data Science │ │STAT G4400 │Statistical Machine Learning │ │STAT G6101 │Statistical Modeling and Data Analysis I │ │STAT G6104 │Computational Statistics │ 5. General Electives Students are required to complete at least 6 additional graduate points at, or above, the 4000 level; at least 3 of these points must be CS, the other 3 points may be non-CS/non-technical course approved by the track advisor. Candidates who wish to take a non-CS/non-Technical course should complete a non-tech approval form, get the advisor's approval, and submit it to Janine Maslov or Remi Moss. At most 3 points overall of the 30 graduate points required for the MS degree may be non-CS/non-technical. 6. Track Planning Please visit the Directory of Classes to get the updated course listings. Please also note that not all courses are offered every semester, or even every year. A few courses are offered only once every two or three years or even less frequently. For more information, please see the SEAS Bulletin CS course-offering schedule (This schedule can change due to unforeseeable circumstances; thus, it should only be used as a reference). Please note that the Institute of Data Science and Engineering (IDSE) courses such as COMS W4721 Machine Learning for Data Science do not count towards the MS degree in Computer Science. If you have any questions about this, please contact the Computer Science Student Services. 7. Track Advisor Please direct all questions concerning the Machine Learning track Prof. and Prof. . 8. Graduation Candidates preparing for graduation should submit a completed application for degree to the Registrar's Office and submit a track graduation form to CS Student Services. 
Last updated 1/28/2014.
{"url":"http://www.cs.columbia.edu/education/ms/machineLearning","timestamp":"2014-04-21T03:09:44Z","content_type":null,"content_length":"23894","record_id":"<urn:uuid:18ff0a83-2e8c-48a8-b252-b260ca4f7ae4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Seat Pleasant, MD Trigonometry Tutor
Find a Seat Pleasant, MD Trigonometry Tutor
...I also have sight singing practice and completed 1 year of music theory. I have taught 44 years and have taught many students with ADD and ADHD. I am presently tutoring a student in Takoma Park with ADHD and a student in Silver Spring with Aspergers.
21 Subjects: including trigonometry, calculus, geometry, statistics
...I want my students to feel like math is their strength and not something holding them back. I love teaching test preparation because it helps my students achieve their dreams. I'm a full-time teacher with experience teaching high school Geometry, Algebra 2, and Trigonometry.
12 Subjects: including trigonometry, geometry, GRE, ASVAB
...I am very comfortable with using information technology in teaching and learning. I believe in learning from first principles and always ensure my students understand the basis of the lesson before building on it. I also use a lot of everyday examples to ensure that students can relate to the theories they are learning about.
11 Subjects: including trigonometry, calculus, geometry, algebra 1
My name is Festus. I had my high school and post-high-school education in Nigeria. I came to the U.S. in 1980 with an in-service scholarship award to study communication engineering at the University of Illinois Chicago campus, where I graduated in March 1984.
7 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...I am a dedicated teacher and I always took the extra effort to spend time with students outside regular class hours to help them learn. It is from these individual sessions that I find the students learn the most. My tutoring style is strongly influenced by the positive experience that I gained from this personalized instruction with students.
16 Subjects: including trigonometry, calculus, physics, statistics
{"url":"http://www.purplemath.com/Seat_Pleasant_MD_Trigonometry_tutors.php","timestamp":"2014-04-16T10:35:37Z","content_type":null,"content_length":"24723","record_id":"<urn:uuid:67c8eb3a-0e63-412e-b5b3-ce5c4fd6f3e5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
HELP with hard MATH problem
Okay, I need some help with this math problem I was assigned. It is hard to describe over this post but I will do it anyway.
@= spider
* = degrees
... = IGNORE them, it was the only way I could semi-draw the shapes. THEY SHOULDN'T BE THERE AND ARE NOT RELEVANT TO THE PROBLEM
This "right angled spider" can't go backwards. It shoots web only in right angles (or forming right angles). (I will explain this since I cannot draw it)
The spider starts down at the bottom and goes to the top as shown (200 ft.). At the top, the "right angled spider" drops a web vertically down to the ground making a right angle at the lower right part of this triangle formed.
.@
/_____________________________ (ground) angle = 61*
............./| angle = 29*
.........../..| (web dropped down)
angle = 61*
From here, I will explain again though it may be confusing: If you can imagine, the next web the spider shoots is from the 90* angle (in pic) towards the 200ft line (hypotenuse)--thus forming a 90* angle with the hypotenuse and the new web line (the 90* angle box would be above the new web line if this helps you understand). From here, the spider will drop down a web vertically like she did in the beginning. This process is repeated over and over.
QUESTION: How far does the spider travel (in exact measurements or decimals (if it has one)) until it cannot go anymore?
Ok, here goes:
The first leg, using trig, has length 200*sin(61)
The second has length 200*sin61*cos61
The third has length 200*sin61*cos61*cos61
The fourth 200*sin61*cos61*cos61*cos61
And so on. Basically, to obtain the length of each leg, you multiply the previous one by cos61. Therefore, each leg length is a term of a geometric sequence of common ratio cos61. The sum is given by the formula:
Firstterm(1-(common ratio)^n)/(1-common ratio)
So total length is: 200sin61(1-(cos61)^n)/(1-cos61)
There. Now find the limit as n goes to infinity. Since 0<cos61<1, the limit of cos61^n is 0
Hence, total length is 200sin61/(1-cos61)
^^u forgot the original 200 ft
the first drop is 174.92ft and then every time after that to find the distance u multiply by cos61
n= infinity - i think since the spider wasnt given a size that it will keep going
so the sum of the series is 174.92(1-cos61^infinity)/(1-cos61)
=339.525
add the original 200 and my final answer is 539.525 ft
Oh that's right! Yeah, sorry about that
Thanks Beero
Beero--what does "b" stand for?
Beero--what does "b" stand for again? It has been so long, lol
Beero used b for the first term of his geometric sequence. In this case, b=200sin61=174.92
yea thats the formula i learned
b=1rst number
r=common ratio
n=nth term
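For what it's worth, the thread's closed-form answer is easy to verify numerically. A quick sketch (in Python, which the thread itself does not use) sums the geometric series of web lengths directly and compares it with 200·sin 61°/(1 − cos 61°), plus the initial 200 ft climb:

import math

theta = math.radians(61)
first = 200 * math.sin(theta)      # first vertical drop: ~174.92 ft
r = math.cos(theta)                # each web is cos(61 deg) times the previous

total_webs = sum(first * r**k for k in range(200))  # partial sum, converged
print(total_webs)                  # ~339.53
print(first / (1 - r))             # same value from the geometric-series formula
print(200 + first / (1 - r))       # ~539.53 ft, matching 539.525 up to rounding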
{"url":"http://www.collegeconfidential.com/discus/messages/757/47568.html","timestamp":"2014-04-19T14:52:33Z","content_type":null,"content_length":"16334","record_id":"<urn:uuid:81eabf72-58cd-4795-ae07-83dfa86af371>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Chula Vista SAT Math Tutor Find a Chula Vista SAT Math Tutor Currently, I am enrolled in a credential program to become a Math teacher. I have a passion for Math. I am bilingual; I speak Spanish fluently. I also took 3 semesters of Italian, so I am good at 11 Subjects: including SAT math, Spanish, calculus, geometry I have experience tutoring elementary and junior high school level mathematics and have taken the GRE, scoring in the 86th percentile in the Quantitative field. I've tutored Chinese college students on English and studied Mandarin Chinese at Peking University, one of China's top two Universities, e... 18 Subjects: including SAT math, Spanish, calculus, Chinese ...Additionally, I lived in France for five years, which helped develop very strong and natural French language skills. I also spent one year tutoring in Seattle, WA, working with special needs students pursuing their GEDs. I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners. 14 Subjects: including SAT math, French, geometry, ESL/ESOL ...I studied photography at Princeton University and am fully knowledgeable in the fields of digital photography, as well as analog photography (35mm, medium and large format) and with a wide range of cameras. I also have been working as a darkroom technician at the Irvine Fine Arts Center for 3 months now. I can teach how to compose, light, edit, and print photographs. 34 Subjects: including SAT math, English, writing, Spanish ...They are puzzles that we can unravel together. I am willing tutor late at night! So if you want someone to meet you after 10 pm, I can be your guy (or I guess gal!).I am a math major and regularly use the skills that are learned in Algebra 1. 13 Subjects: including SAT math, chemistry, statistics, calculus
{"url":"http://www.purplemath.com/Chula_Vista_SAT_Math_tutors.php","timestamp":"2014-04-19T20:14:21Z","content_type":null,"content_length":"24059","record_id":"<urn:uuid:e7ae55ee-af7b-41a3-b92d-59cae81a8bb2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Coefficient | Define Coefficient At Dictionary.com
noun 1. Mathematics. a number or quantity placed (generally) before and multiplying another quantity, as 3 in the expression 3x. 2. Physics. a number that is ...
Coefficient - Wikipedia, The Free Encyclopedia
In mathematics, a coefficient is a multiplicative factor in some term of a series or any expression; it is usually a number, but in any case does not ...
What Do The Coefficients Indicate In A Chemical Equation
Explore This Topic: In a mathematical equation, a coefficient is a number or symbol in front of an algebraic term in an equation.
Counting Atoms In Chemical Formulas - Birdville ISD / Overview
... is shorthand ... When counting atoms, we multiply a coefficient by the subscript of each of the elements that follows the coefficient in ...
How To Count Atoms In A Chemical Formula
How to count atoms in a chemical formula/equation. SKILLS NEEDED: Recognize the individual elements; recognize the subscripts as the …
Coefficient Of Determination Formula | Formulas@tutorvista.com
The coefficient of determination is one of the most important tools in statistics, and it is widely used in economics, physics, chemistry and many more fields.
Gini Coefficient - Wikipedia, The Free Encyclopedia
The Gini coefficient (also known as the Gini index or Gini ratio) (/dʒini/) is a measure of statistical dispersion intended to represent the income distribution ...
2•Stoichiometry: Chemical Arithmetic Formula Conventions
Formula conventions (1 of 24): superscripts are used to show the charges on ions; in Mg 2+ the 2 means a 2+ charge (lost 2 electrons).
Chemistry: How To Identify, Count And Read Chemical Formulas
Dec 11, 2009 · Identifying chemical formulas: the first step requires you to know what the two following things are: ...
Section 2 Chemical Formulas And Equations
Section 2: Chemical Formulas and Equations. Key Concept: chemical formulas are used to show how atoms are rearranged to form new …
Chemical Equations
19 i). NEUTRALISATION OF ACIDS AND BASES. An acid is a substance with "acid" in its name, and it usually starts with H. Examples: sulfuric acid, H
{"url":"http://www.labroda.com/coefficient-in-a-chemical-formula.htm","timestamp":"2014-04-17T04:06:39Z","content_type":null,"content_length":"37362","record_id":"<urn:uuid:fb0ea247-83ad-4761-9492-b2680e594b2c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Your Ideal Weight
History of Weight Formulas
The first commonly used formula for finding a person's ideal body weight was devised in 1871 by a French doctor named Broca. In 1974 a doctor named Devine published basically the same formula, only using the metric system, for the purpose of calculating the dosage of medications for patients. Unfortunately the formula wasn't even based on a real population, and isn't a good standard for healthy weight for many people. Here is the basic formula:
Devine Formula
For men: Ideal Body Weight (in kilograms) = 50 + 2.3 kg per inch over 5 feet.
For women: Ideal Body Weight (in kilograms) = 45.5 + 2.3 kg per inch over 5 feet.
This translates to 110 pounds for the first 5 feet of height in men, and 100 pounds for the first 5 feet in women. Then, you are allowed 5 pounds per inch of height after 5 feet! This is an unacceptable body weight for many people to maintain, and in fact is so close to the lean body weight (weight of just the bones and organs with no fat) in short women that it leaves room for no body fat at all! Many websites use the Devine formula in their body weight calculators, so if you think the ideal weight you are hearing is impossibly low, you may be right.
In 1984 two more formulas were published by a Doctor Robinson and a Doctor Miller.
Robinson Formula
For men: Ideal Body Weight (in kilograms) = 52 kg + 1.9 kg for each inch over 5 feet.
For women: Ideal Body Weight (in kilograms) = 49 kg + 1.7 kg for each inch over 5 feet.
This translates for men to roughly 115 pounds for the first 5 feet and another 4 pounds per inch. For women, 108 pounds for the first five feet and another 4 pounds per inch.
Miller Formula
For men: Ideal Body Weight (in kilograms) = 56.2 kg + 1.41 kg for each inch over 5 feet.
For women: Ideal Body Weight (in kilograms) = 53.1 kg + 1.36 kg for each inch over 5 feet.
For men this is roughly 124 pounds for the first 5 feet and 3 pounds for every inch above that. For women it is 117 pounds for the first 5 feet and another 3 pounds for each inch.
These last two formulas present the same problems for taller men as the Devine does for shorter women, in that they put you at a weight that is impossible and unhealthy. None of these formulas takes into account age, health conditions, or muscle mass. Since everyone is unique and has their own body type and needs, it is probably not wise to try to pinpoint a single weight and say that we should all be striving for that number.
The Metropolitan Life Insurance company came out with a Height and Weight table in 1943 that gave a range of what they called "desirable" weights, or weights where you would find the lowest rate of mortality. These tables were revised in 1983. They are better because they offer a range rather than a specific target weight, and you select your range based on your frame size. However, people tend to be subjective about choosing frame size, and once again the table gives an impossibly low weight range for some heights.
Many doctors now use BMI, or Body Mass Index, to calculate whether a person's weight is within a healthy range. The formula for BMI is:
Metric system: weight (in kilograms) divided by [height (in meters) squared]
English system: weight (in pounds) divided by [height (in inches) squared] x 703
Note that these tables are written for adults over 20 years old, not for growing children.
BMI is interpreted differently for children: age- and sex-specific height and weight charts are used for the younger age groups. The BMI also doesn't account for muscle mass, so an athlete may register in the overweight group. Here is how the BMI number corresponds to weight:
Below 18.5 is underweight
18.5 to 24.9 is normal
25 to 29.9 is overweight
30 and above is obese
To summarize, there really isn't a foolproof way to determine your ideal weight. The best thing is to use a range for your height or age group such as listed in the MetLife tables, and then take into account your age, your activity level and muscle mass, and any other factors that make you uniquely you. At YOUR ideal weight you should feel healthy and comfortable with yourself and have a good level of energy. Check with your doctor or a dietitian if you are unsure what your weight should be, or if you have health problems that affect your weight and your diet.
Calculating Your Caloric Needs
The Harris-Benedict Equations were developed in the early 1900s by James Arthur Harris and Francis G. Benedict, working at the Nutrition Laboratory of the Carnegie Institution of Washington in Boston, Massachusetts. They were developed in order to give a benchmark for comparing the basal metabolism of people with certain diseases, but they still remain the most commonly used formula today for calculating energy needs.
Here are the steps for finding the number of calories you need each day:
First: Find your basic calorie needs (BMR) using the Harris-Benedict Formula.
Second: Multiply this number by an Activity Multiplier based on your activity level.
Third: Add or subtract calories from this number based on how quickly you want to gain or lose weight.
Harris-Benedict Formula for BMR
This method takes into account your height, weight, age and sex. You can use the English or metric system to figure out your BMR. I tried both ways and the result differed by only one calorie, so use whichever you are more comfortable with. Convert your measurements from English to metric, or vice versa.
For women: 655 + (4.3 x weight in pounds) + (4.7 x height in inches) - (4.7 x age in years)
Or, if you want to use metric measurements: 655 + (9.6 X wt in kg) + (1.8 X ht in cm) - (4.7 X age in years)
For men: 66 + (6.3 x weight in pounds) + (12.9 x height in inches) - (6.8 x age in years)
Or, if you want to use metric measurements: 66 + (13.7 X wt in kg) + (5 X ht in cm) - (6.8 X age in years)
You are going to figure out everything in parentheses FIRST. Then, do your adding and subtracting from left to right. For example, with the women's calculation above you end up with 655 + total in parentheses + total in parentheses - total in parentheses. See some examples to help with the math.
You can see from this formula how age affects your metabolism. That last number increases as you age, and you have to subtract it from the calories you are allowed to eat every day. So as you get older you must either eat less food, or increase your exercise in order to avoid a slow weight gain over the years. You can also see that men burn off WAY more calories than women. Life is so unfair.
The Activity Multiplier
Next you will calculate the extra energy you need for your activities. After finding the number of calories you need to just carry on, you will use an Activity Multiplier to take into account the exercise you are getting, or your active lifestyle. If you are a real couch potato, the Activity Multiplier probably will not help you much.
If you think the number you have so far is shockingly low, activity is your chance to bring it up to a level where you can eat a normal diet. You will multiply your BMR by a number based on your normal activity level. Here are the activity guidelines:
• BMR x 1.2 Sedentary: You get virtually no activity, being a couch potato at home, and working at a desk job all day.
• BMR x 1.375 Light activity: You exercise lightly from one to three times a week.
• BMR x 1.55 Moderate activity: You exercise moderately three to five times a week.
• BMR x 1.725 Frequent activity: You exercise moderately six to seven times a week, or heavily four or more times per week.
• BMR x 1.9 Extreme activity: You are training for an event and exercise hard once or more a day, or you have an extreme physical job.
Examples of light activity are walking slowly, easy bicycling, and light housework. Moderate activities are a brisk walk, dancing, playing ping-pong, skating, or anything that warms you up. Heavy activity would be running, bicycling fast or on hills, swimming, or high intensity games like basketball or soccer. Basically, working up a sweat. Boxing, rowing and mountain climbing are extreme calorie burning activities.
So what if you go for a light walk 6 days a week? Instead of using the light activity rate you could bump it up to the moderate rate. In other words, use your own judgement if you don't fit exactly in these guidelines, but be honest with yourself. You will be able to tell soon enough if your results are working to help you maintain or lose weight.
How to Use the Numbers
If you want to lose weight you must eat fewer calories than this amount, and if you want to gain you must eat more. Your body uses 3500 extra calories to store a pound of fat, and must burn off that much extra to lose a pound. Eating only 500 calories less each day will make you lose about a pound a week. For two pounds a week you will need to adjust your intake by 1000 calories, etc. Decide how many pounds per week you want to lose or gain, then multiply that number by 500 and add or subtract it from your Daily Calorie Needs. This is the magic number that will help you achieve your goal.
For example, let's say the 45 year old woman in the example exercises lightly about 3 times per week. She will multiply her BMR by an activity multiplier of 1.375. This means her energy needs are about 1858 calories each day. If this woman wants to lose a pound a week, she will need to subtract 500 calories, reducing her calorie intake to about 1358 calories a day.
Now that you know how many calories you need, how do you keep track of how many you eat? This requires a little bit of discipline, being mindful of everything that goes into your mouth, and taking the time to write your information down. It gets easier as you remember information from day to day and learn the feeling of eating too much or too little. But that is a story for another day soon.
Examples to Help with Math
A 45 year old woman weighs 140 pounds, and is 5'5" (65 inches) tall.
655 + (4.3 x 140) + (4.7 x 65) - (4.7 x 45)
655 + 602 + 305.5 - 211.5
1351 calories per day
A 20 year old man weighs 90 kilos, and is 183 centimeters tall.
66 + (13.7 x 90) + (5 x 183) - (6.8 x 20)
66 + 1233 + 915 - 136
2078 calories per day
Convert measurements from English to metric or metric to English
• Multiply centimeters by .3937 to get inches.
• Multiply inches by 2.54 to get centimeters
• Multiply kilograms by 2.2 to get pounds
• Multiply pounds by .45 to get kilograms
The information on this page is not meant to be used in treatment of medical conditions. Please seek the advice of a physician about any medical condition or symptom. Those with medical conditions should consult a medical professional about the appropriateness of taking dietary supplements or diet therapy, and how these methods will interact with their medications.
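Since every formula in this article is simple arithmetic, it all collects naturally into a short script. The sketch below (in Python; the function names are ours, and it is an illustration of the article's formulas, not medical advice) reproduces the worked example of the 45-year-old woman above:

def ideal_weight_kg(height_in, sex, formula="devine"):
    """Ideal body weight in kg, per the Devine/Robinson/Miller formulas above."""
    base, per_inch = {
        ("devine", "m"): (50.0, 2.3),   ("devine", "f"): (45.5, 2.3),
        ("robinson", "m"): (52.0, 1.9), ("robinson", "f"): (49.0, 1.7),
        ("miller", "m"): (56.2, 1.41),  ("miller", "f"): (53.1, 1.36),
    }[(formula, sex)]
    return base + per_inch * max(height_in - 60, 0)   # inches over 5 feet

def bmi(weight_lb, height_in):
    """Body Mass Index; the factor 703 converts pounds/inches to kg/m^2."""
    return weight_lb / height_in**2 * 703

def bmr_female(weight_lb, height_in, age):
    """Harris-Benedict BMR in calories/day (English-unit women's formula)."""
    return 655 + 4.3 * weight_lb + 4.7 * height_in - 4.7 * age

ACTIVITY = {"sedentary": 1.2, "light": 1.375, "moderate": 1.55,
            "frequent": 1.725, "extreme": 1.9}

def daily_calories(bmr, activity, lb_per_week=0.0):
    """BMR x activity multiplier, minus 500 calories per pound per week of
    desired weight loss (a negative lb_per_week means gaining)."""
    return bmr * ACTIVITY[activity] - 500 * lb_per_week

# The 45-year-old woman from the worked example: 140 lb, 5'5" (65 in).
print(ideal_weight_kg(65, "f") * 2.2)          # Devine: 57 kg, about 125 lb
print(bmi(140, 65))                            # ~23.3, in the "normal" band
b = bmr_female(140, 65, 45)                    # 1351.0 calories/day
print(round(daily_calories(b, "light")))       # ~1858 to maintain
print(round(daily_calories(b, "light", 1.0)))  # ~1358 to lose a pound a week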
{"url":"http://www.thirdplanetfood.com/weight.htm","timestamp":"2014-04-20T06:22:40Z","content_type":null,"content_length":"19919","record_id":"<urn:uuid:0e05ce7d-ea6f-4493-a248-6867874b13a6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the infimum of the Ky Fan metric achieved?
Consider the probability space $(\Omega, {\cal B}, \lambda)$ where $\Omega=(0,1)$, ${\cal B}$ is the Borel sets, and $\lambda$ is Lebesgue measure. For random variables $W,Z$ on this space, we define the Ky Fan metric by $$\alpha(W,Z) = \inf \lbrace \epsilon > 0: \lambda(|W-Z| \geq \epsilon) \leq \epsilon\rbrace.$$ Convergence in this metric coincides with convergence in probability.
Fix the random variable $X(\omega)=\omega$, so the law of $X$ is Lebesgue measure, that is, ${\cal L}(X)=\lambda$.
Question: For any probability measure $\mu$ on $\mathbb R$, does there exist a random variable $Y$ on $(\Omega, {\cal B}, \lambda)$ with law $\mu$ so that $\alpha(X,Y) = \inf \lbrace \alpha(X,Z) : {\cal L}(Z) = \mu\rbrace$ ?
1. By Lemma 3.2 of Cortissoz, the infimum above is $d_P(\lambda,\mu)$: the Lévy-Prohorov distance between the two laws.
2. The infimum is achieved if we are allowed to choose both random variables. That is, there exist $X_1$ and $Y_1$ on $(\Omega, {\cal B}, \lambda)$ with ${\cal L}(X_1) = \lambda$, ${\cal L}(Y_1) = \mu$, and $\alpha(X_1,Y_1) = d_P(\lambda,\mu)$. But in my problem, I want to fix the random variable $X$.
3. Why the result may be true: the space $L^0(\Omega, {\cal B}, \lambda)$ is huge. There are lots of random variables with law $\mu$. I can't think of any obstruction to finding such a random variable.
4. Why the result may be false: the space $L^0(\Omega, {\cal B}, \lambda)$ is huge. A compactness argument seems hopeless to me. I can't think of any construction for finding such a random variable.
How is the fact that you want to fix the rv $X$ relevant? Isn't it true that for any two rvs $X_1$ and $X_2$ with the same law, there is an isomorphism of $(0,1)$ taking $X_1$ to $X_2$? – Ori Gurel-Gurevich Oct 26 '10 at 20:20
@Ori: I'm not quite sure what kind of isomorphism you have in mind. The random variables $X_1(\omega)=2\omega-\lfloor 2\omega\rfloor$ and $X_2(\omega)=3\omega-\lfloor 3\omega\rfloor$ are both uniform (0,1) random variables, but neither can be written as a function of the other. It's certainly possible that an easy transformation or observation solves the problem. I'd be glad to hear about it! – Byron Schmuland Oct 27 '10 at 1:26
You're right, I didn't understand the question at first. The way I understand it now I would almost say it is not a question in probability as it depends on the representation of the random variable in question. – Ori Gurel-Gurevich Oct 27 '10 at 2:47
In point 2) are you appealing to Strassen theorem? Don't you need the laws of both X and Y to be tight? – Ngoc Mai Tran Feb 25 '11 at 8:13
2 Answers
Because what follows doesn't fit in a comment, I write it here as an answer; but they are merely comments. After computing the minimizer for several simple distributions, my impression is that the answer to this question is yes, and there will be many minimizers.
Intuitively, it seems possible to build an optimizer as follows: we are given the law $\mu$ and we would like to find a function $f$ such that 1) the distribution of $f$ is $\mu$ and 2) $\alpha(f,X)$ is minimum. Let $\epsilon > 0$ be this minimum. Let $F(x) = \mu( (-\infty, x])$, i.e., $F$ is the distribution function associated with $\mu$. Let $G$ be the inverse function of $F$: $G(x) \doteq \inf\{y: F(y) \ge x \}$. By its definition $G$'s distribution is $\mu$.
Draw the graphs of the functions $l(x) = x + \epsilon$ and $u(x) = x -\epsilon$ around the graph of the function $X(x) = x$. To get the minimizer, one cuts the graph of $G$ into $n$ small pieces with lines parallel to the $x$ axis and shifts around the pieces along these lines so that they lie between the graphs of $l$ and $u$ as much as possible. As the number of pieces increases and their size decreases you would expect this to converge to a function that is the desired minimizer. The result will depend on the particulars of this process.
As to non-uniqueness: suppose $f$ is a minimizer. Denote with $E$ the subset of $[0,1]$ over which $f$ differs from $X$ by at least $\epsilon$. The values that $f$ takes over $E$ can be freely permuted without affecting the distribution and the distance between $f$ and $X$. So there will be infinitely many minimizers, when there is one.
Thanks for looking at my problem; I thought it was dead. I agree that the minimizer likely exists, and will try to pursue your strategy. The part where we take the limit has me a bit worried, though. – Byron Schmuland Sep 12 '10 at 16:45
This probably helps not at all, but I saw you were interested in the Ky-Fan metric, and friends of mine have looked at these in a noncommutative setting in which there are some "extreme value" properties. Maybe there's something useful in there for you: http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.4239v3.pdf
This paper looks interesting, but beyond me. Thanks for the tip, though. – Byron Schmuland Oct 27 '10 at 1:39
No problem! Sorry I couldn't say anything more constructive. – Jon Bannon Oct 27 '10 at 18:11
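As a concrete illustration of the definition of $\alpha$ itself (not of the open question), here is a small Monte Carlo sketch in Python; the comparison variable $Y(\omega)=\omega^2$ is just a toy choice of ours. It uses the fact that the set $\{\epsilon : P(|W-Z|\ge\epsilon)\le\epsilon\}$ is an upper ray, so bisection finds its infimum:

import numpy as np

def ky_fan(d, tol=1e-6):
    """Empirical Ky Fan distance from samples d = |W - Z|: the smallest eps
    with P(|W - Z| >= eps) <= eps, found by bisection (the condition is
    monotone in eps)."""
    lo, hi = 0.0, float(d.max()) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(d >= mid) <= mid:
            hi = mid          # condition holds: the infimum is at or below mid
        else:
            lo = mid          # condition fails: the infimum is above mid
    return hi

omega = np.random.default_rng(0).uniform(0, 1, 200_000)
X, Y = omega, omega**2
print(ky_fan(np.abs(X - Y)))  # ~0.236; exactly sqrt(5) - 2 for this pair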
{"url":"http://mathoverflow.net/questions/35695/is-the-infimum-of-the-ky-fan-metric-achieved/43716","timestamp":"2014-04-16T11:10:31Z","content_type":null,"content_length":"63259","record_id":"<urn:uuid:3801dcbb-7f55-4dd2-a946-a2283e707a5d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
math please check my work
Posted by anna on Wednesday, April 14, 2010 at 8:07pm.
a license plate has 2 letters followed by 4 digits. how many license plates are possible if the digits and letters can be repeated?
i think it is i (the 4th one) am i correct
thank you for helping me
• Correct - MathMate, Wednesday, April 14, 2010 at 8:52pm
Your answer is correct:
Number of choices for first letter = 26
for second letter = 26
for first digit = 10
for second digit = 10
for third digit = 10
for fourth digit = 10
Total number of possible outcomes = 26 x 26 x 10 x 10 x 10 x 10 = 6,760,000
In the future, if you are suggesting answers for checking, it would be beneficial if you explain how you got the answer. In maths, it's how you get the answer that counts, not just the answer.
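For the record, the multiplication-principle count can be checked in one line (Python, purely illustrative):

letters, digits = 26, 10
print(letters**2 * digits**4)   # 6760000 plates, with repetition allowed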
{"url":"http://www.jiskha.com/display.cgi?id=1271290029","timestamp":"2014-04-16T10:42:24Z","content_type":null,"content_length":"8914","record_id":"<urn:uuid:e1688d73-7954-4db2-9d33-6dd217dfd87c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Binary decision diagram
A binary decision diagram (BDD), like a negation normal form (NNF) or a propositional directed acyclic graph (PDAG), is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of decision nodes and two terminal nodes called 0-terminal and 1-terminal. Each decision node is labeled by a Boolean variable and has two child nodes called low child and high child. The edge from a node to a low (high) child represents an assignment of the variable to 0 (1). Such a graph is called 'ordered' if different variables appear in the same order on all paths from the root. It is called 'reduced' if the graph is reduced according to two rules:
- Merge any isomorphic subgraphs.
- Eliminate any node whose two children are isomorphic.
In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique) for a particular functionality. This property makes it useful in functional equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. As the path descends to a low child (high child) from a node, that node's variable is assigned to 0 (1).
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser known applications of BDDs, including fault tree analysis, Bayesian reasoning and product configuration. Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented by replacing each node with a 2 to 1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in a FPGA. Unfortunately, it is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).
The left figure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each representing the function f(x1, x2, x3). In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find f(x1=0, x2=1, x3=1), begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to one). This leads to the terminal 1, which is the value of f(x1=0, x2=1, x3=1).
The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure.
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDDs) were introduced by Lee, and further studied and made known by Akers and Boute.
The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. one sub-graph is used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined. The notion of a BDD is now generally used to refer to that particular data structure.
On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, the actual operations are performed directly on that compressed representation, i.e. without decompression.
Variable ordering
The size of the BDD is determined both by the function being represented and the chosen ordering of the variables. For a Boolean function $f(x_1, \ldots, x_n)$, depending upon the ordering of the variables, we would end up getting a graph whose number of nodes would be linear (in $n$) at best and exponential at worst. Let us consider the Boolean function $f(x_1, \ldots, x_{2n}) = x_1x_2 + x_3x_4 + \dots + x_{2n-1}x_{2n}$. Using the variable ordering $x_1 < x_3 < \dots < x_{2n-1} < x_2 < x_4 < \dots < x_{2n}$, the BDD needs $2^{n+1}$ nodes to represent the function. Using the ordering $x_1 < x_2 < x_3 < x_4 < \dots < x_{2n-1} < x_{2n}$, the BDD consists of $2n + 2$ nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard. For any constant $c > 1$ it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most $c$ times larger than an optimal one. However there exist efficient heuristics to tackle the problem.
There are functions for which the graph size is always exponential — independent of variable ordering. This holds e.g. for the multiplication function (an indication as to the apparent complexity of factorization).
Researchers have of late suggested refinements on the BDD data structure giving way to a number of related graphs: BMD (Binary Moment Diagrams), ZDD (Zero Suppressed Decision Diagrams), FDD (Free Binary Decision Diagrams), PDD (Parity Decision Diagrams), etc.
Logical operations on BDDs
Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms.
This is a crude way to build a BDD in a C-like language. Declare the data structure as follows and then proceed accordingly. (As in the original pseudocode, the unique-table lookup and the cofactor computation are left abstract: lookup, insert_vertex and cofactor below are placeholders.)
/* The basic data structure */
struct vertex {
    char *var;                /* variable label; NULL for terminals */
    struct vertex *hi, *lo;   /* children for var = 1 and var = 0 */
};
/* The interface to the Unique Table */
struct vertex *old_or_new(char *var, struct vertex *hi, struct vertex *lo)
{
    struct vertex *v = lookup(var, hi, lo);  /* a vertex (var, hi, lo) exists? */
    if (v != NULL)
        return v;
    v = malloc(sizeof *v);                   /* new vertex pointing at (var, hi, lo) */
    v->var = var; v->hi = hi; v->lo = lo;
    insert_vertex(v);                        /* remember it in the unique table */
    return v;
}
/* Data Structure for Building the ROBDD; v0 and v1 are the 0- and 1-terminals,
   and pi(i) names the i-th variable in the chosen ordering. */
struct vertex *robdd_build(struct expr f, int i)
{
    struct vertex *hi, *lo;
    char *var;
    if (equal(f, '0')) return v0;
    else if (equal(f, '1')) return v1;
    else {
        var = pi(i);
        hi = robdd_build(cofactor(f, i, 1), i + 1);  /* f with x_i = 1 */
        lo = robdd_build(cofactor(f, i, 0), i + 1);  /* f with x_i = 0 */
        if (lo == hi) return lo;        /* equal children: the test is redundant */
        return old_or_new(var, hi, lo); /* share isomorphic subgraphs */
    }
}
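To make the path-following semantics from the example above concrete, here is a tiny sketch in Python (separate from the C pseudocode, with a hand-made toy diagram of our own) that evaluates a BDD by following low/high edges:

# A node is a (var, low, high) triple; the terminals are the booleans.
def evaluate(node, assignment):
    """Follow low/high edges from the root until a terminal is reached."""
    while not isinstance(node, bool):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# Toy ROBDD for f(x1, x2) = x1 AND x2, with ordering x1 < x2.
x2_node = ("x2", False, True)
root = ("x1", False, x2_node)
print(evaluate(root, {"x1": 1, "x2": 1}))  # True
print(evaluate(root, {"x1": 0, "x2": 1}))  # False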
{"url":"http://www.reference.com/browse/Binary+Decision+Diagram","timestamp":"2014-04-18T19:02:17Z","content_type":null,"content_length":"93016","record_id":"<urn:uuid:0c3e7113-15d9-4a6e-8c42-9a4cfda0c7cd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Fair Lawn ACT Tutor
Find a Fair Lawn ACT Tutor
...Philosophers distinguish correct reasoning from incorrect reasoning by means of logic, so introductory logic courses are required in most philosophy BA programs. While completing my undergrad degree in philosophy, I took several courses in logic beyond the introductory level and received A grades in all. In my last year, I tutored introductory logic to beginning-level students.
34 Subjects: including ACT Math, English, reading, writing
...I enjoy the time that I spend with my students, and I take considerable pride in their accomplishments. I am committed to my students, and I am always available to them via text and email. Test prep is a means to an end and my greatest pleasure comes when I can help a student achieve his or her goal.
9 Subjects: including ACT Math, SAT math, SAT reading, GMAT
...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond.
26 Subjects: including ACT Math, calculus, statistics, physics
...I received my Bachelors in Math from Montclair State University. Now I am getting my Masters in Education from Montclair State University. I have tutored students before in Mathematics, and success in math is all about practice.
12 Subjects: including ACT Math, calculus, algebra 2, SAT math
...I spend extra time getting to know each student beyond the numbers. By getting to know the whole person, I avoid bumps in the road and am able to smoothly navigate a pathway to success. My students obtain optimal outcomes from their efforts.
52 Subjects: including ACT Math, reading, English, writing
{"url":"http://www.purplemath.com/Fair_Lawn_ACT_tutors.php","timestamp":"2014-04-16T10:45:40Z","content_type":null,"content_length":"23633","record_id":"<urn:uuid:885a7928-ddbf-437c-a2a0-9e5fd23c2c31>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
A non-linear lower bound for planar epsilon-nets
Seminar Room 1, Newton Institute
After a brief description of the notion of epsilon-nets for range spaces and of the main known results about them, I will show that the minimum possible size of an epsilon-net for point objects and line (or rectangle)-ranges in the plane is (slightly) bigger than linear in 1/epsilon. This settles a problem raised by Matousek, Seidel and Welzl in 1990.
{"url":"http://www.newton.ac.uk/programmes/DAN/seminars/2011011115301.html","timestamp":"2014-04-19T17:09:52Z","content_type":null,"content_length":"5839","record_id":"<urn:uuid:a66761f7-02d4-4725-b4cc-2a8d83972cec>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Events from Fall 2009 The 70th William Lowell Putnam Mathematical Competition December 5, 2009 Ryan Ward, Ben Sokolowsky, Mike Lengel, Joe Ruby, Steven Duff, Will Kanegis took the challenge of this grueling and fun competition, ably supported by Professors Greg Adams and Steven Wang. Student Colloquium: Thursday, November 19, 12:00 noon in 268 Olin Science "I Know What You Did Last Summer," Presented by Bucknell Students Dennis Fillebrown ’10 Ariel Kniss ’09 Lucas Mentch ‘10 Aaron Meyers ’10 Alyssa Okita ’10 Julie Sullivan ’10 Mary Wilson ‘10 Abstract: Do you want to know about summer options for undergraduates with mathematical interests? Seven students will share information about the research experiences and internships they had last summer, providing practical information, opinions, and advice for you! November 18 is RH Day! A Special Student Colloquium Presentation: Wednesday, November 18, 4:00pm in 268 Olin Science "The Riemann Hypothesis", David Farmer (American Institute of Mathematics) This year marks the 150th anniversary of the Riemann Hypothesis, the biggest unsolved problem in mathematics. November 18 is "RH Day." The Riemann Hypothesis will be celebrated by more than two dozen lectures all over the world, including this talk at Bucknell. During the talk, I will explain exactly what the Riemann Hypothesis says, why mathematicians think it is an important problem, and give some reasons why it is so difficult to prove. For more information about the 150th anniversary celebration of the Riemann Hypothesis, visit the AIM page Student Colloquium: Thursday, November 5, 12:00 noon in 268 Olin Science "At the crossroads of topology, algebra, and combinatorics," Matt Miller (Bucknell University) Abstract: Mathematics is sometimes portrayed as a collection of disconnected areas of study. On the other hand, solutions to mathematical problems often require tools from many different areas of mathematics. In this talk we will discuss one way in which topology, algebra, and combinatorics interact in my current research. We will describe some of the connections between subspace arrangements, a general notion of multiplication, and hypergraphs using pictures and specific examples. Economics and Mathematics Alumni Career Panel: Monday, November 2, 5:00 pm, Gallery Theatre, ELC 301 Panelists who majored in Economics and Mathematics will share their work experiences to help you understand the job and internship search process and answer any questions you may have. Refreshments will be served afterward. The panelists are: • Victoria Boyarsky ’04, ASA, MAAA Actuarial Analyst, United Concordia Companies, Inc., Carlisle, PA • Sam Camens ‘07 Consultant, Towers Perrin, New York, NY • Jeff Cohen ‘02 Economics and Mathematics Teacher, Mercersberg Academy, Mercersberg, PA • Chris Ellis ’00, Ph.D. Professor, Political Science Department, Bucknell University • Erin Hirschbeck Mallonee ’01, MS Research Health Policy Analyst, RTI International, Rockville, MD • Ben Wellington ’02, Ph.D. Researcher, Two Sigma Investments, New York, NY This event is co-sponsored by the Economics and Mathematics departments and Alumni Relations and Career Services Questions? 
Please contact the Career Development Center at 7-1238 or cdc@bucknell.edu Colloquium: Monday, November 2, 12:00 noon, Rooke Chemistry 018 "Multitext Parsing: A Statistical Approach to Language Translation," Ben Wellington '02 (Two Sigma Investments) Abstract: As the amount of freely available data has exploded in recent years, advances in natural language processing (NLP) have increasingly relied on statistical techniques. This type of research involves collecting statistics from a training data set and then applying these statistics to new instances of the task. This talk will give an overview of statistical techniques in two main areas of NLP: parsing (finding the correct phrase structure for a sentence), and language modeling (building a statistical model of sentences in a language). A discussion will follow on a novel way to combine these two areas to perform machine translation (finding the best translation from one human language to another). The talk will show that with a proper grammar formalism, translation can be seen simply as a generalization of parsing. Distinguished Visiting Professor, October 19-23 Kim Ruane, Tufts University Hosted by Adam Piggott. Professor Ruane will give two faculty colloquia "To infinity...and beyond", Wednesday, October 21, 4:00pm, in Olin 372 Suppose F is a finitely generated free group with basis S and let f be an automorphism of F. It is well-known that the subgroup of F that contains the elements of F that are fixed by f is a finitely generated subgroup of F - in fact, it is only recently that the rank of this subgroup was shown to be bounded by the rank of the ambient free group. The first proof of the finite generation was given by S. Gersten. His proof is rather long and combinatorial. We would like to present a geometric proof of this fact that was first given by D. Cooper. The idea is that we view F acting by isometries on its Cayley graph which is a metric simplicial tree. The automorphism f induces a map from the tree to itself that is well-behaved with respect to the geometry of the tree. To make this last statement precise and helpful, we will compactify the Cayley graph by adding on a boundary. Then we see that f can be extended to a unique homeomorphism of the resulting boundary. This will allow us to answer the finite generation question by studying this homeomorphism. The proof is short and elegant, but more importantly this proof motivates some of the main themes of geometric group theory. The ideas in this proof were substantially generalized to yield analogous theorems about Gromov hyperbolic groups. I will discuss these generalizations near the end of the talk. "Using the boundary", Thursday, October 22, 4:00pm, in Olin 372. In this talk, I will define the visual boundary of a proper geodesic metric space in a more formal way than in my first talk. I will give several examples that arise naturally in the study of nonpositively curved (i.e. CAT(0)) groups. A CAT(0) group is a group G that acts geometrically on a CAT(0) metric space X. The prototypical example is the fundamental group of a compact nonpositively curved Riemannian manifold acting on its universal cover. In this case, the visual boundary is a sphere. There is a natural extension of the action of G on X to the visual boundary as in my first talk. We will analyze this action by first understanding how a single element g in G acts on the boundary - in particular, we can describe the fixed point set of g in the boundary. 
We will then use this to characterize those elements of G that are "virtually" central as those that act like the identity on the boundary.
Student Colloquium: Thursday, October 22, 12:00 noon in 268 Olin Science
"Computer Forensics: I Know What You Do Online," Scott Inch (Bloomsburg University)
Abstract: Your computer stores a massive amount of data about your behavior without your knowledge. Learn how computer forensics can be used to acquire and analyze this data for use in criminal investigations. Examples from real cases will be shown.
Distinguished Visiting Professor, October 5-9
Brett Wick, Georgia Institute of Technology
Hosted by Julien Giol. Professor Wick will give a talk in two parts on "The Corona Problem in One and Several Variables."
Colloquium (Part I): Wednesday, October 7, 4:00pm, in Olin 372
Seminar (Part II): Thursday, October 8, 4:00pm, in Olin 372
Abstract: Carleson's Corona Theorem from the 1960s has served as a major motivation for many results in complex function theory, operator theory, and harmonic analysis. In its simplest form, the result states that for two bounded analytic functions, f[1] and f[2], on the unit disc with no common zeros, it is possible to find two other bounded analytic functions, g[1] and g[2], such that f[1]g[1]+f[2]g[2]=1. Moreover, the functions g[1] and g[2] can be chosen with some norm control. In this series of talks we will discuss an exciting new generalization of this result to certain function spaces on the unit ball in several complex variables. In the first talk (Faculty Colloquium), we will discuss some of the problem's background and some potential generalizations of this famous result. Particular attention will be paid to connections that the Corona Problem has with other areas of mathematics. The second talk (Research Seminar) will focus on the technical aspects behind the proofs of these theorems.
Student Colloquium: Thursday, September 3, 12:00 noon in 268 Olin Science
"A life and death application of mathematics," Peter McNamara (Bucknell University)
Abstract: While you, Sarah Palin and Dick Cheney are out on a hunting trip, an argument ensues as to why the Republicans lost the last election. Soon enough, you find yourselves in a three-way shootout. Unfortunately for you, you only hit your target a third of the time, Sarah Palin hits her target half the time, and Dick Cheney never misses. The good news is that you get to shoot first, while Dick Cheney goes third. Emotions and politics aside, where should you aim your first shot? We will answer this question and determine your chances of being the last one standing.
Student Colloquium: Thursday, September 17, 12:00 noon in 268 Olin Science
Abstract: Can you imagine a four-dimensional object? Not with time as a fourth dimension, but another space dimension like the three we normally perceive? Learn some tips and tricks for visualizing higher dimensional objects described and inspired by Flatland.
Math Meets Poetry Contest
The Mathematics Department invites you to submit your funny, profound, topical or just plain ridiculous limericks, haiku or other poetry that is mathematically related or inspired. Send your entry to adam.piggott@bucknell.edu by Wednesday September 23. The contest is open to all members of the Bucknell community.
Math, Meet & Treat (MAA Social Event): Thursday, September 24, 8:00 pm in 383 Olin Science Come join the fun, games and food with others interested in Mathematics. This event is organized by the MAA Club-Bucknell University Students. Student Colloquium: Thursday, October 1, 12:00 noon in 268 Olin Science "Finding Mathematics in Poetry," JoAnne Growney A Brief Bio of Speaker: JoAnne Growney is co-editor of "Strange Attractors: Poems of Love and Mathematics", (published by A K Peters, 2008) and author of "My Dance is Mathematics", (published by Paper Kite Press, 2006) and the soon to be released "Angles of Light" (published by Finishing Line Press, October 2009). She was a professor of mathematics at Bloomsburg University before moving to Maryland where her primary activity is poetry. Abstract: Students (and possibly others) will join guest poet-mathematician JoAnne Growney for readings (and occasional commentary) of poems with mathematical connections. Contributions from audience members will be welcomed as time permits.
{"url":"http://www.bucknell.edu/x58184.xml","timestamp":"2014-04-17T18:31:08Z","content_type":null,"content_length":"25446","record_id":"<urn:uuid:88b0464a-4ee3-494c-b815-71078217ad11>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
LANS Publications
"Asymptotic Expansions for Oscillatory Integrals Using Inverse Functions"
J. N. Lyness and J. W. Lottes
BIT, vol. 49, no. 2, pp. 397-417. Also Preprint ANL/MCS-P1568-1108
Preprint Version: [pdf]
We treat finite oscillatory integrals of the form $\int_a^b F(x) \exp[ikG(x)]\,dx$ in which both $F$ and $G$ are real on the real line, are analytic over the open integration interval, and may have algebraic singularities at either or both interval end points. For many of these, we establish asymptotic expansions in inverse powers of $k$. No appeal to the theories of stationary phase or steepest descent is involved. We simply apply theory involving inverse functions and expansions for a Fourier coefficient $\int_a^b \phi(t)\exp(ikt)\,dt$. To this end, we have assembled several results involving inverse functions. Moreover, we have derived a new asymptotic expansion for this integral, valid when $\phi(t) = \sum_j a_j t^{\sigma_j}$.
{"url":"http://www.mcs.anl.gov/research/LANS/publications/index.php?p=pub_detail&id=345","timestamp":"2014-04-24T20:42:34Z","content_type":null,"content_length":"4655","record_id":"<urn:uuid:b08cfb76-c3c7-47ce-8243-c7c9b922cc91>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Limitations to mathematical knowledge, in Logic Colloquium '80, 1990
"A critical review of randomness criteria shows that no-go theorems severely restrict the validity of actual "proofs" of undecidability. It is suggested to test microphysical undecidability by physical processes with low extrinsic complexity, such as polarized laser light. The publication and distribution of a sequence of pointer readings generated by such methods is proposed. Unlike any pseudorandom sequence generated by finite deterministic automata, the postulate of microscopic randomness implies that this sequence can be safely applied for all purposes requiring stochasticity and high complexity."
Cited by 20 (16 self)

- FOUNDATIONS OF PHYSICS, VOL. 25, NO. 11, 1995
"Inasmuch as physical theories are formalizable, set theory provides a framework for theoretical physics. Four speculations about the relevance of set theoretical modeling for physics are presented: the role of transcendental set theory (i) in chaos theory, (ii) for paradoxical decompositions of solid three-dimensional objects, (iii) in the theory of effective computability (Church-Turing thesis) related to the possible "solution of supertasks," and (iv) for weak solutions. Several approaches to set theory and their advantages and disadvantages for physical applications are discussed: Cantorian "naive" (i.e., nonaxiomatic) set theory, constructivism, and operationalism. In the author's opinion, an attitude of "suspended attention" (a term borrowed from psychoanalysis) seems most promising for progress. Physical and set theoretical entities must be operationalized wherever possible. At the same time, physicists should be open to "bizarre" or "mindboggling" new formalisms, which need not be operationalizable or testable at the time of their creation, but which may successfully lead to novel fields of phenomenology and technology."
Cited by 8 (7 self)

, 2010
"Proving the chaoticity of some dynamical systems is equivalent to solving the hardest problems in mathematics. Conversely, one argues that it is not unconceivable that classical physical systems may "compute the hard or even the incomputable" by measuring observables which correspond to computationally hard or even incomputable problems."
Cited by 2 (1 self)

, 2008
"Different types of physical unknowables are discussed. Provable unknowables are derived from reduction to problems which are known to be recursively unsolvable. Recent series solutions to the n-body problem and, related to it, chaotic systems may have no computable radius of convergence. Quantum unknowables include the random occurrence of single events, complementarity and value indefiniteness."

"... On the length of proofs of finitistic consistency statements in first order theories t ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1355930","timestamp":"2014-04-21T01:48:42Z","content_type":null,"content_length":"21143","record_id":"<urn:uuid:5ddf4fea-64d9-4ec8-87c6-0d6f9722be80>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
2003-2004 UAF Catalog: Applying for Admission
English and Mathematics

On the basis of test scores, you may be required to take developmental English and/or mathematics. These courses are designed to help you achieve the competency necessary to succeed in college-level courses. Generally, you will be placed in ENGL 111X if your enhanced ACT (EACT) English score is 17 (or SAT Verbal score is 440) or above. Mathematics course placement will vary according to the type of degree you are planning to pursue and the corresponding math course(s) needed (see the requirements for your degree program for more detail). Enhanced ACT (EACT), SAT, COMPASS or ASSET test scores and your previous mathematical background are used to determine your math placement. Minimum test scores for placement into math courses are listed below:

MATH 200X, 205, 262X or 272X: EACT 26 / SAT 600 / COMPASS College Algebra 56 (with a Trigonometry score of 41 needed)
MATH 107X, 108*, 161X: EACT 23 (*26 for 108) / SAT 540 (*600); EACT math subscores should both be 13 or higher / COMPASS Algebra 76 or College Algebra 50 / ASSET College Algebra 23 for 107X, 41 for 108X
MATH 131X: EACT 22 (subscores 13 or better) / SAT 520 / COMPASS College Algebra 31 / ASSET Intermediate Algebra 41 or College Algebra 41
DEVM 105: EACT 20 (with subscores of 12-13) / SAT 480 / COMPASS Algebra 50 or College Algebra 31 / ASSET Intermediate Algebra 23 or Elementary Algebra 41
DEVM 060: EACT 16 (with subscores of 8-11) / SAT 390 / COMPASS Pre-Algebra 54 or Algebra 26 / ASSET Elementary Algebra 23 or Numerical Skills 41

(A small lookup sketch of these EACT cutoffs appears below.) For scores below the values given here, please consult the Developmental Math Department. It is best to consult with your advisor or faculty in the English or math department(s) if you have questions regarding the appropriate course placement.

You may enroll in the level of a language at which you are competent, based on your prior experience. There is no foreign language placement test. If there are any questions about the appropriate level, you should contact the Foreign Languages Department.
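The EACT column of the table above amounts to a simple threshold lookup. A minimal Python sketch, assuming only those cutoffs (real placement also weighs SAT/COMPASS/ASSET scores, subscores, and prior coursework, so treat this as illustrative):

# Hypothetical helper encoding just the EACT cutoffs listed above.
EACT_CUTOFFS = [             # (minimum EACT score, placement)
    (26, "MATH 200X/205/262X/272X"),   # 26 is also the bar for MATH 108
    (23, "MATH 107X/161X"),
    (22, "MATH 131X"),
    (20, "DEVM 105"),
    (16, "DEVM 060"),
]

def eact_placement(score: int) -> str:
    for cutoff, course in EACT_CUTOFFS:
        if score >= cutoff:
            return course
    return "consult the Developmental Math Department"

print(eact_placement(24))    # -> MATH 107X/161X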
{"url":"http://www.uaf.edu/catalog/catalog_03-04/undergrad/placement.html","timestamp":"2014-04-17T17:26:40Z","content_type":null,"content_length":"9195","record_id":"<urn:uuid:c94cf393-fe28-490d-9cf6-bd3304f7e4f2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2009

newbie: explicit function works, "function object" doesn't
• To: mathgroup at smc.vnet.net
• Subject: [mg96207] newbie: explicit function works, "function object" doesn't
• From: Tom Roche <tlroche at gmail.com>
• Date: Mon, 9 Feb 2009 05:35:29 -0500 (EST)

summary: I've got a function that I can Manipulate, but I can't Manipulate the things to which the function is assigned. I'm assuming this is due to a syntax error.

As advertised by the Subject:, I'm new to Mathematica (specifically M7 running on a linux cluster) so please excuse any lapses in terminology. I'd also appreciate pointers to specific docs rather than just an RTFM. Also feel free to refer to my notebook @

I'm trying initially to model a linearized pendulum

theta''[t] == -omega^2 theta[t] (where omega=g/l)

with the simple initial conditions

soln = DSolve[{linearPendulum, theta'[0] == v0, theta[0] == theta0}, theta[t], t]

I get the solution

theta[t] -> (omega*theta0*Cos(omega*t) + (v0*Sin(omega*t)))/omega

However, when I try to Manipulate[Plot[soln...]], I get ... nothing. Not an error, and I do get the appropriate UI, but I don't get a plot.

From what I've read, it seems Manipulate wants an expression like the RHS of 'soln', which I'll refer to here as a "function." (Please correct my usage as needed.) I got a "function object" with ReplaceAll

happysoln[t_] = theta[t] /. soln[[1]]

similar to a working example I've seen. But Manipulate[Plot[happysoln...]] also fails silently, like the previous. However, when I manipulate the "function" directly

Manipulate[Plot[(omega theta0 Cos[omega t] + v0 Sin[omega t])/omega, {t, -2 Pi, 2 Pi}], {{omega, 1}, 0, 5}, {{theta0, 1}, 0, 5}, {{v0, 1}, 0, 5}]

it works as designed: I get the UI with the plot, which I can twiddle with the sliders. Why can I Manipulate the function, but not the containers of the function (i.e. soln, happysoln)? More importantly, with what syntax can I Manipulate those containers?

TIA, Tom Roche <Tom_Roche at pobox.com>
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Feb/msg00243.html","timestamp":"2014-04-20T03:30:41Z","content_type":null,"content_length":"27270","record_id":"<urn:uuid:570ebd6c-6c6a-4677-9a1c-873f0ae9f75d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: foreach question

From: David Torres <torresd@umich.edu>
To: statalist@hsphsun2.harvard.edu
Subject: RE: st: foreach question
Date: Sun, 22 Aug 2010 17:09:39 -0400

I originally did have the date in long format (see my yet unanswered question of yesterday). I'm using the NLSY97 employment data. I've reshaped wide to long using the string option because I have several jobs per year for several years of data. The year_jobnum is my string variable.

id year_jobnum stjob endjob employerid
1 1997_1 01Jan1996 06July1997 9701
1 1997_2
1 1997_3
1 1998_1 01Jan1996 25June1998 9701
1 1998_2 10June1997 19Aug1997 9801
1 1998_3
2 1997_1 15Sept1996 30June1997 9713
2 1997_2
2 1997_3
2 1998_1 22Jan1997 15July1998 9820
2 1998_2
2 1998_3
3 1997_1
3 1997_2
3 1997_3
3 1998_1 11Oct1997 30July1998 9816
3 1998_2 30May1997 25Aug1997 9846
3 1998_3

I hope this is more challenging for you, sir. And, again, I appreciate your assistance. While currently the data are ordered in each year according to the end date of the job (the most recent or current employer is listed in slot 1), I want to reorder according to the begin job date. This seems the easiest way to backfill data for missing years, since jobs held since the date of last interview are asked about in all rounds, meaning that income information can be gathered for missed years as long as a job was worked.

Quoting Martin Weiss <martin.weiss1@gmx.de>:

This new info makes my -reshape- solution more attractive. I seriously doubt that you want to work in wide format with this kind of data. Take a look at the long data, and see whether it facilitates your analysis:

inp str10(Startjob1 Startjob2 Endjob1 Endjob2)
01Jan1997 05June1997 24Dec1997 02Sept1997
end
gen byte id=1
reshape long Startjob Endjob, i(id) j(number)
drop id
list, noo

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of David Torres
Sent: Sonntag, 22. August 2010 22:39
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: foreach question

To complicate things a bit more, what if date1 and date2 and date3 and date4 represent, respectively, begin and end dates for employment?

Startjob1 Startjob2 Endjob1 Endjob2
01Jan1997 05June1997 24Dec1997 02Sept1997

Is there a way to sort both together so the correct start and end dates remain together? The data I am using are currently sorted on endjob dates, such that the most recent employment (i.e., at time of interview) is listed in the job#1 slot. As you can see, however, sometimes a start date may precede another start date, while the end dates are actually reversed. Thanks in advance.

Quoting Eric Booth <ebooth@ppri.tamu.edu>:

I think this is what you are asking:

inp str12(Date1 Date2 Date3 Date4)
23July1997 01Jan1997 12Sept1997 03Feb1997
05July1997 04July1997 03July1997 02July1997
end
ds
foreach v in `r(varlist)' {
g `v'2 = date(`v', "DMY")
format `v'2 %td
drop `v'
rename `v'2 `v'
}
g id = _n
reshape long Date, i(id) j(n)
sort Date
g new = _n
drop n id
g id = 1
reshape wide Date, i(id) j(new)
drop id
list, noo

~ Eric

Eric A. Booth
Public Policy Research Institute
Texas A&M University
Office: +979.845.6754

On Aug 22, 2010, at 2:54 PM, David Torres wrote:

Is there a way to sort or reorder dates using the foreach command--perhaps using the egen function in the loop or something? How do I get:

Date1 Date2 Date3 Date4
23July1997 01Jan1997 12Sept1997 03Feb1997

to be ordered thusly:

Date1 Date2 Date3 Date4
01Jan1997 03Feb1997 23July1997 12Sept1997

David Diego Torres

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
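For readers outside Stata, the reshape-long / sort / reshape-wide trick amounts to sorting the (start, end) pairs as units rather than sorting each column separately. A minimal Python sketch of the same idea, using the dates from the example row above (the variable names are mine):

from datetime import date

# Start/end pairs from the example row above, already parsed into dates.
jobs = [
    (date(1997, 1, 1), date(1997, 12, 24)),   # Startjob1 / Endjob1
    (date(1997, 6, 5), date(1997, 9, 2)),     # Startjob2 / Endjob2
]

# Sorting the pairs (not the columns separately) keeps each start date
# attached to its own end date -- the point of the reshape-long approach.
for start, end in sorted(jobs, key=lambda pair: pair[0]):
    print(start.strftime("%d%b%Y"), "->", end.strftime("%d%b%Y"))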
{"url":"http://www.stata.com/statalist/archive/2010-08/msg01088.html","timestamp":"2014-04-20T16:12:11Z","content_type":null,"content_length":"14403","record_id":"<urn:uuid:1fb202ee-e7df-4180-b9c0-1f3d9d36d6e6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
"How to learn math and physics" - the title is deliberately provocative. Everyone has to learn their own way. I don't know how you should learn math and physics. But presumably you came here looking for advice, so I'll give you some. My advice is aimed at people who are interested in fundamental theoretical physics and the math that goes along with that. (By "fundamental" physics I mean the search for the basic laws concerning matter and the forces of nature.) If you want to do experiments instead of theory, or other kinds physics like condensed matter physics and astrophysics, or math that has nothing to do with physics, my advice will be of limited use. You should still learn the basics I mention here, but after that you'll have to look elsewhere for suggestions. Learning math and physics takes a whole lifetime. Luckily, it's a lot of fun... if you have a reasonably patient attitude. A lot of people read pop books about quantum mechanics, black holes, or Gödel's theorem, and immediately want to study those subjects. Without the necessary background, they soon become frustrated - or worse, flaky. It can be even more dangerous if you want to plunge into grand unified theories, or superstrings, or M-theory. Nobody knows if these theories are true! And it's hard to evaluate their claims until you know what people do know. So, especially when it comes to physics, I urge you to start with slightly less glamorous stuff that we know to be true - at least as a useful approximation, that is - and then, with a solid background, gradually work your way up to the frontiers of knowledge. Even if you give up at some point, you'll have learned something worthwhile. This webpage doesn't have lots of links to websites. Websites just don't have the sort of in-depth material you need to learn technical subjects like advanced math and physics - at least, not yet. To learn this stuff, you need to read lots of books. I will list some of my favorites below, and also some you can get free online. But, you can't learn math and physics just by reading books! You have to do lots of calculations yourself - or experiments, if you want to do experimental physics. Textbooks are full of homework problems, and it's good to do these. It's also important to make up your own research topics and work on those. If you can afford it, there's really nothing better than taking courses in math and physics. The advantage of courses is that you get to hear lectures, meet students and professors, and do some things you otherwise wouldn't - like work your butt off. It's also crucial to ask people questions and explain things to people - both of these are great ways to learn stuff. Nothing beats sitting in a cafe with a friend, notebooks open, and working together on a regular basis. Two minds are more than twice as good as one! But if you can't find a friend in your town, there are different ways to talk to people online. In all cases, it's good to spend some time quietly getting to know the local customs before plunging in and talking. For example, trying to start a rambling discussion on a question-and-answer website is no good. Here are some options: There are also lots of interesting blogs, and lots of free math books online. Finally, it's crucial to admit you're wrong when you screw up. We all make tons of mistakes when we're learning stuff. If you don't admit this, you will gradually turn into a crackpot who clutches on to a stupid theory even when everyone else in the world can see that it's wrong. 
It's a tragic fate, because you can't even see it's happening. Even bigshot professors at good universities can become crackpots once they stop admitting their mistakes. To avoid looking like a fool, it's really good to get into the habit of making it clear whether you know something for sure, or are just guessing. It's not so bad to be wrong if you said right from the start that you weren't sure. But if you act confident and turn out to be wrong, you look dumb. In short: stay humble, keep studying, and you'll keep making progress. Don't give up - the fun is in the process.
{"url":"http://math.ucr.edu/home/baez/books.html","timestamp":"2014-04-21T04:33:35Z","content_type":null,"content_length":"32743","record_id":"<urn:uuid:fd61aadc-47c5-407d-8f71-8cfd0e76afe3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimal Word Problem
A decimal is a fraction expressed with the use of a point. The point shows that the digits to the right of it represent a value of less than one; more specifically, each place to the right of the point is worth one tenth of the place to its left.
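For example, expanding one decimal by place value:

\[ 3.47 = 3 + \frac{4}{10} + \frac{7}{100} \]

Here the 4 occupies the tenths place and the 7 the hundredths place; each place is worth one tenth of the place before it.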
{"url":"http://www.mathworksheetscenter.com/mathskills/wordproblems/DecimalWordProblems/","timestamp":"2014-04-21T01:58:54Z","content_type":null,"content_length":"14113","record_id":"<urn:uuid:f1cbf357-36e4-428d-8aa9-9cfcc3cc9ae5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Colour metric

A personal note
Colour perception is a difficult and little understood problem, which seems to defy even the most ingenious mathematical expressions. When researching the implementation of colour quantization algorithms, I stumbled more than once on theoretical discussions that were difficult to follow, and sometimes nearly impossible to verify (or to reject, for that matter). I am an engineer, not a scientist, and the adage for engineers is "through measurement to knowledge". The conclusions drawn in this paper are based on simple tests with real people.

Somewhere in the paper I state that evaluating colour differences is subjective: when asked to pick the "closest" match for a specific colour from a small palette, the selections by the test persons differ. On the one hand, you may claim that the test setup must have been flawed. A good test should rule out subjectivity. On the other hand, you should also recognize that about one out of eight men is more or less colour-blind. As it turned out, individuals with a (slight) green-red deficiency were amongst my test persons, and their results count as much as everyone else's.

Once, I asked on UseNet about the "equivalent gray level" of a colour. The replies were "30% red + 59% green + 11% blue", without exception, and without hesitation or a critical note. I started a simple paint program and drew two rectangles: a blue one with RGB values (0, 0, 255) and a green one with RGB values (0, 48, 0). According to the formula that everyone who was kind enough to respond to me had given, these rectangles should have the same brightness, but they clearly had not. Foley [et al.] implies (p. 589) that the equation applies to linear RGB, which is contradicted in Poynton's "colorspace-faq". After gamma correction (the green cube gets RGB (0, 131, 0), the blue cube stays at RGB (0, 0, 255), for a gamma of 2.5), the result was still debatable. I wondered whether anyone had done the same, simple, 10-second test before asserting the 30%-59%-11% rule. For me, this rule is how NTSC encodes luma/chroma channels, not how the human eye perceives brightness. I have difficulty believing that the weighting was correct for the phosphors that were used at the time that NTSC was carved in stone, because the phosphors used in contemporary computer monitors have not changed that much from those used in the early days.

I concluded a paper on the Microsoft Windows Palette Manager by insisting that you should experiment and verify your assumptions. I would like to conclude this note in the same spirit: do not take my word for granted, nor that of anyone else. Experiment! Compare! Through measurement to knowledge (H. Kamerlingh Onnes, 1882).

• When you map a true colour (photographic) image to a reduced palette (say, 256 colours), every true-colour pixel must be mapped to the palette entry that comes closest to the original colour.
• Vector Quantization is a lossy compression technique that, when applied to images, quantizes multiple pixels at a time. It is thereby an extension of the straight colour quantization (or palette mapping) described in the previous point, and it has the same quality criteria.
• Other lossy image compression techniques use a "quality criterion" —the difference between the original colour and the quantized colour should remain below some threshold. This requires that you can determine the difference between pictures, starting with the difference between pixels.

Thus the questions "what is the closest colour?"
and "how does one measure the distance between colours?" become relevant. This paper evaluates several common metrics of colour distance and presents new formula that is both simple and produces good results. If we can map a colour in an (abstract) orthogonal three-dimensional space, the distance between two colours is \( \left\| C_1 - C_2 \right\| \), where \( \left\| ... \right\| \) denotes the Euclidean distance. For a three-dimensional space (with dimensions R, G and B) the Euclidean distance between two points is calculated as follows: \[ \left\| {C_1 - C_2 } \right\| = \sqrt {(C_{1,R} - C_{2,R} )^2 + (C_{1,G} - C_{2,G} )^2 + (C_{1,B} - C_{2,B} )^2 } \] Graphic applications for the PC usually employ the Red-Green-Blue (RGB) colour space. This model maps well to the way the common Cathode Ray Tube (CRT) display works. These displays have three kinds of phosphors that emit red, green or blue light when they are struck by an electron beam. Another advantage of the RGB model is that it is a three-dimensional orthogonal space, precisely what we need for the Euclidean distance function. In this paper, I will abbreviate the term \( C_{1,R} - C_{2,R} \) to \( \Delta R \), and similarly for the green and blue components. The problem with RGB is that it is not always the easiest model to use (that is why printers usually use the CMYK model), and more fundamentally, it does not model the way in which we (humans) perceive colour. Specifically, colour perception is non-linear and not exactly orthogonal. The standard colour model is CIE XYZ colour space. All other models can be interpreted as different mappings or subsets of the CIE XYZ colour space. CIE is a complex model and, most of all, although it defines the space of all colours that we can distinguish, it does not provide a perceptually uniform colour space. Thus, the distance between two points in the CIE XYZ space has no relation to the relative closeness of these colours. After ten years of debate, the CIE could not reach agreement on a definition of a perceptual uniform colour space, and therefore applied its stamp of approval on two competing perceptual uniform colour models: CIE L^*a^*b^* and CIE L^*u^*v^* —but they, too, are regarded today as inadequate models for the perception of colour [Granger, 1994]. The CIE has more or less acknowledged this itself by taking a proposed modification by D. Alman into recommendation [Alman, 1993]. A specific flaw that several studies have identified is that CIE L^*a^*b^* progressively overemphasizes colour differences as the chroma of the colours increases. Both [Foley, 1990] and [Poynton, 1999] give formulae to convert from RGB to CIE XYZ and from there to CIE L^*u^*v^*. Alman's modification is a bit harder to come by, unless you can find the magazine. However, there is an easy improvement of CIE L^*u^*v^*, and one that has been suggested by several independent sources ([Granger, 1994], [Nemcsics, 1993]) is that the brightness L^* is proportional to \( \sqrt{Y} \) rather than to \( \sqrt[3]{Y} \) (square root instead of cube root). According to reports, Nemcsics came to this conclusion after experiments with 2500 observers in a "complex environment" —an environment where the eyes of the observers are not fully adapted to the luminance level. The video and television industry researched the issue of perceptually uniform colour models, to achieve high quality compression. Video channels have limited bandwidth; compactly coding the luminance and chrominance information is a requirement. 
But when you decide to throw some of the data away, you will wish to do so while retaining the best visual quality. Two well-known models that the video industry developed are "YIQ" and "YUV". YUV is used by the Betamax standard and PAL and SECAM (European television), as well as by a few computer graphic formats. NTSC (American television) uses YIQ; basically, YIQ is YUV with scaling factors optimized for a reduced bandwidth. For completeness, below are the matrices for transforming from gamma corrected R'G'B' to YUV and YIQ:

\[ \left[ {\matrix{ Y \cr U \cr V \cr } } \right] = \left[ {\matrix{ {0.299} & {0.587} & {0.114} \cr { - 0.147} & { - 0.289} & {0.463} \cr {0.615} & { - 0.515} & { - 0.100} \cr } } \right]\left[ {\matrix{ {R'} \cr {G'} \cr {B'} \cr } } \right] \] R'G'B' to YUV

\[ \left[ {\matrix{ Y \cr I \cr Q \cr } } \right] = \left[ {\matrix{ {0.299} & {0.587} & {0.114} \cr {0.595} & { - 0.274} & { - 0.322} \cr {0.211} & { - 0.523} & {0.312} \cr } } \right] \left[ {\matrix{ {R'} \cr {G'} \cr {B'} \cr } } \right] \] R'G'B' to YIQ

By the way, the "Y" of YUV and YIQ is the gamma corrected "brightness" component of the colour; the "Y" of the CIE XYZ model is linear (= uncorrected) brightness. Both are related, but they are not the same.

Other colour models have been developed with the goals of being easy to compute and making better use of the features and limitations of the human visual system. The document by E.M. Granger mentions the Guth ADT colour space, and Charles A. Poynton wrote in his "colorspace-faq": "Although it was not specifically optimized for this purpose, the non-linear R'G'B' coding [...] is quite perceptually uniform."
One person may find the proper brightness more important than the proper hue, others feel that the replacement colour should approximate the hue and saturation as best as possible. Therefore, you will not find a single formula that suits everyone. • Non-linear R'G'B' is only fair. In many cases, it selects colours that are too dark or too blue. • YUV is always better then non-linear R'G'B', but it is far from perfect. • CIE L^*u^*v^* makes many excellent choices, but in a few situations, it makes unacceptable errors. The modified CIE L^*u^*v^* from [Granger] performs much better (the document by E.M. Granger suggested to use a square root for the L^*, instead of the standardized cube root, [Nemcsics, 1993] comes to the same conclusion). But even with the modified lightness curve, L^*u^*v^* does not perform well in the range of pink colours (skin colours of the Caucasian race). • Several individuals suggested a weighted Euclidean distance in R'G'B', according to the formula: \[ \left\| {\Delta C} \right\| = \sqrt {3 \times \Delta R'^2 + 4 \times \Delta G'^2 + 2 \times \ Delta B'^2 } \] This function has practically the same result as YUV. Its simplicity and speed of calculation make it a better choice than YUV. • As explained in the section "gamma correction" below, the perception of brightness by the human eye is non-linear. From the experiments it appears that the curve for this non-linearity is not the same for each colour. The weighted Euclidean distance presented works quite well for the subset of colours where the "red" signal is 128 or more (on a scale of 0-255). For the other half of the full R'G'B' cube, this different weighting produced better results: \[ \left\| {\Delta C} \right\| = \sqrt {2 \times \Delta R'^2 + 4 \times \Delta G'^2 + 3 \times \Delta B'^2 } \] Although blue has a small contribution (about 10%) to the sensation of brightness, human vision has an extraordinarily good colour discrimination capability in blue colours [Poynton, 1999]. This might explain why colours with a large "blue" contribution need a different weighting than colours with little blue. A low-cost approximation The proposed algorithm (used by our products EGI, AniSprite and PaletteMaker) is a combination both weighted Euclidean distance functions, where the weight factors depend on how big the "red" component of the colour is. First one calculates the mean level of "red" and then weights the \( \Delta R' \) and \( \Delta B' \) signals as a function of the mean red level. The distance between colours C[1] and C[2] (where each of the red, green and blue channels has a range of 0-255) is: \[ \eqalign{ & \bar r = {{C_{1,R} + C_{2,R} } \over 2} \cr & \Delta R = C_{1,R} - C_{2,R} \cr & \Delta G = C_{1,G} - C_{2,G} \cr & \Delta B = C_{1,B} - C_{2,B} \cr & \Delta C = \sqrt {\left( {2 + {{\bar r} \over {256}}} \right) \times \Delta R^2 + 4 \times \Delta G^2 + \left( {2 + {{255 - \bar r} \ over {256}}} \right) \times \Delta B^2 } \cr} \] This formula has results that are very close to L^*u^*v^* (with the modified lightness curve) and, more importantly, it is a more stable algorithm: it does not have a range of colours where it suddenly gives far from optimal results. The weights of the formula could be optimized further, but again, the selection of the closest colour is subjective. 
My goal was to find a reasonable approximation.

A C code snippet for the colour distance function:

#include <math.h>   /* for sqrt() */

typedef struct {
    unsigned char r, g, b;
} RGB;

/* Weighted Euclidean distance between two colours; the red and blue
   weights slide with the mean red level, as derived above. */
double ColourDistance(RGB e1, RGB e2)
{
    long rmean = ( (long)e1.r + (long)e2.r ) / 2;
    long r = (long)e1.r - (long)e2.r;
    long g = (long)e1.g - (long)e2.g;
    long b = (long)e1.b - (long)e2.b;
    return sqrt((((512+rmean)*r*r)>>8) + 4*g*g + (((767-rmean)*b*b)>>8));
}

Gamma correction
You may have noticed that the above paragraphs mention both RGB and non-linear R'G'B' colour spaces. The linear and non-linear RGB spaces are related through gamma-correction. The area of gamma correction is confusing, because the same term is used to describe several entirely different (but related) phenomena:
• Within a certain range, the non-linearity of the human eye matches a power function. The \( 1 / \gamma \) fraction represents this power function, and its value is somewhere between 1/2 and 1/3 (the value depends, amongst others, on the viewing conditions —surround light, for example). As a compromise, much literature nowadays assumes a value between 1/2.2 and 1/2.5.
• The non-linearity between the voltage applied to a grid of a CRT and the resulting lightness of the phosphor is a power function whose value is close to 2.5. Incidentally, the non-linearity of CRT displays is a function of the electrostatics of the cathode and the grid of an electron gun. The phosphors themselves are quite linear, at least until an intensity of about 75% where saturation starts to set in. The recurring theme in talks and newsgroups that computer displays vary widely in gamma is almost always due to bad adjustment of the monitor (the "black-level" error) and the display's reflection of ambient light.
• Video (television) and computer imaging copied the non-linearity, and the gamma symbol, from photography. The camera film is not linear, but it can be approximated (within a certain range) with a power function.

Common computer displays have a non-linear relation between the light intensity (I) output by a phosphor and the voltage (V) that is applied to the grid of the CRT. The non-linearity of a CRT is usually defined as:
\[ I = k \times V^\gamma \]
for constants k and \( \gamma \). The value of \( \gamma \) is approximately 2.5 for all CRT displays; k may be assumed 1.0 for purpose of illustration. Note that this formula does not take the black-level error into account. When black becomes dark gray (the essence of the black-level error), the perceived error in gamma changes dramatically. Since computer monitors are often badly adjusted, a better approximation of the gamma correction formula would be to fix the gamma at 2.5 and to make the black-level error explicit:
\[ I = k \times (V + \varepsilon )^{2.5} \]
To put these formulae into a more "visual" perspective: when a typical computer monitor has "contrast" and "brightness" knobs, the "contrast" knob sets the k variable and the "brightness" setting adjusts the \( \varepsilon \) variable.

The transformation of the linear RGB colour space to the non-linear R'G'B' space is known as gamma correction. The red, green and blue channels of a colour each go through the formula:
\[ S' = S^{1 / \gamma} \]
where S is the source signal (colour component) and S' is the corrected source; both S and S' are in the range 0.0 to 1.0. In practice, the palettes of 256-colour images that are to be displayed on a computer monitor have already been "corrected" for this non-linearity so that the image shows the correct colours without further processing.
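As a concrete check of the \( S' = S^{1/\gamma} \) formula, here is a small Python sketch (my own, not from the article) that gamma-corrects a single 0-255 channel; it reproduces the green value 131 quoted in the personal note above.

def gamma_correct(channel: int, gamma: float = 2.5) -> int:
    """Apply S' = S**(1/gamma) to one 0-255 colour channel."""
    s = channel / 255.0                    # normalise to 0.0 .. 1.0
    return round(255.0 * s ** (1.0 / gamma))

# Linear green 48 gamma-corrects to roughly 131 on a gamma-2.5 display,
# matching the (0, 131, 0) value mentioned in the personal note.
print(gamma_correct(48))    # -> 131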
See Charles Poynton's FAQ and article for more fascinating reading on these topics, as well as the "proper" formula (standardized for video and HDTV) of gamma correction. When your pictures or screen lay-outs must look right on a typical computer display in a typical office, however, you need to "tweak" and tune the brightness levels, rather than rigidly applying formulae: the black-level error and the reflection of ambient light make the issue of gamma correction on computer displays quite senseless.

Alman, D.H.; "Industrial color difference evaluation"; Color Research and Application; No. 18 (1993); pp. 137-139.
Foley, J.D., A. van Dam, S.K. Feiner, J.F. Hughes; "Computer Graphics, Principles and Practice"; second edition; Addison-Wesley; 1990; pp. 574-600. An overview of many colour models, with the focus on how they relate to each other and to the CIE XYZ model. On page 589, the book says: "The Y component of YIQ [...] is defined to be the same as the CIE Y primary". As the CIE Y (luminance) is linear, this implies that the Y channel of YIQ (video luma) is linear as well. This is explicitly contradicted by Poynton's colorspace-faq, item
Granger, E.M.; "Is CIE L*a*b* Good Enough for Desktop Publishing?"; technical report, Light Source Inc.; 1994.
Nemcsics, A.; "Color Dynamics"; Akadémiai Kiadó, Budapest; 1993.
Poynton, Charles A.; "Gamma and its Disguises"; Journal of the Society of Motion Picture and Television Engineers; Vol. 102, No. 12 (December 1993); pp. 1099-1108.
Poynton, Charles A.; "Frequently Asked Questions About Colour" ("colorspace-faq"); 1999. Maintained and available on the Internet in text (ASCII), Postscript and Adobe Acrobat formats.
{"url":"http://www.compuphase.com/cmetric.htm","timestamp":"2014-04-20T14:35:10Z","content_type":null,"content_length":"28098","record_id":"<urn:uuid:bcd7322e-48c3-4b41-a3bc-84dbe0ff23a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Schiller Park Math Tutor Find a Schiller Park Math Tutor ...I received a 31 on the ACT math section. To help students maximize their scores on the ACT Science test, I help students enhance their critical thinking and interpretation skills. My approach helps them understand how to evaluate each question and use logic to determine the best answer. 11 Subjects: including ACT Math, SAT math, writing, GMAT ...With directed practice, a student can definitely improve his/her test results in a reasonable amount of time. My methods have proven to be very successful. I have a Masters degree in applied mathematics and most coursework for a doctorate. 18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel ...My goal is to help the student learn the subject they need to master as thoroughly as possible. Because of this, I incorporate movies, youtube clips, images, models, and examples of my own work to help the student grasp the idea or concept that I am trying to teach. Besides teaching concepts, r... 15 Subjects: including trigonometry, algebra 1, reading, chemistry Hello. My name is Michal. I am currently a student at University of Illinois at Urbana-Champaign. 5 Subjects: including algebra 2, calculus, precalculus, Polish ...Recently, I conducted biomedical research at an Ivy League university in New York. I am currently a faculty member at a local university in Chicago. I am also patient, outgoing, and friendly. 29 Subjects: including algebra 1, algebra 2, trigonometry, geometry
{"url":"http://www.purplemath.com/schiller_park_il_math_tutors.php","timestamp":"2014-04-18T15:46:19Z","content_type":null,"content_length":"23598","record_id":"<urn:uuid:50598edf-50cb-41bc-8c0f-14b50f9f2bbd>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Prime Numbers in Linear Patterns
From Math Images

Prime numbers marked in a table with 180 columns
Field: Algebra
Created By: Iris Yoon

Basic Description
Arranging natural numbers in a particular way and marking the prime numbers can lead to interesting patterns. For example, consider a table with 180 columns and infinitely many rows. Write positive integers in increasing order from left to right, and top to bottom. If we mark all the prime numbers, we get a pattern shown in the figure. We can see that prime numbers show patterns of vertical line segments.

A More Mathematical Explanation
Instead of studying a table with 180 columns, we will study a table with 30 columns, as shown in Image 1. First, create a table with 30 columns and sufficiently many rows. Write all positive integers starting from 1 as one moves from left to right, and top to bottom. Then, each row will start with a multiple of 30 added by 1, such as 1, 31, 61, 91, 121, ... . If we mark the prime numbers in this table, we get Image 2.

Theorem 1
All prime numbers appear on columns that have a 1 or a prime number in their top row. In other words, for every prime number $p$, either $p \equiv 1 \pmod{30}$, or there exists a prime number $q$ less than 30 such that $p \equiv q \pmod{30}$.

Proof. Given any prime number $p$, assume that $p$ is neither congruent to 1 (mod 30) nor to $q$ (mod 30) for every prime $q$ less than 30. Then $p$ is congruent to $x$ (mod 30), where $x$ is some integer less than 30 that is not 1 and not a prime. The prime factorization of $x$ must contain one of 2, 3, and 5. (If the prime factorization of $x$ did not contain any of 2, 3, or 5, then the smallest possible value of $x$ would be $7 \cdot 7 = 49$, which is greater than 30.) Thus, $x = 2^a 3^b 5^c$, where $a, b, c \geq 0$, and at least one of $a, b, c$ is greater than 0. Since $p$ is congruent to $x$, we can write $p$ as $p = 30n + 2^a 3^b 5^c$, where $n$ is an integer greater than or equal to 1. Then, $p = 30n + 2^a 3^b 5^c = (2 \cdot 3 \cdot 5)n + 2^a 3^b 5^c$. $p$ is then equal to one of $2(3 \cdot 5 \cdot n + 2^{a-1} 3^b 5^c)$ or $3(2 \cdot 5 \cdot n + 2^a 3^{b-1} 5^c)$ or $5(2 \cdot 3 \cdot n + 2^a 3^b 5^{c-1})$, which contradicts $p$ being a prime number. Thus, $p \equiv 1 \pmod{30}$ or $p \equiv q \pmod{30}$.

However, the statement does not generalize to other integer modulo groups. For instance, when we look at a table with 60 columns, 49 is not a prime number, but the column containing 49 will contain other prime numbers, such as 109. Moreover, not all integers that are congruent to 1 (mod 30) or $q$ (mod 30), where $q$ is a prime number less than 30, are prime numbers. For instance, 49, which is congruent to 19 (mod 30), is not a prime number, but 49 still appears on the same column as 19.

Let's call the columns that have a 1 or a prime number in their top row prime-concentrated columns. One can observe that all composite numbers that appear on these prime-concentrated columns, say 49, 77, 91, 119, 121, 133, 143, 161, 169, ..., have prime factors that are greater than or equal to 7. In other words, these composite numbers do not have 2, 3, or 5 as a prime factor.
Theorem 2
Composite numbers that appear on prime-concentrated columns do not have 2, 3, or 5 as a prime factor.

Proof. Let $x$ be a composite number that appears on a prime-concentrated column, and assume that $x$ has at least one of 2, 3, or 5 as a prime factor. Since $x$ appears on the prime-concentrated column, $x$ can be written as $x = 30n + k$, where $n$ is a positive integer and $k$ is a prime number smaller than 30. If $x$ had 2 as a prime factor, $k$ must also have 2 as a factor because 30 has 2 as a factor. This contradicts the fact that $k$ is a prime number. The same argument works for the case when $x$ has 3 or 5 as a prime factor.

Another pattern to notice is that the prime-concentrated columns seem symmetric about the column that contains 15, which leads to the following observation.

Theorem 3
If $p$ is a prime number less than 30 and if $p$ is not equal to 2, 3, or 5, then $30 - p$ is a prime number.

Proof. Let $p$ be a prime number less than 30 that is not equal to 2, 3, or 5. Let $q = 30 - p$. If $q$ were not a prime, then $q$ must have 2, 3, or 5 as a prime factor, since a composite number less than 30 has a prime factor at most $\sqrt{30} < 7$. Since $p = 30 - q$, $p$ will also be divisible by 2, 3, or 5, contradicting our condition that $p$ is a prime number.

Such an observation triggers one's interest to see whether the above statement is true for any multiple of 30, or for any number that is a product of the first few primes. However, this turns out not to be the case.

Future Directions for this Page
d) There seem to be infinitely many primes in each congruence class (i.e., there are infinitely many primes in each prime-concentrated column). This is what Dirichlet's theorem on arithmetic progressions says.
e) Would it be possible to generalize the above statements to any subgroup of the integers modded by the product of the first n primes? I.e., can we generalize the above statements to the case where we create a table with more columns?
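Both congruence observations are easy to check by brute force. Here is a minimal Python sketch of my own (the trial-division primality test keeps it self-contained):

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Columns that can contain primes: residue 1 and the primes below 30.
prime_columns = {1} | {q for q in range(2, 30) if is_prime(q)}

# Theorem 1: every prime sits in one of these columns (mod 30).
assert all(p % 30 in prime_columns
           for p in range(2, 10_000) if is_prime(p))

# Theorem 2 flavour: composites in those columns avoid the factors 2, 3, 5.
for x in (49, 77, 91, 119, 121, 133, 143):
    assert x % 30 in prime_columns and all(x % f != 0 for f in (2, 3, 5))

print("Theorems 1 and 2 hold on this range")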
{"url":"http://mathforum.org/mathimages/index.php?title=Prime_Numbers_in_Linear_Patterns&curid=4882&diff=35068&oldid=35067","timestamp":"2014-04-20T19:33:57Z","content_type":null,"content_length":"36317","record_id":"<urn:uuid:d7108c71-6880-49f3-82de-493cffa45e47>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Function that would bounce off 2 sides of a pool table?

How would you create a function that would bounce off of 2 sides of a pool table, like this? (The original post included a diagram of the shot on a pool table.)

Imagine the coordinate axes on this pool table. We can make the lower-left pocket the origin. Thus, to create the desired function you must create a piecewise function consisting of three lines:

for the first, you want a line segment connecting the origin to the point (8, 2)
for the second, you want a line segment connecting the point (8, 2) to the point (1, 4)
for the third, you want a line segment connecting the point (1, 4) to the point (0.5, 3.5)

can you continue? (of course, if you wish, you could make the lower-center pocket the origin, whatever)

I think I can create a formula for you. Then you can use it to create a computer program.

What makes you think he was being sarcastic? There is no function for what you are looking for. What you need is a program that will take care of what the cue is doing when it runs into a

His answer and offer seem fairly direct to me.
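A sketch of the suggested piecewise path, using the cushion points from the answer above (Python is my choice here; any language or a graphing calculator would do). The parameter t walks along the three segments in order:

# The points from the answer above, lower-left pocket as origin.
points = [(0, 0), (8, 2), (1, 4), (0.5, 3.5)]

def path(t: float):
    """Piecewise-linear position along the three segments, t in [0, 3]:
    t in [0,1] is segment 1, [1,2] segment 2, [2,3] segment 3."""
    i = min(int(t), len(points) - 2)      # which segment we are on
    (x0, y0), (x1, y1) = points[i], points[i + 1]
    s = t - i                             # fraction along that segment
    return (x0 + s * (x1 - x0), y0 + s * (y1 - y0))

for t in (0, 0.5, 1, 1.5, 2, 3):
    print(t, path(t))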
{"url":"http://mathhelpforum.com/math-topics/20589-function-would-bounce-off-2-sides-pool-table-print.html","timestamp":"2014-04-19T17:29:01Z","content_type":null,"content_length":"8791","record_id":"<urn:uuid:a47fba48-aaad-48c9-a724-2cdc537b3e28>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Balance of forces and torques acting on the nematode body B. Bead models C. Effect of confinement on transverse and longitudinal hydrodynamic forces
{"url":"http://scitation.aip.org/content/aip/journal/pof2/25/8/10.1063/1.4816718","timestamp":"2014-04-20T16:56:13Z","content_type":null,"content_length":"106625","record_id":"<urn:uuid:60b2cae9-ce4a-43fa-884e-ec339d9adb9a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical Connectives

Mathematics works according to the laws of logic, which specify how to make valid deductions. In order to apply the laws of logic to mathematical statements, you need to understand their logical forms.

Proofs are composed of statements. A statement is a declarative sentence that can be either true or false.

Remark. Many real proofs contain things which aren't really statements --- questions, descriptions, and so on. They're there to help to explain things for the reader. When I say "Proofs are composed of statements", I'm referring to the actual mathematical content with the explanatory material removed.

Example. "Calvin Butterball is a math major" is a statement. You'd need to know more about Calvin and math majors to know whether the statement is true or false.

"Do you have a pork barbecue sandwich?" is not a statement --- it's a question. Likewise, "Eat your vegetables!" is not a statement --- it's an imperative sentence, i.e. an order to do something.

To see that "$1 + 1 = 2$" is a statement, read it and see if it's a complete declarative sentence which is either true or false. This statement would read (in words): "One plus one equals two." You can see that it's a complete declarative sentence (and it happens to be a true statement about real numbers).

On the other hand, "$1 + 1$" is not a statement. It would be read "One plus one", which is not a sentence since it doesn't have a verb. (Things like "$1 + 1$" are called terms or expressions.) Since proofs are composed of statements, you should never have isolated phrases (like "$1 + 1$") in your proofs.

In terms of logical form, statements are built from simpler statements using logical connectives. The basic connectives of sentential logic are:

(a) Negation ("not"), denoted $\lnot$.
(b) Conjunction ("and"), denoted $\land$.
(c) Disjunction ("or"), denoted $\lor$.
(d) Conditional ("if-then" or "implication"), denoted $\to$.
(e) Biconditional ("if and only if" or "double implication"), denoted $\leftrightarrow$.

Later I'll discuss the quantifiers "for all" (denoted $\forall$) and "there exists" (denoted $\exists$).

Remark. You may see different symbols used by other people. For example, some people use $\sim$ for negation, $\&$ for conjunction, and $\supset$ for the conditional.

Example. Represent the following statements using logical connectives.

(a) P or not Q: $P \lor \lnot Q$.
(b) If P and R, then Q: $(P \land R) \to Q$.
(c) P if and only if (Q and R): $P \leftrightarrow (Q \land R)$.
(d) Not P and not Q: $\lnot P \land \lnot Q$.
(e) It is not the case that if P, then Q: $\lnot(P \to Q)$.
(f) If P and Q, then R or S: $(P \land Q) \to (R \lor S)$.

Other words or phrases may occur in statements. Here's a list of some of them and how they are translated:

"P but Q" translates as "P and Q".
"P whenever Q" translates as "if Q, then P".
"P only if Q" translates as "if P, then Q".

Consider the word "but", for example. If I say "Calvin is here, but Bonzo is there", I mean that Calvin is here and Bonzo is there. My intention is that both of the statements should be true. That is the same as what I mean when I say "Calvin is here and Bonzo is there".

In practice, mathematicians tend to use a small set of phrases over and over. It doesn't make for exciting reading, but it allows the reader to concentrate on the meaning of what is written. For example, a mathematician will usually say "if Q, then P", rather than the logically equivalent "P whenever Q". The second statement is less familiar, and therefore more apt to be misunderstood.

This is a good time to discuss the way the word "or" is used in mathematics. When you say "I'll have dinner at MacDonald's or at Pizza Hut", you probably mean "or" in its exclusive sense: You'll have dinner at MacDonald's or you'll have dinner at Pizza Hut, but not both. The "but not both" is what makes this an exclusive or.

Mathematicians use "or" in the inclusive sense. When "or" is used in this way, "I'll have dinner at MacDonald's or at Pizza Hut" means you'll have dinner at MacDonald's or you'll have dinner at Pizza Hut, or possibly both.
Obviously, I'm not guaranteeing that both will occur; I'm just not ruling it out.

Example. Translate the following statements into logical notation, using the following symbols: S = "The stromboli is hot." L = "The lasagne is cold." P = "The pizza will be delivered."

(a) "The stromboli is hot and the pizza will not be delivered." $S \land \lnot P$
(b) "If the lasagne is cold, then the pizza will be delivered." $L \to P$
(c) "Either the lasagne is cold or the pizza won't be delivered." $L \lor \lnot P$
(d) "If the pizza won't be delivered, then both the stromboli is hot and the lasagne is cold." $\lnot P \to (S \land L)$
(e) "The lasagne isn't cold if and only if the stromboli isn't hot." $\lnot L \leftrightarrow \lnot S$
(f) "The pizza will be delivered only if the lasagne is cold." $P \to L$
(g) "The stromboli is hot and the lasagne isn't cold, but the pizza will be delivered." $S \land \lnot L \land P$

The order of precedence of the logical connectives is:
1. Negation
2. Conjunction
3. Disjunction
4. Implication
5. Double implication

As usual, parentheses override the other precedence rules. In most cases, it's best for the sake of clarity to use parentheses even if they aren't required by the precedence rules. For example, it's better to write $(P \land Q) \lor R$ than $P \land Q \lor R$. Precedence would group P and Q anyway, but the first expression is clearer.

It's not common practice to use parentheses for grouping in ordinary sentences. Therefore, when you're translating logical statements into words, you may need to use certain expressions to indicate grouping:

• The combination "Either ... or ..." is used to indicate that everything between the "either" and the "or" is the first part of the "or" statement.
• The combination "Both ... and ..." is used to indicate that everything between the "both" and the "and" is the first part of the "and" statement.

In some cases, the only way to say something unambiguously is to be a bit wordy. Fortunately, mathematicians find ways to express themselves which are clear, yet avoid excessive linguistic awkwardness.

Example. Suppose that C = "The cheesesteak is good." F = "The french fries are greasy." W = "The wings are spicy." Translate the following logical statements into words (with no logical symbols):

(a) $(\lnot C \land F) \to W$
(b) $\lnot(C \lor W)$
(c) $\lnot(\lnot W \land C)$
(d) $\lnot \lnot F$

(a) "If the cheesesteak isn't good and the french fries are greasy, then the wings are spicy."

(b) If I say "It's not the case that the cheesesteak is good or the wings are spicy", it might not be clear whether the negation applies only to "the cheesesteak is good" or to the disjunction "the cheesesteak is good or the wings are spicy". So it's better to say "It's not the case that either the cheesesteak is good or the wings are spicy", since the "either" implies that "the cheesesteak is good" or "the wings are spicy" are grouped together in the or-statement. In this case, the "either" blocks the negation from applying to "the cheesesteak is good", so the negation has to apply to the whole "or" statement.

(c) "It's not the case that both the wings aren't spicy and the cheesesteak is good." As with the use of the word "either" in (b), I've added the word "both" to indicate that the initial negation applies to the conjunction "the wings aren't spicy and the cheesesteak is good". In this case, the "both" blocks the negation from applying to "the wings aren't spicy", so the negation has to apply to the whole "and" statement.

(d) The literal translation is "It's not the case that the french fries aren't greasy". Or (more awkwardly) you could say "It's not the case that it's not the case that the french fries are greasy". Of course, this is logically the same as saying "The french fries are greasy".
But the question did not ask you to simplify the original statement --- only to translate it, which you should do.

Example. Here are some examples of actual mathematical text.

(a) ([1], Theorem 25.11) In the semi-simple ring R, let … You could express this using logical connectives in the following way. Let

A = "R is a semi-simple ring".
B = "…".
C = "L is a minimal left ideal".
D = "…".

The statement can be translated as $(A \land B) \to \cdots$, a conditional whose antecedent is $A \land B$.

Notice that to determine the logical form, you don't have to know what the words mean!

Mathematicians use the word "let" to introduce hypotheses in the statement of a theorem. From the point of view of logical form, the statements that accompany "let" form the antecedent --- the first part --- of a conditional, as statements A and B do here.

(b) ([2], Proposition 14.11) Let X and Y be … Let

P = "X and Y are …".
Q = "…".
R = "…".

The proposition can then be written in logical notation as $(P \land Q) \to R$.

Notice that you can often translate a statement in different ways. For example, I could have let

A = "X is a …".
B = "Y is a …".
C = "…".
D = "…".

Now the proposition becomes $(A \land B \land C) \to D$.

As the last example shows, logical implications often arise in mathematical statements. Here's some terminology. If $P \to Q$ is a conditional statement, then:

(a) P is the antecedent or hypothesis and Q is the consequent or conclusion.
(b) The converse is the conditional $Q \to P$.
(c) The inverse is the conditional $\lnot P \to \lnot Q$.
(d) The contrapositive is the conditional $\lnot Q \to \lnot P$.

Later on, I'll show that a conditional statement and its contrapositive are logically equivalent.

Example. Find the antecedent and the consequent of the following conditional statement: "If Calvin gets a hot dog, then Calvin doesn't get a soda." Construct the converse, the inverse, and the contrapositive.

The antecedent is "Calvin gets a hot dog" and the consequent is "Calvin doesn't get a soda".

The converse is "If Calvin doesn't get a soda, then Calvin gets a hot dog".

The inverse is "If Calvin doesn't get a hot dog, then Calvin gets a soda". (Note that the literal negation of the consequent is "It is not the case that Calvin doesn't get a soda". But the two negations cancel out --- this is called double negation --- so I get "Calvin gets a soda".)

The contrapositive is "If Calvin gets a soda, then Calvin doesn't get a hot dog".

Different fields use different formats for citing sources. For instance, you may have seen books which referred to sources using footnotes. Mathematicians usually use numbers in square brackets (like "[1]" or "[2]") for citations. The numbers refer to the references, which are listed at the end of the paper or book. Among other things, it makes for less clutter on the text pages, and is easier to typeset. Here are the references which I cited above.

[1] Charles W. Curtis and Irving Reiner, Representation Theory of Finite Groups and Associative Algebras. New York: Interscience Publishers, 1962. [ISBN 0-470-18975-4]
[2] Brayton Gray, Homotopy Theory. New York: Academic Press, 1975. [ISBN 0-12-296050-5]

Send comments about this page to: Bruce.Ikenaga@millersville.edu. Copyright 2009 by Bruce Ikenaga.
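The claim above --- that a conditional and its contrapositive are logically equivalent, while the converse and inverse are not equivalent to the conditional --- can be checked by brute force over all four truth assignments. A small Python sketch of my own:

from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Truth table of the conditional P -> Q."""
    return (not p) or q

for p, q in product((True, False), repeat=2):
    cond   = implies(p, q)              # P -> Q
    contra = implies(not q, not p)      # not Q -> not P
    conv   = implies(q, p)              # Q -> P
    inv    = implies(not p, not q)      # not P -> not Q
    print(p, q, cond, contra, conv, inv)
    assert cond == contra               # contrapositive always agrees
    # the converse and inverse rows differ from cond when p != q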
{"url":"http://www.millersville.edu/~bikenaga/math-proof/logical-connectives/logical-connectives.html","timestamp":"2014-04-19T12:03:23Z","content_type":null,"content_length":"24055","record_id":"<urn:uuid:9eca9526-430d-4527-ba25-2c6e4d2f703c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
The sum of the third and fourth terms in a sequence of consecutive integers is 47. What is the sum of the first five terms of the sequence?

Let us say the integers start from the integer n. Then the sequence is n, (n+1), (n+2), ...

According to the given data, the third and fourth terms give

(n+2) + (n+3) = 47
2n + 5 = 47
n = 21

Sum of the first five terms = 21 + 22 + 23 + 24 + 25 = 115.

So the sum of the first five terms is 115. If we have to find the sum of a large number of terms, we can use the arithmetic progression formula instead of the above method.
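A quick brute-force check of this arithmetic (an illustrative Python sketch, not part of the original answer):

for n in range(100):
    if (n + 2) + (n + 3) == 47:        # third and fourth terms sum to 47
        terms = [n + k for k in range(5)]
        print(n, terms, sum(terms))    # 21 [21, 22, 23, 24, 25] 115
        break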
{"url":"http://www.enotes.com/homework-help/sum-third-fourth-terms-sequence-consecutive-inte-455799","timestamp":"2014-04-18T15:42:18Z","content_type":null,"content_length":"25039","record_id":"<urn:uuid:d52dab60-f8ee-4c7b-9f74-c215c6fc9f17>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
What is parametric measures?

Possible Answer: There are two types of social research data: parametric and non-parametric.

Parametric statistics is a branch of statistics which assumes that the data has come from a type of probability distribution, and makes inferences about the parameters of the distribution. Most well-known elementary statistical methods are parametric.
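As an illustration of the distinction (a hedged sketch, not from the quoted answer), the same two samples can be compared with a parametric test, which assumes normally distributed data, and a non-parametric one, which does not; the sample data below are made up.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 30)   # invented sample data
b = rng.normal(0.5, 1.0, 30)

t_stat, t_p = stats.ttest_ind(a, b)      # parametric: Student's t-test
u_stat, u_p = stats.mannwhitneyu(a, b)   # non-parametric: Mann-Whitney U
print(t_p, u_p)                          # similar conclusions on well-behaved data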
{"url":"http://www.askives.com/what-is-parametric-measures.html","timestamp":"2014-04-17T06:51:27Z","content_type":null,"content_length":"35524","record_id":"<urn:uuid:5921cdea-f8cd-4ece-b17c-766c6de66c5c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Doron Zeilberger Opinion 65: My Two Favorite Pedagogical Principles by Two of my Favorite Mathematicians

By Doron Zeilberger

Written: May 5, 2005.

Truly great mathematicians (and scientists, of course) are also great teachers. Hence it is not surprising that two of my heroes, representing completely different kinds of mathematics, have similar teaching philosophies.

The first one, due to my colleague and hero Israel Gelfand, I will call the Gelfand Principle, which asserts that whenever you state a new concept, definition, or theorem (and better still, right before you do), give the SIMPLEST possible non-trivial example. For example, suppose you want to teach the commutative rule for addition. Then 0+4=4+0 is a bad example, since it also illustrates another rule (that 0 is neutral). Also 1+1=1+1 is a bad example, since it illustrates the rule a=a. But 2+1=1+2 is an excellent example, since you can actually prove it: 2+1=(1+1)+1=1+1+1=1+(1+1)=1+2. It is a much better example than 987+1989=1989+987.

The Gelfand Principle should also be used in research articles. It is much easier to follow a new definition or theorem after a simple example is first given. Even proofs would be easier to follow if they are first spelled out concretely for a special case.

The other principle is closely related but of an even greater scope. It is due to another hero of mine, Bruno Buchberger, of Grobner bases fame, the pioneering giant of Computer Algebra. Buchberger, like Gelfand, is also very much involved in pedagogy, and his brainchild THEOREMA, developed with his RISC-Linz team, is a great computer-assisted learning tool for humans. Many traditional mathematicians dislike computers, and even amongst those that don't mind them, most want to ban them from the classroom. For example, counter-calculus-reformists George Andrews and Dick Askey want to completely banish even calculators from the curriculum, fearing that students who rely too heavily on calculators and computers will lose the basic feel for numbers. Bruno Buchberger agrees that they do have a point. Hence Buchberger introduced the White-Box Black-Box Principle, asserting, like Solomon, that there is a time for everything under Heaven. There is a time to see the details, and work out, with full details, using pencil and paper, a simple non-trivial example. But there is also a time to not see the details. Having mastered the algorithm or concept or whatever, the student (and researcher!) should be allowed to use the computer for other examples.

I would insert another stage, that could be called "Gray Box", which consists of working out a more complicated example interactively, using Maple or Mathematica as a calculator, but working out all the steps. Finally, I ask the students to program the algorithm themselves, which is a much better way of having them internalize it than having them do, by rote, many complicated examples by hand. Then, once they have programmed it, they are allowed to use the built-in implementation, if it exists.

I applied this principle a few weeks ago when I taught the Buchberger algorithm to my Algorithmic Discrete Math class. First, they had to, during class, compute the Grobner basis of {x+y, x^2+y^2}, all by hand! Then, as homework, also by hand (no cheating please!), they were asked to find the Grobner basis of {x+y+z, x^2+y^2+z^2, x^3+y^3+z^3}. Then, they were asked, using Maple, to perform the same calculations step by step, computing S-polynomials and reducing.
Then they were encouraged to try and program their own version, even though it is unlikely to be as efficient as the built-in Maple command. Finally, for ever after, they were given permission to use the built-in command gbasis, and use it as a complete Black Box, without worrying about the details, enabling them to move on to new problems.

Seeing all the details (which nowadays can, and should, be easily relegated to the computer), even if they are extremely hairy, is a hang-up that traditional mathematicians should learn to wean themselves from. A case in point is the excellent but unnecessarily long-winded recent article (Adv. Appl. Math. 34 (2005) 709-739) by George Andrews, Peter Paule, and Carsten Schneider. It is a new, computer-assisted proof of John Stembridge's celebrated TSPP theorem. It is so long because they insisted on showing explicitly all the hairy details and easily-reproducible-by-the-reader "proof certificates". It would have been much better if they had first applied their method to a much simpler case, one that the reader can easily follow and that would take one page, and then stated that the same method was applied to the complicated case of Stembridge's theorem and the result was TRUE. For those poor people who are unable or unwilling to run the program themselves, they could have posted the computer output on their websites, but please, have mercy on the rain forest! You don't need 30 pages, and frankly all this EXPLICIT LANGUAGE of hairy computer output is almost pornographic.

Hence, both the Gelfand and the Buchberger principles are as useful when we teach our students (at all levels, K-grad_school) as when we try to teach our colleagues, in other words, write understandable research papers. But, last but not least, also when we teach ourselves, in other words, do research.

Added June 21, 2005: Read intriguing feedback by Tim Gowers and Volodia Retakh.

Added May 26, 2006: Read Gustavo Lacerda's fascinating feedback.
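For readers without Maple, the two classroom examples above can be reproduced with SymPy's groebner (a free stand-in; the essay itself prescribes Maple's gbasis):

from sympy import groebner, symbols

x, y, z = symbols('x y z')

# in-class example, done by hand first
print(groebner([x + y, x**2 + y**2], x, y, order='lex'))

# homework example
print(groebner([x + y + z, x**2 + y**2 + z**2, x**3 + y**3 + z**3],
               x, y, z, order='lex'))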
{"url":"http://www.math.rutgers.edu/~zeilberg/Opinion65.html","timestamp":"2014-04-20T08:17:07Z","content_type":null,"content_length":"6237","record_id":"<urn:uuid:3a5fb39c-be08-4819-90ce-9a2f2b53991a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Linde-Hampson Anti-Machine: Self-heated Compressed Van-Der-Waals Gas as an Energy Carrier for Pneumatic Vehicles

Institute of Semiconductor Physics of the NAS of Ukraine, Nauki Ave, Kiev, Ukraine

In this paper, the problem of increasing the energy capacity of compressed air used as a mobile carrier of energy is considered. An idea of preliminary Joule-Thomson self-heating of the compressed air, with subsequent heat redistribution to obtain maximal work at maximal power, is put forward. A simple closed-loop combination of a heat exchanger and a throttle device, the Linde-Hampson anti-machine, is analyzed. Calculations performed in the framework of a modified van der Waals gas model show an essential increase of the accumulated specific energy of compressed air fuel, to the range 0.5-1.0 MJ/kg, which is comparable with electrochemical fuel energy. A concept of compressed air fuel production is discussed.

Keywords: compressed air, energy capacity, Joule-Thomson process, heat exchanger, pneumatic vehicles, pollution free fuel

American Journal of Energy Research, 2013 1 (3), pp 59-67. DOI: 10.12691/ajer-1-3-4

Received August 26, 2013; Revised September 11, 2013; Accepted September 25, 2013. © 2013 Science and Education Publishing. All Rights Reserved.

Cite this article: Glushko, E.Ya., "Linde-Hampson Anti-Machine: Self-heated Compressed Van-Der-Waals Gas as an Energy Carrier for Pneumatic Vehicles," American Journal of Energy Research 1.3 (2013): 59-67.

1. Introduction

Compressed gases have found wide usage in various technical applications. In particular, compressed air remains irreplaceable in aviation and cosmonautics due to its good ecological properties and other features. Also, compressed air is used as a mobile carrier of energy in starters of heavy diesel engines and in medicine. The ability of compressed air to be a mobile carrier of energy is at the centre of our study. The question is: how much mechanical work may be drawn out of this carrier? Recently, appreciable progress has been achieved in competitive transportation means which use pneumatic fuel [1]. Nevertheless, the relatively low energy efficiency of compressed air, of the order of 0.1-0.2 MJ of mechanical energy per 1 kg [2, 3], seriously weakens its position as an effective carrier of energy for transport vehicles in comparison, for example, with electrochemical vehicles, where energy capacities have magnitudes of more than 0.5 MJ/kg [4]. Therefore the investigation of ways to increase the energy capacity of compressed air up to magnitudes of 0.5-1.0 MJ/kg and even more, at working pressures of 30-90 bar, is very important. In this work, the so-called negative Joule-Thomson effect is proposed as a means of van der Waals gas self-heating [5]. At relatively high pressures the throttling process is accompanied by a temperature increase due to negative work.
Like the well-known Linde machine, which uses gas self-cooling during throttling inside a heat exchanger under the condition of a positive Joule-Thomson effect (dP/dT > 0), an analog taken in the region dP/dT < 0 should cause self-heating of the gas. The latter phenomenon is not yet investigated properly in the literature, and our study discusses the possible use of the negative Joule-Thomson effect to create a novel mobile carrier of energy. Also, general aspects of the Linde anti-machine are considered from the point of view of increasing the energy capacity of compressed air fuel (CAF) to the region of the order of 0.5-1.0 MJ/kg and more.

2. Energy and Enthalpy Maps of a Real Gas

We consider compressed air as the carrier of energy. In a modified van der Waals approximation, 1 kg of a dense gaseous medium taken at pressure P and temperature T may be described by the phenomenological equation of state (1), where ν is the number of moles in 1 kg, K is the modification constant (8/3 in the classic case, 3.42 for N2), V is the volume, R the universal gas constant, and a and b are the van der Waals constants. The internal energy is expressed by equation (2), where C_v is the specific heat capacity of the gas in a constant-volume process. Further we will refer to one more thermodynamic function of state, the enthalpy H = U + PV. This function plays an important role in our further consideration of the compressed air's ability to store energy. It expresses the heat properties of the working body, a dense gas, in a thermodynamic process. In the case where a dense gas is throttled through a narrow channel or a system of narrow channels, the enthalpy function H remains constant. The throttling, or Joule-Thomson, process is accompanied by work the gas does on its surroundings. Therefore the reliefs of the internal energy and enthalpy in P-T space allow one to analyze the ways to extract the energy accumulated by the compressed gas for its subsequent transformation into mechanical work. We define the mechanical work W as an integral along the corresponding line of the process. A negative magnitude of W then corresponds to the case when the surrounding medium performs work on the gas. It is of interest that negative work is peculiar to the Joule-Thomson process [5, 6]. To evaluate the work in this case one should take the initial and final points of the integration curve at the ends of the chosen isenthalpic line; therefore W = (PV)_i - (PV)_f. As a result, the internal energy U increases if the Joule-Thomson process is of the negative kind. In this work, we will use the constants a and b averaged by the Kopp-Neumann rule for the air mix of nitrogen and oxygen. As to the Joule-Thomson process, the phase diagram in the P-T plane is divided into two connected regions of positive and negative Joule-Thomson coefficient, separated at arbitrary K (not only K = 8/3) by the inversion curve T_{μ=0}(P), expression (3) [5, 6], in which P_cr and T_cr are the critical pressure and temperature.

In Figure 1a, calculated constant-density lines 10, 20, 30, ..., 710 kg/m³ (right and upper axes) for compressed air are plotted. In the chosen model of the van der Waals gas the isodenses are straight lines. As can be seen from the figure, the constant-density lines corresponding to magnitudes in a wide range, beginning with 50 and up to 700 kg/m³, lie close to one another in the vicinity of the critical point Ĉ, which explains the extremely weak compressibility and strong fluctuations at the critical point. Also shown are the line of vapour-liquid equilibrium ĈS, connecting the critical point Ĉ and the triple point S, as well as the line of solid-liquid equilibrium SS'.
Since our purpose is to study the energy stored in compressed gas, we consider several important thermodynamic states in the P-T plane. The atmosphere point A(1, 300) may also be treated as the final state of adiabatic expansion of hot compressed air inside the engine, beginning with the initial point B(96.3, 1400) for K = 3.42 and B(223.5, 1400) at K = 8/3. If the final point of the adiabatic expansion process is taken as A(1, 350), then the initial point is located at B(19.6, 1400) for K = 3.42 and B(130.8, 1400) at K = 8/3. The intermediate point B'(300, 1380) is a supposed final point of the Joule-Thomson process accompanied by heat extraction. In this work, we consider the case when the whole process begins with the state C(700, 300). The model contains an unstable two-phase zone, (dP/dV)_T > 0, which is marked with dark gray color in the left corner of the diagram. The bold line connecting point C and the unstable two-phase zone corresponds to isochoric heating of liquid air at the average density 415 kg/m³. The critical and triple points are marked with Ĉ and S, respectively.

There are many studies devoted to inversion curves for various dense gases [7, 8, 9]. A good approximation to the experimental Joule-Thomson inversion curve for a few gases, including nitrogen, was found in [7] (curve 3), whereas the classic (K = 8/3) and modified (K = 3.42) versions of the van der Waals equation give curves 1 and 2 in Figure 1a, correspondingly. Our estimations show that curve 2 becomes totally close to the experimental curve 3 if the transformation P_cr → 1.3·P_cr is made in the framework of the chosen van der Waals model.

Figure 1. (Color online) The van der Waals model. (a) Iso-density lines for the classic van der Waals gas: 10, 20, 30, ..., 710 kg/m³, labelled at the right and upper axes; A, B, B', C, chosen special points on the P-T plane: A(1, 300), B(96.3, 1400), B'(300, 1380), C(700, 300); 1, classic Joule-Thomson inversion curve (K = 8/3); 2, Joule-Thomson inversion curve at K = 3.42 (air); 3, approximate Joule-Thomson inversion curve built by [7]; Ĉ, critical point; S, triple point; SS', line of solid-liquid equilibrium; ĈS, line of vapour-liquid equilibrium; dark gray rectangle, two-phase unstable zone; the bold line connecting point C and the unstable zone corresponds to isochoric heating of liquid air. (b) Calculated P-T diagram for the internal energy of compressed air. Parameter K = 3.42; isoenergetic curves U_s = 0.05·(s − 1), s = 1, 2, ..., 22; max{U} < 1.1 MJ/kg; U_A = 0.217 MJ/kg; U_B = 1.011 MJ/kg, specific energy of the prepared fuel at the initial point of adiabatic expansion; U_B' = 0.989 MJ/kg, specific energy of the preliminarily prepared fuel; U_C = 0.148 MJ/kg, specific energy of the fuel in the tank.
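As a rough cross-check of curve 1 above, the classic (K = 8/3) van der Waals inversion curve has the well-known closed reduced form P_r = 24·sqrt(3·T_r) − 12·T_r − 27, peaking at P_r = 9, which is consistent with the 9·P_cr bound quoted below for the positive-effect region. The short Python sketch here tabulates it; it is an illustrative aside, not the paper's K = 3.42 computation.

import numpy as np

# Classic van der Waals inversion curve in reduced variables
# P_r = P/P_cr, T_r = T/T_cr; inversion exists for 3/4 <= T_r <= 27/4.
T_r = np.linspace(0.75, 6.75, 13)
P_r = 24.0 * np.sqrt(3.0 * T_r) - 12.0 * T_r - 27.0
for t, p in zip(T_r, P_r):
    print(f"T_r = {t:5.2f}   P_r = {p:6.2f}")   # maximum P_r = 9 at T_r = 3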
Below, the role of the special point B' as an intermediate state of preliminarily prepared compressed air fuel (U_B' = 0.989 MJ/kg) will be explained. Finally, the thermodynamic state of compressed air at point C is chosen as the initial state of the fuel before the reinforcing process is started (U_C = 0.148 MJ/kg). The diagram of internal energy is convenient for determining the possible mechanical work which may be extracted from the compressed air fuel in an adiabatic process. The difference between points A and B(1400 K, 1.011 MJ/kg) equals 0.794 MJ/kg, which makes it of interest to consider air as a fuel. The jump of the internal energy between points A and C is about 0.069 MJ/kg, whereas the needed minimal isothermal work to produce compressed air at point C is about 0.692 MJ/kg. The calculated amount of extracted heat in the isothermal process A → C is about 0.383 MJ/kg.

Figure 2. (Color online) Calculated air enthalpy P-T diagram. Isenthalpic curves: H = 0.95·s MJ/kg, s = 1, 2, ..., 16; A, B, B', C, chosen special points (Figure 1); a, Joule-Thomson inversion curve at K = 3.42; b, approximate Joule-Thomson inversion curve built by [7] (Figure 1, curves 2, 3); lines c and d note the Joule-Thomson process' possible pressure limits 300 bar and 700 bar.

In Figure 2, the diagram of enthalpy for one kg of compressed air in the P-T plane, calculated by finite differences, is shown, where the isenthalpic curves s = 1, 2, ..., 16 correspond to the formula H = 0.95·s MJ/kg. The arc a is plotted by the Joule-Thomson inversion curve (3) taken at K = 3.42, whereas the arc b is the approximate Joule-Thomson inversion curve built from the data of [7] (also Figure 1, curve 3). Both these curves confine the area of positive inclination of the enthalpy contour lines (positive Joule-Thomson effect) within the intervals of pressures P < 9P_cr ≈ 329 bar and temperatures 77 K ≤ T ≤ 697 K for the modified van der Waals gas at K = 3.42. A throttling process taken inside these arcs leads to a decrease of temperature; the outside area may be used to increase the temperature in the process of throttling. Curve b is a polynomial approximation to the experimental Joule-Thomson inversion curve [7] for gaseous N2, Ar, and other gases (Figure 2, curve b; Figure 1a, curve 3). The lower and upper lines c and d depict the possible pressure limits 700 bar and 300 bar in a throttling process. The chosen special points A, B, B', C were discussed above.

Note that two ways exist to produce a kg of compressed air fuel, i.e., to reach the position C on the pressure-temperature plane. The first way is represented by an isochoric transfer of one kg of liquid air taken at 1 bar into the state C, shown in Figure 1a by the bold line. This process needs an amount of heat of about 0.15 MJ, which should be added to the energy expended during the liquefying process. To avoid wasting energy, one more demand should be fulfilled: the initial gas-liquid mixture density must exceed the critical density, 331 kg/m³, of our van der Waals model with the accepted Kopp-Neumann rule. As follows from Figure 1a, the shown isochoric line corresponds to the density 415 kg/m³.

Figure 3. (Color online) Calculated specific energy and density of the van der Waals model of air at K = 3.42. 1, isothermal compression work (P_A = 1 bar) vs final pressure P_C (left axis); 2, fuel density measured in g/cm³ (right axis).

Further we will consider the second way, which is based on direct compression beginning with the rarefied state of air at atmospheric pressure and room temperature.
Since states A and C are taken at coinciding temperatures, the minimal work along the path A → C may be reached in an isothermal process. In Figure 3, the dependences of the energy expense and of the density of the final state of compressed air on the final-state pressure P_C are shown for an isothermal process in the interval ranging from the initial pressure 1 bar to the final 1000 bar at 300 K (Figures 1 and 2, point A). The calculation gives, for the chosen initial thermodynamic parameters of the fuel, P_C = 700 bar and T_C = 300 K, the energy of production W = 0.692 MJ/kg (0.5215 MJ/kg at K = 8/3) and the fuel density ρ = 413 kg/m³. It should be emphasized that the relative increase of density beyond P_C ≈ 300 bar (curve 2) is not significant, whereas the CAF production work (curve 1) rises steeply at small pressures and then passes into a plateau: W(300) = 0.621 MJ/kg, W(1000) = 0.718 MJ/kg.

In this work, a way to reinforce the real gas energy capacity using the negative Joule-Thomson effect is considered. The energy capacity, i.e., the work that a gas may perform in an adiabatic expansion process, depends on the location of point B on the P-T plane. By technical conditions, the pressure should become atmospheric at the end of the process, whereas the final temperature in the general case does not coincide with T_C. Moreover, the lower the temperature T_A, the more useful mechanical work is obtained in the process. Bigger magnitudes of P_B and T_B correspond to a bigger energy capacity of the compressed air. Due to the negative Joule-Thomson process performed in the area limited by lines c and d in Figure 2, the compressed gas transfers from point C to point B' on the P-T plane and then to point B. From this point of view, the required magnitudes of P_C begin with pressures of the order of 400 bar and higher, in correspondence with curves a and b in Figure 2. It is worth noting that a simple isenthalpic process leads to only a weak growth of temperature at these conditions. In Sections 3 and 4, nevertheless, we will show how the use of a heat-exchanging procedure helps to extend the range of final temperatures T_B'. The adiabatic work W(T_A, T_B) done by one kg of a dense gas may be found both immediately from the P-T diagram for the internal energy of compressed air shown in Figure 1b and as a process function. The surface W(T_A, T_B) calculated in the axes T_A, T_B is plotted in Figure 4. For the chosen interval of temperatures, the surface W(T_A, T_B) is practically planar. The maximal work, 0.77734 MJ/kg, corresponds to the point W_2(120, 1300), whereas the minimal, zeroth, one corresponds to the nearest right point (300 K, 300 K). The interval of pressures from 30 bar to 90 bar, the most interesting for technical aims, is highlighted in light gray. From the behavior of the highlighted band a conclusion can be made that, to obtain a specific work of more than 0.6 MJ/kg, one should use regimes with outlet temperatures T_A of more than 300 K. The pressure of the shown points W(T_A, T_B) in the highlighted band increases along the direction of the gradient, whereas the magnitudes of the work done decrease along the band from left to right. Therefore, the most acceptable point of the area T_B ≤ 1300 K should be acknowledged to be the point with maximal work in the highlighted band. Our calculation gives for this region the point W(300, 1079.4), with pressure P_B = 90 bar and potential work W = 0.559 MJ/kg. The nearest left point of the surface, W(300, 1300) = 0.7165 MJ/kg, has P_B = 172.5 bar.
The most remote left point of the surface, W_2, corresponds to P_B = 4.086 kbar, which exhibits a relatively strong pressure increase along the surface gradient. The adiabatic work surface shown in Figure 4, being close to a plane, may be approximated by equation (5), where T_0 = 300 K, T_1 = 1300 K, T_2 = 120 K, W(T_1, T_0) = W_1 = 0.7165 MJ/kg, W(T_1, T_2) = W_2 = 0.84 MJ/kg. Expression (5) gives a very good approximation for the nearest low-temperature part of the adiabatic work surface at the chosen parameters: in the middle of the shown surface, at the point W(240, 650), the accuracy is about 0.6%, whereas the most remote high-temperature part of the surface gives the value W_2 = 0.84 MJ/kg instead of the exact 0.77734 MJ/kg (15%), and the low-temperature point gives W_3 = 0.12351 MJ/kg instead of the exact magnitude of the potential work, 0.12595 MJ/kg (2%). As a matter of fact, the considered way gives a minimal energy to produce one kg of compressed air fuel at point C(700, 300) of about 0.693 MJ; at the same time, the difference U_A − U_C ≈ 0.069 MJ may be transformed into heat and returned. The same surface taken at K = 3.42 is more attractive in the applied sense: though both reference points W_1 and W_2 are close to the previous ones, the technically important band of pressures 30 < P < 90 bar now begins from the very left corner of the surface, and it includes the point W_1 = 0.721 MJ/kg. Therefore an opportunity arises to operate with work magnitudes of more than 0.5 MJ/kg at temperatures T_A ranging from 200 K to 300 K.

Figure 4. (Color online) Calculated surface of the specific work W depending on the adiabatic process limit temperatures T_A, T_B. The transverse dotted lines s = 1, 2, ..., 7 correspond to energies W_s = 0.1·s MJ/kg; K = 8/3.

The relatively high pressure of the considered carrier of energy in state C allows the compressed gas to be transferred into a state with higher internal energy, which in turn may be converted into mechanical work.

3. Linde-Hampson's Anti-Machine: to Reinforce the Compressed Air Fuel

Here we consider a way to transform the compressed gas from the state most convenient for storage in a tank to a state suitable for producing maximal work at maximal power, using the throttling process. During the Joule-Thomson process, the working body at point C(700, 300) may be transferred into an arbitrary state B' lying on the corresponding isenthalpic curve (see Figure 2). Motion along the isenthalpic curve beginning with point C at 700 bar to an appropriate point B' at 300 bar causes a temperature shift of about 23.4 K at K = 8/3 and 17.1 K at K = 3.42. Due to the negative work done on the van der Waals gas by its surroundings, the internal energy increases by about 0.046 MJ/kg at K = 8/3 and 0.042 MJ/kg at K = 3.42. With the help of a heat exchanger, which redistributes the heat between the input and output of the throttle, the working body state may be transferred into any point B' of the P-T plane inside the pressure interval P < P_C. In Figure 5a, the Linde-Hampson anti-machine design is schematically shown. It includes the heat exchanger and the throttling device connected so that the output of the throttling device forms the hot loop of the exchanger, whereas the cold exchanger branch is the input of the throttling device. The axis X shows the chosen direction of increase for the gas coordinate x (L ≥ x ≥ 0), both in the input loop before the throttle 2 and in the output loop after throttling. As soon as the heat exchanger begins to act, the leg of the isenthalpic curve CB' begins to shift toward higher temperatures.
Therefore the energy gain may reach 1 MJ/kg and more. The time interval for establishing this regime will be evaluated below; it may reach seconds or dozens of seconds, depending on the quality of the heat exchanger. It is worth noting that the energy capacity of the considered working body may exceed the minimal work spent initially to produce 1 kg of the compressed air fuel (point C). On the one hand, the excess arises due to the increase of the internal energy of a real gas in the considered Joule-Thomson process; on the other hand, one more way to raise the energy capacity exists: to choose the outlet temperature T_A lower than the fuel temperature T_C. Doing so, one can formally reach a fuel performance factor bigger than 100%. One more circumstance is of interest: if the temperature T_B' is high enough, what is the nature of the additional energy gained in a throttle procedure accompanied by heat exchange? This and other questions, like the principal restrictions on the gain and the maximal magnitudes at given initial parameters, depend on the taken model and need more detailed investigation.

4. An Introduction to the Theory of the Joule-Thomson Heat Exchanger

In contrast to the well-known Joule-Thomson cooler, or Linde-Hampson machine [10, 11, 12, 13, 14, 15], the proposed anti-machine acts like a heater outside the inversion region marked by the arcs in Figure 1a and Figure 2. One more difference is manifested in the different pressure regimes for the heater and the refrigerator: the cooling effect arises at relatively low decompressing pressures, whereas the Linde-Hampson anti-machine needs a jump of several hundred bar between the input and output chambers of the throttling device. As follows from Figure 2, the heat exchanger increases the temperature of the compressed gas while simultaneously decreasing its pressure toward point B'. The final stage of the reinforcing procedure is one more throttling process, decreasing the pressure to the working magnitude P_B; it is also accompanied by a small increase of temperature as the phase point moves from position B' to position B.

In Figure 5a, the Linde-Hampson anti-machine design is schematically shown. It includes the two-loop heat exchanger united with a throttling device. The intake compressed air fuel enters the heat exchanger at point C(P_C, T_C) and leaves the system at point B'(P_B', T_B'). The first, high-pressure, loop is kept at pressure P_C and the second one at pressure P_B'. Due to the difference P_C − P_B', the throttling process performed in the area between lines c and d (Figure 2) leads to heating of the flowing gas, depending on the heat exchanger parameters L and α. The throttle device 2 separates the high-pressure input chamber with compressed gas fuel, at P_C = 700 bar, from the low-pressure output chamber 1, at P_B' = 300 bar. The heat exchange processes in the proposed simplified system may be described in time by the two equations (6) for the temperature distributions T(x) and Θ(x) inside the input and output circuits CP_C' and P_B'B' shown in Figure 5a, where L ≥ x ≥ 0. In the parameters of equations (6), κ(x) is the thermal conductivity coefficient, α(x) is the heat exchange coefficient measured in W/m²·K [12, 13], and S is the channel cross-section; we undertake the simplified approach (6) to analyze the principal parameters of the Linde-Hampson anti-machine. Our estimations show that for a 10 kW power vehicle the average density magnitudes are ~0.4 g/cm³ and ~0.2 g/cm³, and the corresponding mean velocities are of the order of 40 cm/s and 20 cm/s for cross-sections S of 1 cm².
The needed partial solution of equations (6) depends on the initial and boundary conditions, in which δT_H(t) is determined at every moment of time as the temperature jump caused by the Joule-Thomson process. As a matter of fact, the parameters in equations (6), like a, b, ω, have a nonhomogeneous nature due to the coordinate dependence of the density in both branches of the exchanger. To evaluate the parameters of the Linde anti-machine in general, we develop an analytical approach neglecting the nonhomogeneity of the coefficients in both equations (6). After the transfer from the temperatures T(x, t), Θ(x, t) to their Fourier images, the general solution of (6) may be obtained using the discrete Fourier transform in a linear space of dimension N = 2n + 1; it is easy to find a set of basis functions of the approach obeying the demands of completeness, orthogonality and normality. The general solution (13), found for constant coefficients, involves the roots ν_1, ν_2 of the characteristic equation. Taking into account that the roots ν_j have real and imaginary parts ξ_j and ζ_j, correspondingly (j = 1, 2), the general solutions for T(x, t) and Θ(x, t) may be written in the real form (16), where the real coefficients A_k, B_k present in the expansions should be found from the first two boundary equations in (6). The coefficients of the expansions T(x, t) and Θ(x, t) are taken at t → ∞, when the expressions in brackets go to zero; the approximate time of transfer to this steady state may be taken as τ.

Figure 5. (Color online) (a) Approximate design of the Linde-Hampson anti-machine: 1, heat exchanger (inletting cold high-pressure gas); 2, throttling device and ambient hot low-pressure gas; C(P_C, T_C), fuel intake point; B'(P_B', T_B'), heated fuel point; P_C, pressure of the input loop; P_B', pressure of the outlet loop. (b) Final (t >> τ) temperature distribution curves T(t, x), Θ(t, x) calculated by (6): initial, 1, 2, and final, 1', 2' (cut at T = 1500 K). P_C = 700 bar, T_C = 300 K, P_B' = 300 bar, T_B' = 1134 K, T_max(x = 0) = 1937 K, α = 150 W/m²·K, L = 1.6 m, τ = 22.4 s, δT_H = T_C' − T_B' = 16.13 K.

In Figure 5b, the initial (t = 0) and final (t >> τ) curves of the temperature distributions Θ(x, t) and T(x, t), calculated from equations (6) and solution (16), are plotted. The initial temperature jump δT_H(0) = T_C − T_B' in (9) was found to be 17.1 K, whereas the evaluation gives δT_H = T_C' − T_B' = 16.13 K for final times t >> τ.
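Neglecting diffusion, the steady state of equations (6) has a transparent closed form that reproduces the amplification seen in Figure 5b: writing a and b for the exchange rates per unit length in the cold and hot branches, the gap D = Θ − T obeys D' = (a − b)·D, so the intake temperature is Θ(L) = T_C + δT_H·exp((a − b)·L). The Python sketch below evaluates this; the rates a and b are illustrative guesses tuned to land near the figure's values, not the paper's calibrated coefficients.

import numpy as np

T_C, dT_H, L = 300.0, 16.13, 1.6   # inlet temperature [K], throttle jump [K], length [m]
a, b = 4.9, 2.45                   # assumed exchange rates per metre (cold, hot branch)

x = np.linspace(0.0, L, 9)
D = dT_H * np.exp((a - b) * x)          # gap Theta - T between the branches
T = T_C + (a / (a - b)) * (D[-1] - D)   # cold branch, boundary T(L) = T_C
Theta = T + D                           # hot branch, Theta(0) = T(0) + dT_H
print(f"Theta(L) = {Theta[-1]:.0f} K, T(0) = {T[0]:.0f} K")
# ~1113 K and ~1894 K, close to the 1134 K and 1937 K quoted for Figure 5b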
The coefficient of heat transfer α depends on several factors, including the nature of the fluids at both sides of the wall, the wall surface geometry, the fluid velocity and the thermal conditions. Different sources give α for dense gases at pressures of hundreds of bar in a wide interval, from hundreds to thousands of W/m²·K. Following [12], we have chosen for our estimating model α = 150 W/m²·K. The calculation shows that, at the taken heat exchanger parameters and L = 1.6 m, a temperature gradient of about 502 K/m is established in the high-pressure and low-pressure circuits over the time interval τ = 22.4 s. There exists a strong dependence of the final temperature distribution, and of the inlet temperature Θ(L, ∞), on the heat exchanger parameters α, S and L. A suitable combination of these parameters allows one to find a wide spectrum of working regimes. One more circumstance is important: though the negative Joule-Thomson effect at room temperatures has the lower pressure limit 300 bar (or 400 bar by the data of [7]), the process of fuel reinforcing keeps its efficiency down to 100 bar. Therefore, the upper pressure limit, shown by line d in Figure 2, may also be lowered.

In Figure 6, the dependence of the intake temperature Θ_L = Θ(L, ∞) on P_C and P_B' is shown for the model with K = 3.42. The technically most attractive region, 800 K < Θ_L < 1200 K, is highlighted in light gray. The chosen temperature interval corresponds to a possible mechanical work in the interval from 0.5 to 0.8 MJ/kg at pressures not higher than 90 bar, according to the obtained data on the fuel energy capacity.

Figure 6. (Color online) The intake temperature Θ_L = Θ(L, ∞) dependence in the P_C-P_B' plane, calculated by (13). K = 3.42, α = 150 W/m²·K, L = 1.6 m; the surface is cut at Θ_L = 2500 K; the light gray (green online) band corresponds to 800 K < Θ_L < 1200 K.

To explain the nature of the energy reinforcing during the Joule-Thomson process, one should take into account the inevitable decrease of the initial pressure P_C in the tank with time. As a matter of fact, the initially forced gain turns into a corresponding loss at the final stages of the process, when the fuel pressure in the tank has decreased sufficiently. Nevertheless, if one deliberately stops the process, leaving some amount of ballast fuel in the tank, then the proposed way allows one to significantly increase the energy capacity of compressed air. Above, the main idea of reinforcing the fuel parameters for a pneumatic thermodynamic engine was considered. Like a steam engine, this thermodynamic engine produces mechanical work due to the energy of compressed gas, without using oxidation or any other chemical process.

5. General Scheme of CAF Circulation

Note that the above-mentioned way to prepare the initial state C, using an isochoric transfer of one kg of liquid air beginning at 1 bar pressure and density 0.415 g/cm³ (Figure 1a, bold line), needs an amount of heat of about 0.15 MJ. It may be taken from the heat exchanger. As a result, the working point C, due to the correspondingly modified heat exchanger, will shift to lower temperatures by approximately 200 K. For example, B(96.3, 1400) → B(61.0, 1200), and the specific adiabatic work decreases from 0.793 MJ/kg to 0.649 MJ/kg. We leave outside the scope of our study technical questions like pressure-keeping systems or ways to transport liquid air to state C in an isochoric process. In Figure 7, six types of pressure regimes in a pneumo-vehicle based on liquid air fuel are shown.
The process of fuel reinforcing inside the engine begins from liquid air in the tank (cycle 1), which is taken in (cycle 6) into the high-pressure 700 bar circuit in an isochoric process, with heating due to the energy transfer from the high-temperature cycle 3 to cycle 4. The increase of temperature during process 6 needs about 0.15 MJ/kg. Therefore the output temperatures T_B and T_B' of cycle 4 should be decreased by approximately 200 K, from 1200 K to 1000 K. The latter, in turn, is the temperature of the fuel intake in the engine cylinder to produce mechanical work. As was mentioned above, the intake pressure should be chosen in an approximate interval from 30 to 90 bar. The heat exchanger's output 300 bar pressure cycle 3 (curve 2 in Figure 5a) is characterized by a temperature fall from about 2100 K to 1200 K, due to heat flows out of cycle 3. The pre-injection pressure cycle 5 corresponds to state B shown in Figures 1 and 2. In general, the parameters of state B should be chosen in the technically valuable band of pressures 30-90 bar corresponding to internal combustion and diesel engines. During the time when mechanical work is performed, the working body state transforms from point B to point A. The cycle 3 to cycle 5 temperature drop, 1200 K → 1000 K, is caused by the need to supply the isochoric liquid-gas transfer of cycle 6. Also shown in Figure 7 is cycle 2, servicing the pneumo-system of the vehicle: doors, windows, windshield wiper blades and conditioning.

Figure 7. (Color online) Six cycles of pressure in a pneumo-vehicle: 1, liquid air in tank; 2, servicing pneumo-system; 3, heat exchanger's output pressure cycle; 4, heat exchanger's input pressure cycle; 5, pre-injection pressure cycle; 6, fuel heating isochoric cycle.

An important point of our study is the technical perspective of air as an energy carrier and energy accumulator, called upon to replace gasoline filling stations and oil bases. In a wider treatment, other gases or fluids may also be considered as carriers of mobile energy. Of course, an additional amount of energy is needed to prepare compressed gas or steam fuel, and new kinds of sources should in the future occupy the niche of traditional sources. Therefore, the considered conception implies an idea of restructuring in power engineering, so that electric energy is partially used in compressors to produce compressed air and partially returns to the power system through a new kind of electric power station, transforming the heat which accompanies compressed air fuel production into electrical energy. This new kind of power station may be incorporated into existing systems of storage power stations. Many unsolved problems remain, including the investigation of unknown phase diagrams of gaseous mixtures at high pressures.

Figure 8. (Color online) A concept of oxygen-depleted air circulation: Industry → Fuel Station → Vehicle → Atmosphere.

A preliminary concept of energy circulation in a world of compressed air fuel is proposed in Figure 8. The process starts from an industry of liquid air, consuming electric energy at off-peak stages on the one hand and returning energy into the power net at peak load on the other (doubled arrow). Safety aspects dictate the use of oxygen-depleted air as a precursor component for liquefying. As liquid air has a relatively high density, about 808 kg/m³ at atmospheric pressure, the supplying of road fuel stations should not be too different from that for existing gasoline fuel stations.
Further, there are two alternative ways to produce compressed air. In the first, high-pressure removable tanks are prepared by heating immediately at the fuel station, with subsequent incorporation into a vehicle. The other way demands the compression of air by heating of the liquid fuel inside the vehicle, using a part of the internal energy increased during the Joule-Thomson process. Both ways lead to the state C of compressed air, with parameters 700 bar and 300 K, suitable for reinforcing the fuel to finally obtain more mechanical work at bigger power. The internal energy and temperature of a dense gas during the Joule-Thomson process increase, whereas the pressure is decreased stepwise. We accept that the state B of reinforced fuel should occupy the pressure region 30-90 bar at temperatures about 1000 K. At the final stage of the energy circle, mechanical work is done by the engine and the exhausted air is returned to the atmosphere in a temperature interval from 120 to 300 K.

6. Discussion

We have considered theoretically some thermodynamic aspects of mobile energy, a pollution-free energy carrier for vehicles [9, 16]. It was shown that self-heating of compressed air by means of the Linde-Hampson anti-machine significantly increases the energy capacity of compressed air and allows one to use it as a mobile carrier of energy. The calculations performed in the framework of the modified van der Waals gas model gave, for the accumulated specific energy of compressed air fuel, magnitudes lying inside the acceptable region 0.5-1.0 MJ/kg. The important subject of compressed air fuel production was also touched upon. Another important problem is the technical performance of compressed air engines. This question and others need to be considered in further investigation. Our evaluations were based on the van der Waals equation of state for a dense gas. At the same time, the Redlich-Kwong approach and some others give the same order of values [17]. No doubt the constants like C_v, a, b in all these approaches need to be investigated in detail for the actual pressures, temperatures and densities. Also, the fact that the used gas may be a two- or more component mixture may be important, especially at high densities.

7. Conclusions

The energy of hydrocarbon fuel is a form of heat which the ancient Carboniferous epoch accumulated, by plants and the animal kingdom, due to the Sun's energy, photosynthesis and atmospheric gases. Burning gas, coal and mineral oil, we seem to be trying to return the Earth's atmosphere back to those early times. However, the only result of this energy strategy is pollution, without any perspective for the future. The considered conception of a pollution-free energy carrier promises to slow the dangerous processes started by human beings. Moreover, the giant resources now destroying the modern natural equilibrium may be redirected into other spheres of material production.

Statement of Competing Interests

I declare that I have no significant competing financial, professional or personal interests that might have influenced the performance or presentation of the work described in this manuscript.

[1] Sullivan, M., "World's First Air-Powered Car: Zero Emissions by Next Summer," Popular Mechanics, Jun. 2007.
[2] Creutzig, F., Papson, A., Schipper, L. and Kammen, D.M., "Economic and environmental evaluation of compressed-air cars," Environ. Res. Lett., 4, 044011, Nov. 2009.
[3] Bossel, U., "Thermodynamic Analysis of Compressed Air Vehicle Propulsion," Journal of KONES Internal Combustion Engines, 12, 3-4, Apr. 2005.
[4] Eberle, U.
and von Helmolt, R., "Sustainable transportation based on electric vehicle concepts: a brief overview," Energy Environ. Sci., 3, 689-699, May 2010.
[5] Landau, L.D. and Lifshitz, E.M., Statistical Physics, Part 1, Vol. 5 (3rd edition), Butterworth-Heinemann, NY, 1980.
[6] Atkins, P.W., Physical Chemistry, 5th ed., Freeman, 1994, 104-108.
[7] Gunn, R. D., Chueh, P. L. and Prausnitz, J. M., "Inversion Temperatures and Pressures for Cryogenic Gases and Their Mixtures," Cryogenics, 6, 324-329, Dec. 1966.
[8] Hendricks, R. C., Peller, I. C. and Baron, A. K., "Joule-Thomson inversion curves and related coefficients for several simple fluids," NASA Technical Note, D-6807, 1-62, Jul. 1972.
[9] Haghighi, B., Laee, M.R., Husseindokht, M.R. and Matin, N.S., "Prediction of Joule-Thomson Inversion Curves by the use of Equation of State," J. Ind. Eng. Chem., 10, 316-320, Feb. 2004.
[10] Barron, R.F., Cryogenic Heat Transfer, Taylor & Francis, London, 1999.
[11] Mills, A.F., Heat and Mass Transfer, Richard D. Irwin, Inc., Chicago, 1995.
[12] Rajput, R. K., Engineering Thermodynamics, 3rd Edition, Engineering Series, New Delhi: Laxmi Publ. LTD, pp. 812-817, 2007.
[13] Ng, R.G., Xue, H. and Wang, J.B., "Experimental and Numerical study on a Miniature Joule-Thomson cooler for Steady-State Characteristics," International Journal of Heat and Mass Transfer, 45, 609-618, Apr. 2002.
[14] Maytal, B.Z., "Maximizing production rates of the Linde–Hampson machine," Cryogenics, 46, 49-54, Jan. 2006.
[15] Matsumoto, K. and Sano, H., "On output tracking control of a parallel-flow heat exchanger equation with diffusive terms," JP Journal of Heat and Mass Transfer, 6, 213-222, Oct. 2012.
[16] Brown, L. R., Plan B 2.0: Rescuing a Planet Under Stress and a Civilization in Trouble, NY: W.W. Norton & Co., 2006.
[17] Guardone, A., Vigevano, L. and Argrow, B. M., "Assessment of thermodynamic models for dense gas dynamics," Phys. Fluids, 16, 3878-3887, Sept. 2004.
{"url":"http://pubs.sciepub.com/ajer/1/3/4/index.html","timestamp":"2014-04-20T06:17:33Z","content_type":null,"content_length":"124812","record_id":"<urn:uuid:1e691c5e-18d0-4a6c-881f-d776888a1604>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifying complex fraction

October 30th 2010, 08:48 PM, #1 (Mar 2010)
I know I should know this, but it's late and for some reason I am drawing a blank. Any help would be appreciated. Thanks.
How does $\frac{\frac{3}{x}}{1+\frac{4}{x^2}}$ simplify to $\frac{3x}{x^2+4}$?

October 30th 2010, 09:06 PM, #2 (Super Member, May 2006, Lexington, MA (USA))
Hello, dbakeg00!
$\text{How does }\,\dfrac{\frac{3}{x}}{1+\frac{4}{x^2}}\,\text{ simplify to }\,\dfrac{3x}{x^2+4}$
Multiply by $\dfrac{x^2}{x^2}$:
$\displaystyle \frac{x^2}{x^2}\cdot \frac{\frac{3}{x}}{1+\frac{4}{x^2}} \;=\; \dfrac{x^2\left(\frac{3}{x}\right)}{x^2\left(1 + \frac{4}{x^2}\right)} \;=\;\frac{3x}{x^2+4}$

October 30th 2010, 09:11 PM, #3 (Mar 2010)
Thanks for the help. I know I need to go to bed when I start struggling with something like this!
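(As an aside, the simplification can be machine-checked; this SymPy snippet is an illustration and not part of the original thread.)

from sympy import symbols, simplify

x = symbols('x')
expr = (3/x) / (1 + 4/x**2)
print(simplify(expr))   # 3*x/(x**2 + 4)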
{"url":"http://mathhelpforum.com/algebra/161583-simplifying-complex-fraction.html","timestamp":"2014-04-21T07:05:16Z","content_type":null,"content_length":"35626","record_id":"<urn:uuid:0677e033-2e06-4d93-bd72-b72b39e03e2a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Equivalence Relations

Date: 05/06/2003 at 17:32:13
From: Kerri
Subject: Equivalence Relations

I have a question about equivalence relations. Determine with proof which of the three equivalence relation properties hold for the following relation. (a) Let R be the relation on the set A = {1,2,3,4} defined by R = {(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(3,1),(3,3),(4,1)}. (b) Let R be the relation on the natural numbers defined by nRm if and only if n divides into m with zero remainder. How would you prove the equivalence relation with ordered pairs such as in (a)?

Date: 05/06/2003 at 23:57:16
From: Doctor Samus
Subject: Re: Equivalence Relations

Hi Kerri, To start, we should list the three different properties of an equivalence relation R on a set A:
I) Reflexivity: xRx for all x in A
II) Symmetry: If xRy then yRx for x and y in A
III) Transitivity: If xRy and yRz then xRz for x, y and z in A
To determine which properties hold, we must look at the elements in the relation and see which properties they satisfy. Let's look at (a) and see which properties hold:
I) Doesn't hold, since 4 is in A, but (4,4) is not in R
II) Holds, since for every (x,y) in R, (y,x) is also in R
III) Doesn't hold, since (2,1) and (1,3) are in R, but (2,3) is not
So for (a), only property II (symmetry) holds. To do (b), simply investigate the same three properties for the relation in the problem. Note that to show a property doesn't hold, we need only find a single example showing that the property doesn't hold. To show that a property does hold, we have to show that it's true for all elements in the relation to which the property applies. I hope this helps. Please feel free to write back if you need more help.
- Doctor Samus, The Math Forum

Date: 05/07/2003 at 00:16:26
From: Kerri
Subject: Thank you (Equivalence Relations)

Thank you Doctor Samus, so much! I had no idea how to work this and now I understand it completely. (b) wasn't a problem after your explanation of (a). Thanks again!
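(A small illustrative Python checker for the three properties, applied to relation (a); it is an addition, not part of the original exchange.)

A = {1, 2, 3, 4}
R = {(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(3,1),(3,3),(4,1)}

reflexive  = all((x, x) in R for x in A)
symmetric  = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
print(reflexive, symmetric, transitive)   # False True False, matching the answer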
{"url":"http://mathforum.org/library/drmath/view/62895.html","timestamp":"2014-04-20T13:41:45Z","content_type":null,"content_length":"6976","record_id":"<urn:uuid:3d9293cf-cf5a-4ca4-b6a7-71d67884abf6>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Area and Perimeter of Similar Polygons

10.7: Area and Perimeter of Similar Polygons. Difficulty Level: At Grade. Created by: CK-12.

What if you wanted to create a scale drawing using scale factors? This technique takes a small object, like the handprint below, divides it up into smaller squares and then blows up the individual squares. Either trace your hand or stamp it on a piece of paper. Then, divide your hand into 9 squares, probably 2 in × 2 in each. Finally, redraw the contents of each small square in a corresponding 6 in × 6 in square.

Polygons are similar when the corresponding angles are equal and the corresponding sides are in the same proportion. The scale factor for the sides of two similar polygons is the same as the ratio of the perimeters. In fact, the ratio of any part of two similar shapes (diagonals, medians, midsegments, altitudes, etc.) is the same as the scale factor. The ratio of the areas is the square of the scale factor. An easy way to remember this is to think about the units of area, which are always squared. Therefore, you would always square the scale factor to get the ratio of the areas.

Area of Similar Polygons Theorem: If the scale factor of the sides of two similar polygons is m/n, then the ratio of the areas is (m/n)².

Example A. The two rectangles below are similar. Find the scale factor and the ratio of the perimeters. [A 10 × 16 rectangle and a 15 × 24 rectangle.] The scale factor is 16/24, which reduces to 2/3. The perimeters are 52 and 78, so the ratio of the perimeters is 52/78 = 2/3.

Example B. Find the area of each rectangle from Example A. Then, find the ratio of the areas.
A_small = 10 · 16 = 160 units²
A_large = 15 · 24 = 360 units²
The ratio of the areas is 160/360 = 4/9. The ratio of the sides, or scale factor, was 2/3, and 4/9 = (2/3)².

Example C. Find the ratio of the areas of the rhombi below. The rhombi are similar. There are two ways to approach this problem. One way would be to use the Pythagorean Theorem to find the missing lengths and compute each area directly. The quicker way is to square the reduced ratio of the sides: (3/5)² = 9/25.

Concept Problem Revisited. You should end up with an 18 in × 18 in enlargement of your handprint.

Vocabulary. Perimeter is the distance around a shape. The perimeter of any figure must have a unit of measurement attached to it. If no specific units are given (feet, inches, centimeters, etc.), write "units." Area is the amount of space inside a figure. Area is measured in square units. Polygons are similar when their corresponding angles are equal and their corresponding sides are in the same proportion. Similar polygons are the same shape but not necessarily the same size.

Guided Practice
1. Two trapezoids are similar. If the scale factor is 3/4 and the area of the smaller trapezoid is 81 cm², what is the area of the larger trapezoid?
2. Two triangles are similar. The ratio of the areas is 25/64. What is the scale factor?
3. Using the ratios from #2, find the length of the base of the smaller triangle if the length of the base of the larger triangle is 24 units.

1. First, the ratio of the areas would be (3/4)² = 9/16. The larger trapezoid's area is 16/9 of the smaller one's, so A = (16/9) · 81 = 144 cm².
2. The scale factor is √(25/64) = 5/8.
3. All you would need to do is multiply the scale factor we found in #2 by 24: b = (5/8) · 24 = 15 units.

Practice. Determine the ratio of the areas, given the ratio of the sides of a polygon.
1. 3/5
2. 1/4
3.
7/2
4. 6/11
Determine the ratio of the sides of a polygon, given the ratio of the areas.
5. 1/36
6. 4/81
7. 49/9
8. 25/144
This is an equilateral triangle made up of 4 congruent equilateral triangles.
9. What is the ratio of the areas of the large triangle to one of the small triangles?
10. What is the scale factor of large to small triangle?
11. If the area of the large triangle is 20 units², what is the area of one of the small triangles?
12. If the length of the altitude of a small triangle is 2√3, find the length of the altitude of the large triangle.
13. Carol drew two equilateral triangles. Each side of one triangle is 2.5 times as long as a side of the other triangle. The perimeter of the smaller triangle is 40 cm. What is the perimeter of the larger triangle?
14. If the area of the smaller triangle is 75 cm², what is the area of the larger triangle?
15. Two rectangles are similar with a scale factor of 4/7. If the area of the larger rectangle is 294 in², find the area of the smaller rectangle.
16. Two triangles are similar with a scale factor of 1/3. If the area of the smaller triangle is 22 ft², find the area of the larger triangle.
17. The ratio of the areas of two similar squares is 16/81. What is the ratio of their side lengths?
18. The ratio of the areas of two right triangles is 2/3. What is the ratio of their corresponding sides?
Questions 19-22 build off of each other. You may assume the problems are connected.
19. Two similar rhombi have areas of 72 units² and 162 units². Find the ratio of the areas.
20. Find the scale factor.
21. The diagonals in these rhombi are congruent. Find the length of the diagonals and the sides.
22. What type of rhombi are these quadrilaterals?
23. The area of one square on a game board is exactly twice the area of another square. Each side of the larger square is 50 mm long. How long is each side of the smaller square?
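A quick numeric check of the theorem on the Example A/B rectangles (an illustrative Python sketch, not part of the lesson):

from fractions import Fraction

scale = Fraction(16, 24)                          # ratio of corresponding sides
perimeter_ratio = Fraction(2*(10+16), 2*(15+24))  # 52/78
area_ratio = Fraction(10*16, 15*24)               # 160/360
print(scale, perimeter_ratio, area_ratio)         # 2/3 2/3 4/9
assert perimeter_ratio == scale and area_ratio == scale**2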
{"url":"http://www.ck12.org/book/CK-12-Geometry-Concepts/r2/section/10.7/","timestamp":"2014-04-19T00:14:31Z","content_type":null,"content_length":"145356","record_id":"<urn:uuid:3e5733da-e4d1-47e4-bc13-6dde80b98cb0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Word Problem using compound

December 30th 2010, 03:03 PM, #1 (joined Dec 2010)

Word Problem using compound

hello all! I want to start out by saying I know how to get the answer to this problem... I am specifically looking for the FORMULA to solve a problem like this. Thanks!

STARTING WITH $0.00
I have an income of $10,000 an hour.
I have 30 stock.
To buy 1 stock you must pay $20,000; i.e., in two hours I will be able to buy 1 stock, for a total of 31 stock.
Every time you purchase 1 stock it increases your hourly income by $525. Again, for example, in two hours I will have $20k and be able to buy 1 stock, bringing my hourly income up to $10,525. After two more hours I will have 32 stock and an hourly income of $11,050.

If you purchase 1 stock every time you have an additional $20k, how long would it take to get to 200? (We are starting at 30, so I guess it would be the additional 170 stock.)

I know I can keep adding 525 and figure it out the hard way, but I am curious as to what a formula for a problem like this would look like and how it would work. I appreciate your time!!!

December 30th 2010, 08:49 PM, #2 (MHF Contributor, joined Dec 2007, Ottawa, Canada)

Want to make sure that I follow your "confusion". Do you agree that the following happens during hours 9 to 13?

hour | per-hour income | purchase | balance | # stocks
  9  |     12,100      |          | 18,400  |   34
 10  |     12,100      | -20,000  | 10,500  |   35
 11  |     12,625      | -20,000  |  3,125  |   36
 12  |     13,150      |          | 16,275  |   36
 13  |     13,150      | -20,000  |  9,425  |   37

December 30th 2010, 09:06 PM, #3 (joined Dec 2010)

If that is the correct balance starting at hour 9, then yes, I agree. I am able to do what you just did all the way to 200... but is there a formula to describe it?

December 30th 2010, 09:34 PM, #4 (MHF Contributor, joined Dec 2007, Ottawa, Canada)

I make it that the 200 mark is reached after 90 hours... during this 90th hour, 5 stocks are purchased, the hourly rate having reached $97,675!! Stocks 198 to 202 are purchased, with $11,500 left in the kitty.

I can't right now (too sleepy!) imagine a formula for this. Perhaps someone else will. Happy 2011 to you.
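Since no closed-form formula surfaced in the thread, here is a quick brute-force check (my own sketch, not from the original posts): simulate the process hour by hour, buying a stock each time the balance reaches $20,000. Purchases made during an hour raise the income collected in later hours, which matches the bookkeeping in the table above, and the printed output can be compared against the 90-hour figure quoted in post #4.

```python
income, stocks, balance = 10_000, 30, 0   # starting conditions from post #1
hour = 0
while stocks < 200:
    hour += 1
    balance += income            # collect this hour's income
    while balance >= 20_000:     # buy a stock each time $20k is available
        balance -= 20_000
        stocks += 1
        income += 525            # each stock adds $525 to the hourly income
print(hour, stocks, income, balance)
```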
{"url":"http://mathhelpforum.com/business-math/167152-word-problem-using-compound.html","timestamp":"2014-04-20T08:54:53Z","content_type":null,"content_length":"39193","record_id":"<urn:uuid:4ca2b777-3a97-4631-af73-8ad339f8482b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Cohomology Jumping Loci and the Relative Malcev Completion

Abstract (Summary)

Two standard invariants used to study the fundamental group of the complement X of a hyperplane arrangement are the Malcev completion of its fundamental group G and the cohomology groups of X with coefficients in rank one local systems. In this thesis, we develop a tool that unifies these two approaches. This tool is the Malcev completion S_p of G relative to a homomorphism p from G into (C^*)^N. The relative completion S_p is a prosolvable group that generalizes the classical Malcev completion; when p is the trivial representation, S_p is the Malcev completion of G. The group S_p is tightly controlled by the cohomology groups H^1(X,L_{p^k}) with coefficients in the irreducible local systems L_{p^k} associated to the representation p.

The pronilpotent Lie algebra u_p of the prounipotent radical U_p of S_p has been described by Hain. If p is the trivial representation, then u_p is the holonomy Lie algebra, which is well-known to be quadratically presented. In contrast, we show that when X is the complement of the braid arrangement in complex two-space, there are infinitely many representations p from G into (C^*)^2 for which u_p is not quadratically presented.

We show that if Y is a subtorus of the character torus T containing the trivial character, then S_p is combinatorially determined for general p in Y. We do not know whether S_p is always combinatorially determined. If S_p is combinatorially determined for all characters p of G, then the characteristic varieties of the arrangement X are combinatorially determined.

When Y is an irreducible subvariety of T^N, we examine the behavior of S_p as p varies in Y. We define an affine group scheme S_Y over Y such that if Y = {p}, then S_Y is the relative Malcev completion S_p. For each p in Y, there is a canonical homomorphism of affine group schemes from S_p into the affine group scheme which is the restriction of S_Y to p. This is often an isomorphism. For example, if there exists p in Y whose image is Zariski dense in G_m^N, then this homomorphism is an isomorphism for general p in Y.

Bibliographical Information:
Advisor: Hain, Richard M.
School: Duke University
School Location: USA - North Carolina
Source Type: Master's Thesis
Keywords: mathematics, Malcev completion, hyperplane arrangements, characteristic varieties, Orlik-Solomon algebra, rational homotopy theory, iterated integrals
Date of Publication: 12/12/2007
{"url":"http://www.openthesis.org/documents/Cohomology-Jumping-Loci-Relative-Malcev-269193.html","timestamp":"2014-04-17T00:56:25Z","content_type":null,"content_length":"10025","record_id":"<urn:uuid:4ee6e789-88bc-4d13-8e8b-e8071163d525>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Proceedings of the National Academy of Sciences of the United States of America

This paper was presented at a colloquium entitled "Earthquake Prediction: The Scientific Challenge," organized by Leon Knopoff (Chair), Keiiti Aki, Clarence R. Allen, James R. Rice, and Lynn R. Sykes, held February 10 and 11, 1995, at the National Academy of Sciences in Irvine, CA.

Scale dependence in earthquake phenomena and its relevance to earthquake prediction

KEIITI AKI
Department of Earth Sciences, University of Southern California, Los Angeles, CA 90089-0740

ABSTRACT

The recent discovery of a low-velocity, low-Q zone with a width of 50–200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5–10 Hz for major earthquakes. The temporal correlation between coda Q−1 and the fractional rate of occurrence of earthquakes in the magnitude range 3–3.5, the geographical similarity of coda Q−1 and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q−1 and conductivity at the lower crust support the hypotheses that coda Q−1 may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity.

A starting point for the science of earthquake prediction may be to find what seismogenic structures control the earthquake processes. We have invented jargon to describe structures such as asperities, barriers, breakdown zones, locking depth, and the brittle-ductile transition zone. If there is any distinct structure controlling earthquake processes, it should show up as scale-dependent phenomena—namely, departure from self-similarity. However, if all earthquakes are put together as shown, for example, in the seismic moment-source dimension relation summarized by Abercrombie and Leary (1), the self-similarity roughly holds over the range of source dimension from 10 cm to 100 km, telling us that there may be no universal structure controlling earthquake processes.
There is still a hope that we may find such a controlling structure if we focus our attention on a specific fault zone or a specific seismic region and use techniques with higher resolution and accuracy. To explain why I have such a hope, I shall first describe my encounters with scale-dependent earthquake phenomena in the past two decades.

After I found (2) that the ω-squared scaling law with a constant stress drop independent of magnitude explains the observed source spectral ratios (3) and resolves inconsistencies between surface wave magnitude and body wave magnitude, a natural follow-up was to determine the scaling law in various parts of the earth and map the stress drop globally. During the course of this work, we (4) found that very often the shape of the seismic spectrum does not obey the ω-squared scaling law but stays the same over a considerable range (2–3 orders) of seismic moment. This means that the corner frequency stays at a constant value and stress drop decreases with decreasing magnitude. Rautian et al. (5) have studied the same problem independently in the former Soviet Union and discovered the same phenomena. The existence of a constant corner frequency must mean that there is a unique scale length governing the earthquake phenomena. The value of the constant corner frequency varied from place to place in the range from a few to >10 Hz.

My next encounter with the unique scale length was when Papageorgiou and I (6) were trying to predict the strong motion acceleration power spectra by using the specific barrier model. We found that the shape of the acceleration power spectrum is flat for frequencies higher than the subevent corner frequency, but the flat part is limited to the fmax [term coined by Hanks (7)] beyond which the spectrum sharply decays with increasing frequency (6). We found that fmax is in the range of a few to 10 Hz and shows a slight dependence on magnitude. It was very natural for me to suspect a tie between the constant corner frequency for small earthquakes and the fmax for large earthquakes. Papageorgiou and Aki (6) attributed the origin of fmax to the cohesive zone of an earthquake fault.

My third encounter with a unique scale length associated with earthquakes occurred when Okubo and I (8) were measuring the fractal dimension of the San Andreas fault with the map of fault traces published by the U.S. Geological Survey. We found that there is an upper bound length beyond which the fault ceases to be fractal, and it ranges from a few hundred meters to a kilometer, in agreement with the size of the cohesive zone inferred from fmax.

My fourth encounter with the unique scale length is the finding of a kink in the frequency-magnitude relation at about M=3 from the recordings made at a borehole station within the Newport-Inglewood fault zone in southern California (9). The size of an M=3 earthquake is a few hundred meters, which roughly corresponds to the size of the cohesive zone estimated from fmax.

My most intriguing encounter with a characteristic scale length, however, was the very strong temporal correlation between coda Q−1 and the fractional rate of earthquakes with magnitudes in a certain range (10, 11). On the other hand, the most direct confirmation of the fault zone structure with a characteristic width came from the observation of the seismic guided waves trapped in the fault zone of the Landers earthquake (12).
In the present paper, I focus on these latter two subjects—namely, the coda Q and fault zone trapped modes—in order to infer the interrelations between fault zone structures and earthquake processes.

Three-Dimensional Structure of the 1992 Landers Earthquake Fault Zone

As far as I know, seismic guided waves trapped in a fault zone were first discovered in a three-dimensional vertical seismic profiling experiment conducted in the area surrounding a borehole drilled into the fault zone of the Oroville, California, earthquake of 1975 (13, 14). Records of borehole seismographs placed near the fault zone showed an unusually long-period wave train when the seismic source (a thumper) crossed the fault zone at the surface. The characteristic waveform was attributed to a Love wave-type mode trapped in a low-velocity, low-Q zone. Similar trapped modes were also identified in some of the borehole seismograms obtained at the San Andreas fault near Parkfield, California (15).

The Landers, California, earthquake of 1992 offered a wealth of data for studying various aspects of fault zone trapped modes. By comparing the observed waveform with the synthetic waveform for a low-velocity, low-Q zone, Li et al. (12) estimated a fault zone width around 180 m, with shear velocity of 2.0–2.2 km/sec and a Q value of ≈50. Interestingly, a similar estimate of fault zone width was made in an entirely different study. From a detailed study of tension cracks on the surface, Johnson et al. (16) concluded that the Landers fault rupture is not a distinct slip across a fault plane but rather a belt of localized shearing spread over a width of 50–200 m. They suggested that this might be a common structure of an earthquake fault, which might have been unrecognized previously because the shearing is small, and surficial material is usually not as brittle as in the Landers area. We identify this shear zone with the low-velocity, low-Q zone found from the trapped modes because their widths are virtually the same. Since the trapped modes were observed from aftershocks with focal depths of >10 km, we conclude that the shear zone found by Johnson et al. (16) extends to the same depth.

More recently, Li et al. (17) conducted an active experiment by shooting explosives in the Landers fault zone. They successfully recorded not only the direct trapped modes but also the reflection from the bottom of the fault zone at a depth of ≈10 km. A CALCRUST (California Consortium for Crustal Studies) seismic reflection line was shot in an area of a similar geological setting as the Landers epicentral area. The resultant CDP time section (18) clearly presents the transition from nonreflective upper crust to strongly reflective lower crust, which has been identified with the ductile part of the crust globally (19). The transition occurs at a depth of 10 km, in agreement with the bottom depth of the low-velocity, low-Q zone found from the trapped modes. Thus, combining all these observations, we have strong evidence supporting the hypothesis that the shear zone found near the surface extends to the top of the ductile part of the crust.
Furthermore, if we identify the low-velocity, low-Q zone with the breakdown zone of the specific barrier model (6), the source-controlled fmax will be of the order of rupture velocity divided by the zone width (≈10 Hz), in agreement with our observations. In any case, the shear zone with a width of 200 m can serve only as the micromechanism of earthquakes with a fault length much longer than, say, 1 km. Thus, earthquakes associated with this fault zone must be scale dependent, and we should observe a departure from self-similarity.

Let us now turn to recent results from coda Q−1 studies relevant to the subject of the present paper.

Seismic Coda Waves

When an earthquake occurs in the earth, seismic waves are propagated away from the source. After P waves, S waves, and various surface waves are gone, the area around the seismic source is still vibrating. The amplitude of vibration is uniform in space, except for the local site effect, which tends to amplify the motion at soft soil sites compared to hard rock sites. These residual vibrations are called seismic coda waves, and they decay very slowly with time. The rate of decay is roughly the same independent of the locations of seismic source and recording station, as long as they are located in a given region.

The closest phenomenon to this coda wave is the residual sound in a room, first studied by Sabine (20). If someone shoots a gun in a room, the sound energy remains for a long time because of incoherent multiple reflections. This residual sound has a very stable, robust nature similar to seismic coda waves, independent of the locations where the gun was shot or where the sound in the room was recorded. The residual sound remains in the room because of multiple reflections at the rigid wall, ceiling, and floor of the room. Since we cannot hypothesize any room-like structure in the earth, we attribute seismic coda waves to backscattering from numerous heterogeneities in the earth. We may consider seismic coda as waves trapped in a random medium.

The seismic coda waves from a local earthquake can be best described by the time-dependent power spectrum P(ω|t), where ω is the angular frequency and t is the time measured from the origin time of the earthquake. P(ω|t) can be measured from the squared output of a bandpass filter centered at frequency ω or from the squared Fourier amplitude obtained from a time window centered at t. The most extraordinary property of P(ω|t) is the simple separability of the effects of seismic source, propagation path, and recording site response expressed by the following equation. The coda power spectrum Pij(ω|t) observed at the ith station due to the jth earthquake can be written as

Pij(ω|t) = Sj(ω) Ri(ω) C(ω|t),   [1]

for t greater than about twice the travel time of S waves from the jth earthquake to the ith station. Eq. 1 means that Pij(ω|t) can be written as a product of a term that depends only on the earthquake source, a term that depends only on the recording site, and a term common to all the earthquakes and recording sites in a given region.

The above property of coda waves expressed by Eq. 1 was first recognized by Aki (21) for aftershocks of the Parkfield, California, earthquake of 1966. The condition that Eq. 1 holds for t greater than about twice the travel time of S waves was found by the extensive study of coda waves in central Asia by Rautian and Khalturin (22).
Numerous investigators demonstrated the validity of Eq. 1 for earthquakes around the world, as summarized in a review article by Herraiz and Espinosa (23). In general, Eq. 1 holds more accurately for a greater lapse time t and for higher frequencies (e.g., see ref. 24).

Coda waves are a powerful tool for seismologists because Eq. 1 offers a simple means to separate the effects of source, path, and recording site. The equation has been used for a variety of practical applications, including mapping of the frequency-dependent site amplification factor (e.g., see ref. 25), discrimination of quarry blasts from earthquakes (24), the single station method for determining frequency-dependent attenuation coefficients (26), and normalizing the regional seismic network data to a common source and recording site condition (27).

In the following, I focus on the common decay function C(ω|t) on the right side of Eq. 1. I first introduce coda Q to characterize C(ω|t) in the framework of single-scattering theory and then summarize the current results on what the coda Q is in terms of scattering attenuation and intrinsic absorption. Then, I will survey the spatial and temporal variation in coda Q−1 and describe how they are related to earthquake processes in the lithosphere.

Introducing Coda Q (or Coda Q−1)

The first attempt to predict the explicit form of P(ω|t) for a mathematical model of earthquake source and earth medium was made by Aki and Chouet (28). Their models were based on the following assumptions. (i) Both primary and scattered waves are S waves. (ii) Multiple scatterings are neglected. (iii) Scatterers are distributed randomly with a uniform density. (iv) Background elastic medium is uniform and unbounded.

Assumption i has been supported by various observations, such as the common site amplification (29) and the common attenuation (26) between S waves and coda waves. It is also supported theoretically because the S to P conversion scattering due to a localized heterogeneity is an order of magnitude smaller than the P to S scattering, as shown by Aki (30) with the reciprocal theorem. Zeng (31) showed that the above difference in conversion scattering between P to S and S to P leads to the dominance of S waves in the coda.

Since the observed P(ω|t) is independent of the distance between the source and receiver, we can simplify the problem further by colocating the source and receiver. Then, we find (ref. 28; see also ref. 32 for a more detailed derivation) that

P(ω|t) = (β/2) g(π) |ϕo(ω|βt/2)|²,   [2]

where β is the shear wave velocity, g(θ) is the directional scattering coefficient, and ϕo(ω|r) is the Fourier transform of the primary waves at distance r from the source. g(θ) is defined as 4π times the fractional loss of energy by scattering per unit travel distance of primary waves and per unit solid angle at the radiation direction θ measured from the direction of primary wave propagation. Aki and Chouet (28) adopted the following form for |ϕo(ω|r)|:

|ϕo(ω|r)| = |S(ω)| r⁻¹ exp(−ωr/(2βQc)),   [3]

where |S(ω)| is the source spectrum, r⁻¹ represents the geometrical spreading, and Qc is introduced to express the attenuation. Combining Eqs. 2 and 3, and including the attenuation of scattered waves, we have

P(ω|t) = (β/2) g(π) |S(ω)|² (βt/2)⁻² exp(−ωt/Qc).   [4]

Qc is called coda Q, and Qc−1 is called coda Q−1. The measurement of coda Q according to Eq. 4 is very simple: coda Q−1 is the slope of the straight line fitting the measured ln(t²P(ω|t)) vs. ωt.
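To make the recipe concrete, here is a small Python sketch (my own illustration, not from the paper) that recovers coda Q−1 from a synthetic envelope generated with Eq. 4; the estimate is simply the negative slope of ln(t²P(ω|t)) against ωt. The frequency and Qc values below are placeholders.

```python
import numpy as np

f = 6.0                       # center frequency, Hz (placeholder)
w = 2 * np.pi * f
Qc_true = 200.0               # "true" coda Q used to synthesize the envelope
t = np.linspace(20.0, 45.0, 200)             # lapse-time window, s
P = 3.0 * t**-2 * np.exp(-w * t / Qc_true)   # synthetic P(w|t) from Eq. 4

# coda Q^-1 is the negative slope of ln(t^2 P) versus w*t
slope, _ = np.polyfit(w * t, np.log(t**2 * P), 1)
print("estimated coda Q^-1:", -slope, "  true:", 1 / Qc_true)
```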
Since there is a weak but sometimes significant dependence of the slope on the time window for which the fit is made, it has become a necessary routine to specify the time window for each measured coda Q−1. Because of the simplicity of measurement of coda Q−1, its geographical variation over a large area as well as its temporal variation over a long time can be studied relatively easily. Before presenting those results, however, we need to clarify the physical meaning of coda Q−1.

Physical Meaning of Coda Q−1

The physical meaning of coda Q−1 has been debated for almost 20 years. Within the context of the single scattering theory, coda Q−1 appears to represent an effective attenuation including both absorption and scattering loss. This idea prevailed for some time after Aki (26) found a close agreement between coda Q−1 and Q−1 of S waves measured in the Kanto region, Japan. On the other hand, numerical experiments by Frankel and Clayton (33), laboratory experiments by Matsunami (34), and theoretical studies including multiple scattering effects (e.g., ref. 35) concluded that the coda Q−1 measured from a time window later than the mean free time (mean free path divided by wave velocity) should correspond only to the intrinsic absorption and should not include the effect of scattering loss. The debates concerning this issue were summarized by Aki (36).

To resolve the above issue, attempts have been made to separately determine the scattering loss and the intrinsic loss in regions where coda Q−1 has been measured. For this purpose, it is necessary to include multiple scattering in the theoretical model, either by the radiative energy transfer approach (37) or by the inclusion of several multiple-path contributions to the single-scattering model (38). Recently, Zeng et al. (39) demonstrated that all these approaches can be derived as approximate solutions of the following integral equation for the seismic energy density E(x, t) per unit volume at location x and at time t due to an impulsive point source applied at x₀ at t = 0:

E(x, t) = E₀(t − |x − x₀|/β) e^(−η|x − x₀|) / (4π|x − x₀|²)
          + ∫ [ηs e^(−η|x − x′|) / (4π|x − x′|²)] E(x′, t − |x − x′|/β) dx′,   [5]

where the symbols are defined as follows: E(x, t), seismic energy per unit volume at x and t; β, velocity of wave propagation; η, total attenuation coefficient: η = ηs + ηi (energy decays with distance |x| as exp[−η|x|]); ηi, intrinsic absorption coefficient; ηs, scattering attenuation coefficient; Qs−1 = ηsβ/ω, scattering Q−1; Qi−1 = ηiβ/ω, absorption Q−1; B = ηs/η, albedo; Lc = 1/η, extinction distance; L = 1/ηs, mean free path; βE₀(t), rate of energy radiated from a point source at x₀ at t.

The assumptions underlying Eq. 5 are less restrictive and more explicit than the assumptions used in deriving Eqs. 2 and 4. The background medium is still uniform and unbounded, but scattering coefficients and absorption coefficients are explicitly specified, and all the multiple scatterings are included, although scattering is assumed to be isotropic. Eq. 5 gives the seismic energy density as a function of distance and time, in contrast to Eqs. 2 and 4, which depend on time only. By comparing the predicted with the observed energy density in space and time, we can uniquely determine the scattering loss and the intrinsic absorption separately. An effective method using the Monte Carlo solution of Eq. 5 was developed by Hoshiba et al. (40), who calculated seismic energy integrated over three consecutive time windows (e.g., 0–15, 15–30, and 30–45 sec from the S wave arrival time) and plotted them against their distance from the source.
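The following is a minimal illustration of the Monte Carlo idea behind solving Eq. 5 (my own sketch with placeholder parameter values, not the calibrated setup of the cited studies): particles random-walk with isotropic scattering directions and exponential free paths set by ηs, their weights decay with intrinsic absorption ηi, and binning positions at a fixed lapse time gives the energy density versus distance.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, eta_s, eta_i = 3.5, 0.01, 0.005   # km/s, 1/km, 1/km (placeholders)
n, t_snap = 100_000, 30.0               # particles; snapshot lapse time, s

pos = np.zeros((n, 3))
d = rng.normal(size=(n, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
t, wgt, alive = np.zeros(n), np.ones(n), np.ones(n, bool)

while alive.any():
    # exponential free path, truncated so particles stop exactly at t_snap
    step = rng.exponential(1.0 / eta_s, n)
    step = np.minimum(step, beta * (t_snap - t))
    pos[alive] += d[alive] * step[alive, None]
    wgt[alive] *= np.exp(-eta_i * step[alive])  # intrinsic absorption only
    t[alive] += step[alive] / beta
    alive &= t < t_snap - 1e-9
    k = int(alive.sum())
    if k:                                       # isotropic re-scattering
        nd = rng.normal(size=(k, 3))
        d[alive] = nd / np.linalg.norm(nd, axis=1, keepdims=True)

r = np.linalg.norm(pos, axis=1)
hist, edges = np.histogram(r, bins=30, range=(0, beta * t_snap), weights=wgt)
shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
E = hist / (n * shells)  # fraction of radiated energy per unit volume at t_snap
```

In the actual method, such simulated curves are compared with observed, window-integrated energies to separate ηs from ηi.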
The method has been applied to various parts of Japan (41), Hawaii, Long Valley, California, central California (27), and southern California (42). Although for a more complete understanding of coda Q we need models with nonuniform scattering and absorption coefficients, the results obtained so far assure us empirically that coda Q−1 is bounded rather narrowly between intrinsic Q−1 and total Q−1. With this understanding of coda Q−1, we shall now proceed to the spatial and temporal correlation observed between coda Q−1 and seismicity.

Geographic Variation in Coda Q

The decay rate of coda waves shows a strong geographic variation. For example, Singh and Herrmann (43) found a systematic variation of coda Q at 1 Hz in the conterminous United States, more than 1000 in the central part decaying gradually to 200 in the western United States. The spatial resolution of the map of coda Q obtained by Singh and Herrmann (43) was rather poor, because they had to use distant earthquakes to cover regions of low seismicity. As mentioned earlier, Eq. 1 holds for lapse times greater than about twice the travel time for S waves. For a more distant earthquake, the coda part governed by Eq. 1 starts later, making the region traveled by backscattered waves greater and consequently losing the spatial resolution.

Peng (44) made a systematic study of the spatial resolution of coda Q mapping as a function of the lapse time window selected for measuring coda Q. He used the digital data from the Southern California Seismic Network operated by Caltech and the U.S. Geological Survey and calculated the spatial autocorrelation function of coda Q−1 by the following procedure. Southern California is divided into meshes of size 0.2° (longitude) by 0.2° (latitude), and the average of coda Q−1 is calculated for each mesh by using seismograms that share the midpoint of epicenter and station in the mesh. The average value for the ith mesh is designated as χi. Then two circles of radius r and r + 20 km are drawn with the center at the ith mesh, and the mean of coda Q−1 at midpoints located in the ring between the two circles is calculated and designated as yi(r). The autocorrelation coefficient ρ(r) is computed by the formula

ρ(r) = Σ_{i=1}^{M} (χi − χ̄)(yi(r) − ȳ(r)) / [Σ_{i=1}^{M} (χi − χ̄)² · Σ_{i=1}^{M} (yi(r) − ȳ(r))²]^{1/2},

where M is the total number of meshes, χ̄ is the mean of χi, and ȳ(r) is the mean of yi(r). ρ(r) is calculated for coda Q−1 at four different frequencies (1.5, 3, 6, and 12 Hz) and three different lapse time windows—namely, 15–30, 20–45, and 30–60 sec measured from the origin time.

As shown in Figs. 1–3, the autocorrelation functions are similar among different frequencies but clearly depend on the selected time window. The longer and later time window gives the slower decay in the autocorrelation with the distance separation. If we define the distance at which the correlation first comes close to 0 as the coherence distance, the average coherence distance is ≈135 km for the time window 30–60 sec, 90 km for the window 20–45 sec, and 45 km for the window 15–30 sec.

The above observation offers strong support to the assumption that coda waves are composed of S to S back-scattering waves, because the distances traveled by S waves with a typical crustal S wave velocity of 3.5 km/sec in half the lapse time 60, 45, and 30 sec are, respectively, 105, 79, and 53 km, which are close to the corresponding coherence distances—namely, 135, 90, and 45 km.
In other words, the coda Q−1 measured from a time window represents the seismic attenuation property of the earth's crust averaged over the volume traversed by the singly back-scattered S waves.

Ouyang and Aki (45) constructed maps of coda Q−1 for various frequencies and time windows using the enormous digital data from the Southern California Seismic Network, now available at the Southern California Earthquake Center data center. To construct a map of coda Q−1, I first assign each coda Q−1 measurement to the midpoint between the station and the earthquake epicenter. I then average coda Q−1 measurements for midpoints lying within a 0.2°×0.2° region and plot the average value at the center of this region. Fig. 4 shows an example of such a map for the center frequency 12 Hz and the coda window 20–30 sec from the origin time. The values of the averages are represented by different sizes of dots, with larger dots for larger coda Q−1, corresponding to averaged coda Q−1 (10−3) within a range as indicated on the right.

Coda Q−1 measurements made from 8931 seismograms were used to construct the map shown in Fig. 4. These seismograms are from 2446 earthquakes recorded at 161 seismic stations. The hypocentral distances are shorter than 35 km in order to meet the condition that the coda window starts at a lapse time later than twice the S wave arrival time. The time window of 20–30 sec corresponds to the sampling region of radius 30–45 km.

FIG. 1. Spatial autocorrelation function of coda Q−1 at various frequencies measured from the time window 15–30 sec in southern California, obtained by Peng (44).

FIG. 2. Spatial autocorrelation function of coda Q−1 at various frequencies measured from the time window 20–45 sec in southern California, obtained by Peng (44).

FIG. 3. Spatial autocorrelation function of coda Q−1 at various frequencies measured from the time window 30–60 sec in southern California, obtained by Peng (44).

The map shows that the regional average of coda Q−1 for 12 Hz is 1.4×10−3. The geographic variation is rather small; 50% of the sites are within the range 1.2−1.5×10−3. There is a systematically low Q−1 value along the peninsular range, sandwiched between NW-SE trending high Q−1 zones. This pattern is remarkably similar to the map of P velocity at a depth of 20 km obtained by Hu et al. (46) by polarization inversion; high velocity corresponds to high Q (low Q−1), and low velocity corresponds to low Q. The peninsular range also shows a high isostatic anomaly, indicating high density according to Griscom and Jachens (47). The above correlation suggests that the coda Q−1 may reflect the material properties in the lower crust.

Temporal Change in Coda Q

Chouet (48) was the first to observe a significant temporal change in coda Q at Stone Canyon, California, which could not be attributed to changes in instrument response or in the epicenter locations, focal depths, or magnitudes of earthquakes used for the measurement. The change was associated with neither the rainfall in the area nor the occurrence of any particular earthquake but showed a weak negative correlation with the temporal change in a seismicity parameter called b value (49). The b value is defined in the Gutenberg-Richter formula log N = a − bM, where N is the frequency of earthquakes with magnitude greater than M.
Numerous studies made since (see ref. 50 for a critical review of early works) revealed that the temporal correlation between coda Q−1 and seismicity is not as simple as the spatial correlation described in the preceding section. In a number of cases (51–56), coda Q−1 shows a peak during a period of 1–3 years before the occurrence of a major earthquake. A similar precursory pattern showed up also before the 1989 Loma Prieta earthquake in central California and the Landers earthquake in southern California (57). From the study of coda Q−1 over a period of >50 years for both central and southern California, Jin and Aki (57) had to conclude that the coda Q−1 precursor is not reliable, because a similar pattern sometimes is not followed by a major earthquake, and some major earthquakes were not preceded by the pattern.

A rather surprisingly consistent observation made by these studies is that coda Q−1 tends to take a minimum value during the period of high aftershock activity (51, 52, 55), except for the recent Northridge earthquake (45). Furthermore, Tsukuda (58) found in the epicentral area of the 1983 Misasa earthquake that a period of high coda Q−1 from 1977 to 1980 corresponds to a low rate of seismicity (quiescence). These observations suggest that the temporal change in coda Q−1 may be related primarily to creep fractures in the ductile part of the lithosphere rather than in the shallower brittle part.

Several convincing cases were made also for the temporal correlation between coda Q−1 and b value. The result was at first puzzling because the correlation was negative in some cases (49, 53, 59) and positive in other cases (10, 58). To resolve this puzzle, Jin and Aki (10) proposed the creep model, in which creep fractures near the brittle-ductile transition zone of the lithosphere are assumed to have a characteristic size in a given seismic region. The increased creep activity in the ductile part would then increase the seismic attenuation and at the same time produce stress concentration in the upper brittle part favoring the occurrence of earthquakes with magnitude Mc corresponding to the characteristic size of the creep fracture. Then, if Mc is in the lower end of the magnitude range from which the b value is evaluated, the b value would show a positive correlation with coda Q−1, and if Mc is in the upper end the correlation would be negative.

The creep model is consistent with the observed behaviors of coda Q−1 during the periods of aftershocks and quiescence mentioned earlier. Another support for the deeper source of the coda Q−1 change comes from the observed coincidence between a large increase in coda Q−1 in southern California during 1986 and 1987 (10, 44) and the increase in electrical conductivity in the same region (60), which is attributed to the lower crust.

FIG. 4. The average coda Q−1 for midpoints (of the epicenter and the receiver) lying within a 0.2°×0.2° region for frequency 12 Hz and time window 20–30 sec. Larger circles correspond to greater coda Q−1 as indicated on the right (in 10−3).

FIG. 5. Comparison between temporal variations of πfQc−1 (f ≈ 2 Hz) and fractional frequency of earthquakes with magnitude 4.0 < M < 4.5, for central California, from Jin and Aki (11).

FIG. 6. Cross-correlation function between the two time series shown in Fig. 5.
If the creep model is correct, the strongest correlation should be found between coda Q−1 and the rate of occurrence of earthquakes with Mc, and the correlation should always be positive. Indeed, Jin and Aki (11) found a remarkable positive correlation between coda Q−1 and the fraction of earthquakes in the magnitude range Mc < M < Mc + 0.5 for both central and southern California. Fig. 5 shows the result for central California, where the appropriate choice of Mc is 4.0. The correlation is highest (0.84) for the zero time lag and decays symmetrically with the time shift, as shown in Fig. 6. A very similar result is obtained for southern California, where the appropriate choice of Mc is 3.0. The correlation is again the highest (0.81) at the zero time lag.

Thus, my current working hypothesis is that the temporal change in coda Q−1 reflects the activity of creep fractures in the ductile part of the lithosphere. The ductile part of the lithosphere is larger than the brittle part. The deformation in the ductile part is the source of stress in the brittle part. Although it was found that the coda Q−1 precursor is not reliable, the study of spatial and temporal variation in coda Q−1 may still be promising for understanding the loading process that leads to earthquakes in the brittle part.

Discussion

The characteristic magnitude Mc attributed to the characteristic scale length of creep fracture in southern California is 3.0, which corresponds to a fault length of a few hundred meters. The closeness of this length to the fault zone width estimated from trapped modes suggests a generic relation between them. Our creep model may be relevant for understanding some of the intriguing precursory phenomena. For example, the decrease in b value coincident with the increase in coda Q−1 before the Tangshan earthquake of 1976 (53) may be attributed to the activated creep fracture in the ductile crust with scale length corresponding to an Mc value of 4–5, which increased the stress in the brittle part of the crust.

As mentioned earlier, the overall self-similarity governing earthquakes with source dimension from 10 cm to 100 km requires a discrete hierarchy of characteristic scale lengths. Recently, Sornette and Sammis (61) reported a logarithmic periodicity in the precursory seismicity before the Loma Prieta earthquake of 1989. The Renormalization Group equation which leads to the logarithmic periodicity is discrete. One jumps from one time to another by a finite amount, implying the existence of a discrete hierarchy of characteristic scales. We may be at a threshold of building a truly physical theory of earthquake prediction based on the well-defined structure of the seismogenic zone.

This work was supported by the Southern California Earthquake Center under National Science Foundation Cooperative Agreement EAR-8920136 and U.S. Geological Survey Cooperative Agreement 14-08-0001-A0899 and in part by Department of Energy Grant DE-FG03-87ER13807.

1. Abercrombie, R.E. & Leary, P.C. (1993) Geophys. Res. Lett. 20, 1511–1514.
2. Aki, K. (1967) J. Geophys. Res. 72, 1217–1232.
3. Berckhemer, H. (1962) Gerlands Beitr. Geophys. 72, 5–26.
4. Chouet, B., Aki, K. & Tsujiura, M. (1978) Bull. Seismol. Soc. Am. 68, 49–79.
5. Rautian, T.G., Khalturin, V.I., Martinov, V.G. & Molnar, P. (1978) Bull. Seismol. Soc. Am. 68, 749–792.
6. Papageorgiou, A.S. & Aki, K. (1983) Bull. Seismol. Soc. Am. 73, 953–978.
7. Hanks, T.C. (1982) Bull. Seismol. Soc. Am. 72, 1867–1880.
8. Okubo, P.G. & Aki, K. (1987) J. Geophys. Res. 92, 345–355.
9. Aki, K. (1987) J. Geophys. Res. 92, 1349–1355.
10. Jin, A. & Aki, K. (1989) J. Geophys. Res. 94, 14041–14059.
11. Jin, A. & Aki, K. (1993) J. Geodyn. 17, 95–120.
12. Li, Y.-G., Aki, K., Adams, D., Hasemi, A. & Lee, W.H.K. (1994) J. Geophys. Res. 99, 11705–11722.
13. Leary, P.C., Li, Y.-G. & Aki, K. (1987) Geophys. J. R. Astron. Soc. 91, 461–484.
14. Li, Y.-G. & Leary, P.C. (1990) Bull. Seismol. Soc. Am. 80, 1245–1271.
15. Li, Y.-G., Leary, P.C., Aki, K. & Malin, P.E. (1990) Science 249, 763–766.
16. Johnson, A.M., Fleming, R.W. & Cruikshank, K.M. (1994) Bull. Seismol. Soc. Am. 84, 499–510.
17. Li, Y., Aki, K., Chin, B.-H., Adams, D., Beltas, P. & Chen, J. (1995) Seismol. Res. Lett. 66, 39.
18. Li, Y.-G., Henyey, T.L. & Leary, P.C. (1992) J. Geophys. Res. 97, 8817–8830.
19. Mooney, W.D. & Meissner, R. (1992) in Continental Lower Crust, eds. Fountan, D.M., Arculus, R. & Kay, R.W. (Elsevier, Amsterdam), pp. 45–79.
20. Sabine, W.C. (1922) Collected Papers on Acoustics (Harvard Univ. Press, Cambridge, MA).
21. Aki, K. (1969) J. Geophys. Res. 74, 615–631.
22. Rautian, T.G. & Khalturin, V.I. (1978) Bull. Seismol. Soc. Am. 68, 923–948.
23. Herraiz, M. & Espinosa, A.F. (1987) PAGEOPH 125, 499–577.
24. Su, F., Aki, K. & Biswas, N.N. (1991) Bull. Seismol. Soc. Am. 81, 162–178.
25. Su, F., Aki, K., Teng, T., Zeng, Y., Koyanagi, S. & Mayeda, K. (1992) Bull. Seismol. Soc. Am. 82, 580–602.
26. Aki, K. (1980) Phys. Earth Planet. Inter. 21, 50–60.
27. Mayeda, K., Koyanagi, S., Hoshiba, M., Aki, K. & Zeng, Y. (1992) J. Geophys. Res. 97, 6643–6659.
28. Aki, K. & Chouet, B. (1975) J. Geophys. Res. 80, 3322–3342.
29. Tsujiura, M. (1978) Bull. Earthquake Res. Inst. 53, 1–48.
30. Aki, K. (1992) Bull. Seismol. Soc. Am. 82, 1969–1972.
31. Zeng, Y. (1993) Bull. Seismol. Soc. Am. 83, 1264–1277.
32. Aki, K. (1981) in Identification of Seismic Sources—Earthquake or Underground Explosion, eds. Husebye, E.S. & Mykkeltveit, S. (Reidel, Dordrecht, The Netherlands), pp. 515–541.
33. Frankel, A. & Clayton, R.W. (1986) J. Geophys. Res. 91, 6465–6489.
34. Matsunami, K. (1991) Phys. Earth Planet. Inter. 67, 104–114.
35. Shang, T. & Gao, L.S. (1988) Sci. Sin. Ser. V 31, 1503–1514.
36. Aki, K. (1991) Phys. Earth Planet. Inter. 67, 1–3.
37. Wu, R. (1985) Geophys. J. R. Astron. Soc. 82, 57–80.
38. Gao, L.S., Lee, L.C., Biswas, N.N. & Aki, K. (1983) Bull. Seismol. Soc. Am. 73, 377–390.
39. Zeng, Y., Su, F. & Aki, K. (1991) J. Geophys. Res. 96, 607–619.
40. Hoshiba, M., Sato, H. & Fehler, M. (1991) Paper Meteorol. Geophys. 42, 65–91.
41. Hoshiba, M. (1993) J. Geophys. Res. 98, 15809–15824.
42. Jin, A., Mayeda, K., Adams, D. & Aki, K. (1994) J. Geophys. Res. 99, 17835–17848.
43. Singh, S.K. & Herrmann, R.B. (1983) J. Geophys. Res. 88, 527–538.
44. Peng, J.Y. (1989) Thesis (Univ. of Southern California, Los Angeles).
45. Ouyang, H. & Aki, K. (1994) Eos 75, 168 (abstr.).
46. Hu, G., Menke, W. & Powell, C. (1994) J. Geophys. Res. 99, 15245–15256.
47. Griscom, A. & Jachens, R.C. (1990) San Andreas Fault System, California (U.S. Geological Survey, Reston, VA), Professional Paper 1515, pp. 239–260.
48. Chouet, B. (1979) Geophys. Res. Lett. 6, 143–146.
49. Aki, K. (1985) Earthquake Prediction Res. 3, 219–230.
50. Sato, H. (1988) PAGEOPH 126, 465–498.
51. Gusev, A.A. & Lemzikov, V.K. (1984) Vulk. Seismol. 4, 76–90.
52. Novelo-Casanova, D.A., Berg, E., Hsu, H. & Helsley, C.E. (1985) Geophys. Res. Lett. 12, 789–792.
53. Jin, A. & Aki, K. (1986) J. Geophys. Res. 91, 665–673.
54. Sato, H. (1986) J. Geophys. Res. 91, 2049–2061.
55. Faulkner, J. (1988) Thesis (Univ. of Southern California, Los Angeles).
56. Su, F. & Aki, K. (1990) PAGEOPH 133, 23–52.
57. Jin, A. & Aki, K. (1993) J. Geodyn. 17, 95–120.
58. Tsukuda, T. (1988) PAGEOPH 128, 261–280.
59. Robinson, R. (1987) PAGEOPH 125, 579–596.
60. Madden, T.R., LaTorraca, G.A. & Park, S.K. (1993) J. Geophys. Res. 98, 795–808.
61. Sornette, D. & Sammis, C.G. (1995) J. Phys. (Paris), in press.
{"url":"http://www.nap.edu/openbook.php?record_id=5709&page=3740","timestamp":"2014-04-19T14:44:03Z","content_type":null,"content_length":"84871","record_id":"<urn:uuid:71fb78b1-27e1-49b5-82b5-12804610b384>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of a triangle

August 24th 2008, 09:52 PM, #1 (Junior Member, joined Aug 2008)

Area of a triangle

If the area of a triangle with base X is equal to the area of a square with side X, then the altitude of the triangle is
(a) $X/2$

August 24th 2008, 09:56 PM, #2 (Super Member, joined Jun 2008)

Area of a triangle is equal to:
$A = \frac{1}{2} \cdot \text{base} \cdot \text{height}$
Height is also known as altitude.

Area of a square is:
$A = \text{side} \cdot \text{side}$

The problem says that:
$A_\text{triangle} = A_\text{square}$

This info is enough for you to find the answer.

August 24th 2008, 09:57 PM, #3

What we are doing is this: since the base of the triangle is x, the side of the square is x, and we want these two areas to be the same, we see that we have the equation:
$\tfrac{1}{2}x\cdot h=x^2$

Solving for h, we get $h=\frac{2x^2}{x}=\color{red}\boxed{2x}$

Does this make sense?

EDIT: I see... you just won't let Jhevon and I be called Flash... you want to be a part too, Chops... don't ya??
{"url":"http://mathhelpforum.com/geometry/46672-area-triangle.html","timestamp":"2014-04-16T21:59:36Z","content_type":null,"content_length":"37558","record_id":"<urn:uuid:edec2357-b79e-4a7c-99ed-f58b4be898b6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
System Advisor Model (SAM)

SAM's performance models calculate the hourly output of a renewable energy system over a single year (except for the geothermal model, which is based on monthly resource depletion over many years). The documents on this page are published articles and academic dissertations that describe the algorithms used by the different performance models. For more general descriptions of the models and their implementation in SAM, and for instructional materials, see SAM's Help system or the documents and links on the Learning page.

Photovoltaic Models

SAM's photovoltaic performance model relies on separate component models to represent the performance of modules and inverters in the system. SAM's user interface allows you to choose from several component model options, which are described in the documents listed below. A photovoltaic model reference manual is forthcoming (as of February 2014) that describes how the component models are implemented in SAM, and the algorithms not described below such as sun position and incident irradiance calculations, near-object shading, soiling, etc.

Sandia Module Model
King, D.L.; Boyson, W.E.; and Kratochvil, J.A. (2004). "Photovoltaic Array Performance Model." 41 pp.; Sandia Report No. 2004-3535. (PDF 1.8 MB)

Sandia Inverter Model
King, D.L.; Gonzalez, S.; Galbraith, G.M.; and Boyson, W.E. (2007). "Performance Model for Grid Connected Inverters." 47 pp.; Sandia Report No. 2007-5036. (PDF 1.3 MB)

CEC Module Model
Dobos, A.P. (2012). "An Improved Coefficient Calculator for the CEC Photovoltaic Module Model." ASME Journal of Solar Energy Engineering. In press.
De Soto, W.L. (M.S., 2004). "Improvement and Validation of a Model for Photovoltaic Array Performance." University of Wisconsin-Madison. (ZIP 1.8 MB)
Neises, T. (M.S., 2011). "Development and Validation of a Model to Predict the Temperature of a Photovoltaic Cell." University of Wisconsin-Madison. (ZIP 4.4 MB)

One-axis Tracking
Marion, W.; Dobos, A. (2013). "Rotation Angle for the Optimum Tracking of One-Axis Trackers." National Renewable Energy Laboratory. 10 pp.; NREL/TP-6A20-58891. (PDF 388 KB)

Self-shading Calculator for Fixed Tilt Arrays
Deline, C.; Dobos, A.; Janzou, S.; Meydbrey, J.; Donoval, M. (2013). "A Simplified Model of Uniform Shading in Large Photovoltaic Arrays." Solar Energy, Vol. 96, October 2013, pp. 274-282. Draft Preprint (PDF 1.3 MB)

PV Subarray Mismatch
Dobos, A.P. (2012). "Modeling of Annual DC Energy Losses due to Off Maximum Power Point Operation in PV Arrays." [Proceedings] 38th IEEE Photovoltaic Specialists Conference (PVSC '12), 3-8 June 2012, Austin, Texas. Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), pp. 002967-002969; NREL Report No. CP-6A20-55362.

Marion, B.; Adelstein, J.; Boyle, K.; Hayden, H.; Hammond, B.; Fletcher, T.; Canada, B.; Narang, D.; Shugar, D.; Wenger, H.; Kimber, A.; Mitchell, L.; Rich, G.; Townsend, T. (2005). "Performance Parameters for Grid-Connected PV Systems." 9 pp.; NREL Report No. CP-520-37358. (PDF 816 KB)

Marion, W.; Anderberg, M. (2000). "PVWATTS - An Online Performance Calculator for Grid-Connected PV Systems." Proceedings of the ASES Solar Conference, June 15-21, Madison, WI.

Concentrating Solar Power (CSP) Models

The CSP models include models for parabolic trough, power tower, linear Fresnel, and dish-Stirling systems, and a generic solar system model.

Empirical Trough (Based on Excelergy)
Price, H. (2003). "Parabolic Trough Solar Power Plant Simulation Model." Proceedings of the ISEC 2003: International Solar Energy Conference, 15-18 March 2003, Kohala Coast, Hawaii. New York: American Society of Mechanical Engineers. 665-673 pp.; NREL Report No. CP-550-34742. (PDF 548 KB)

Physical Trough Model
Wagner, M.J.; Gilman, P. (2011). "Technical Manual for the SAM Physical Trough Model." 124 pp.; NREL Report No. TP-5500-51825. (PDF 3.7 MB)

Dish Stirling
Fraser, P. (M.S., 2008). "Stirling Dish System Performance Prediction Model." University of Wisconsin-Madison. (ZIP 1.8 MB)

Power Tower Molten Salt
Wagner, M. (M.S., 2008). "Simulation and Predictive Performance Modeling of Utility-Scale Central Receiver System Power Plants." University of Wisconsin-Madison. (ZIP 32.3 MB)

Power Tower Direct Steam
Neises, T.; Wagner, M. (2012). "Simulation of Direct Steam Power Tower Concentrated Solar Plant." ASME 6th International Conference on Energy Sustainability, July 23-26, 2012.

Power Tower Field Optimization
Kistler, B. (1986). "A User's Manual for DELSOL3: A Computer Code for Calculating the Optical Performance and Optimal System Design for Solar Thermal Central Receiver Plants." Sandia Report No. SAND86-8018. (PDF 10 MB)

Feierabend, L. (M.S., 2009). "Thermal Model Development and Simulation of Cavity-Type Solar Central Receiver Systems." University of Wisconsin-Madison. (ZIP 5.0 MB)

Linear Fresnel
Wagner, M.; Zhu, G. (2012). "A Direct-steam Linear Fresnel Performance Model for NREL's System Advisor Model." NREL Conference Paper CP-5500-55044. (PDF 647 KB)

Wagner, M. (2012). "Results and Comparison from the SAM Linear Fresnel Technology Performance Model: Preprint." NREL Conference Paper CP-5500-54758. (PDF 726 KB)

Generic Solar System Model
Wagner, M.J.; Zhu, G. (2011). "Generic CSP Performance Model for NREL's System Advisor Model: Preprint." 10 pp.; NREL Report No. CP-5500-52473. (PDF 729 KB)

Biomass Power
Jorgenson, J.; Gilman, P.; Dobos, A. (2011). "Technical Manual for the SAM Biomass Power Generation Model." 40 pp.; NREL Report No. TP-6A20-52688. (PDF 728 KB)

Geothermal Power
SAM's geothermal power model is based on the U.S. Department of Energy's Geothermal Electricity Technology Evaluation Model (GETEM). See the GETEM Manuals and Revision Notes.

Wind Power
Freeman, J.; Gilman, P.; Jorgenson, J.; Ferguson, T. (DRAFT 2013). "Reference Manual for the System Advisor Model's Wind Performance Model." (PDF 815 KB)

Quinlan, P.J.A. (M.S., 1996). "Time Series Modeling of Hybrid Wind Photovoltaic Diesel Power Systems." University of Wisconsin-Madison. (ZIP 2.1 MB)

Statistical Analysis

The following documents describe the Latin hypercube sampling (LHS) method that SAM uses for the statistical analysis simulation option (see SAM Help - Statistical).

For a description of the LHS method implemented in SAM, see Wyss, G.; Jorgensen, K. (1998). "A User's Guide to LHS: Sandia's Latin Hypercube Sampling Software." Sandia National Laboratories. SAND98-0210. 140 pp. (PDF 559 KB)

For a more general discussion of sampling-based uncertainty analysis, see Helton, J.; Davis, F. (2000). "Sampling-Based Methods for Uncertainty and Sensitivity Analysis." SAND99-2240. 121 pp. (PDF 5 MB)

For a basic description of the Latin hypercube sampling method, see the Wikipedia article "Latin hypercube sampling."
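As a rough illustration of the sampling idea (this is not Sandia's LHS code that SAM actually uses; see the Wyss and Jorgensen manual above for that), a basic Latin hypercube sampler can be written in a few lines: each dimension's unit interval is split into one stratum per sample, a point is drawn uniformly inside each stratum, and the strata are permuted independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    rng = np.random.default_rng(seed)
    # one stratum per sample in each dimension, a uniform draw inside each
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):                  # decouple the dimensions
        u[:, d] = rng.permutation(u[:, d])
    return u                                 # samples on [0, 1)^n_dims

samples = latin_hypercube(100, 2, seed=1)
# map a column to a model input, e.g. a wind speed taken uniform on [3, 12) m/s
wind_speed = 3.0 + 9.0 * samples[:, 0]
```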
{"url":"https://sam.nrel.gov/reference","timestamp":"2014-04-16T16:05:03Z","content_type":null,"content_length":"22609","record_id":"<urn:uuid:ed645f9e-0504-4862-99f5-b00e02119ccb>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Graham, WA Trigonometry Tutor

Find a Graham, WA Trigonometry Tutor

...Regardless of the subject, I would say I am effective at recognizing patterns. I love sharing any shortcuts or tips that I discover. I have taken 2 quarters of Discrete Structures (Mathematics) at University of Washington, Tacoma. I earned a 4.0 each quarter.
16 Subjects: including trigonometry, chemistry, French, calculus

...I am uniquely qualified to tutor trigonometry, with a PhD in Aeronautical and Astronautical Engineering from the University of Washington and more than 40 years of project experience in science and engineering. The coursework for my Ph.D. included an extensive amount of mathematics, including ca...
21 Subjects: including trigonometry, chemistry, English, calculus

...During this time I also volunteered with the YMCA Special Olympics program and am very comfortable working with special needs children. I am qualified to tutor Study Skills due to my time spent in earning my A.A. and B.S. in Biology degree. In total I have earned 173 semester hours, and have been...
25 Subjects: including trigonometry, chemistry, physics, statistics

With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I cannot promise a quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including trigonometry, calculus, statistics, GRE

...Able to quickly learn and successfully apply new systems, applications, policies, and procedures. I have provided overall technical coordination while ensuring on-time delivery of products to government or commercial entities. Having designed graphical user interfaces and applications for the government over the past 15 years makes me a well-rounded tutor to teach programming.
53 Subjects: including trigonometry, English, reading, geometry
{"url":"http://www.purplemath.com/graham_wa_trigonometry_tutors.php","timestamp":"2014-04-16T22:09:19Z","content_type":null,"content_length":"24090","record_id":"<urn:uuid:126efff3-f167-4c74-b172-b13867d31811>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
IMAP: Integrating Mathematics and Pedagogy: Overview Integrating Mathematics and Pedagogy (IMAP): An Investigation of the Effects on Elementary Preservice Teachers' Beliefs and Learning of Mathematics Grant funded to San Diego State University Foundation by the National Science Foundation and Department of Education, 1999 - 2002 In all disciplines but education, students enter as novices in need of learning the culture and terrain of the discipline. But in education, students enter as insiders with many years of experience during which they have developed deep-seated beliefs. Their beliefs about mathematics and their beliefs about how people learn mathematics interfere with their reconceptualizing mathematics in ways that will help them teach mathematics effectively. Their expectations of what they should learn in mathematics content courses constrain what they do learn. The research team for this project will undertake a series of experimental and qualitative studies that will lead us to better understand the effect of carefully designed early field experiences, coupled with mathematics content courses, on the beliefs and the mathematical growth of prospective elementary teachers of mathematics. Might integrating content with pedagogy change initial beliefs that are in conflict with the present consensus on how mathematics is learned and how it should be taught? How do beliefs affect what is learned in content courses? Do these effects depend on whether the integration of content and pedagogy occurs early in the prospective teachers' program or late in the program? How do variables such as age and experience with children affect the findings? Is the use of software programs developed for children and aimed at developing conceptual understanding, then used in both content courses and in early field experiences, effective in helping prospective teachers focus on conceptual learning and understanding? Because San Diego State University has a large teacher preparation program in which prospective elementary teachers take four content courses in mathematics and one course in methods of teaching mathematics, we have opportunities to experimentally investigate these questions with large groups of students. Studying these questions will require instruments more sensitive to change than those that now exist. Thus we will first need to develop instruments that will □ (a) provide accurate measures of beliefs prospective teachers hold and of how these beliefs change over time, □ (b) measure prospective teachers' depth of understanding of the school mathematics they will be expected to teach, and □ (c) determine the similarities and differences between what novices and experts attend to when observing teaching situations. These findings will help guide development of both the belief assessment and the field-based course. Initially a library of videoclips will be collected. Then cutting-edge eye-scanning technology will be used to determine what those in different groups (preservice teachers, inservice teachers, and university mathematics educators) attend to while viewing teaching and learning episodes. Furthermore, estimates of observers' cognitive processing as they view the videotapes of teaching situations will be made. The results will be used in designing the content course and the beliefs assessment. The beliefs assessment will consist of videoclips with an accompanying questionnaire that can be used not only in this project but also in other studies of teacher change. 
The research results will offer other universities a knowledge base for planning effective teacher preparation programs. The model used here can be extended to secondary preparation and to other disciplines.
{"url":"http://www.sci.sdsu.edu/CRMSE/IMAP/overview.html","timestamp":"2014-04-17T21:45:58Z","content_type":null,"content_length":"12646","record_id":"<urn:uuid:f846b608-42fd-47d3-bb60-85f689141bc8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Wilmington, CA Algebra 1 Tutor Find a Wilmington, CA Algebra 1 Tutor ...I am an ordained pastor with Calvary Chapel, and the creator of the Through the Word mobile app for daily Bible studies. I have taught through 33 books of the Bible, and written and recorded over 100 Bible Audio Guides. Public speaking can be just as easy as private speaking - it just takes some practice and a little help. 24 Subjects: including algebra 1, Spanish, calculus, writing ...I have tutored elementary, high school, and college students, and I'm happy to say, I have been able to help them all. I have tutored subjects from reading comprehension to neuroscience. At this time in my life I am a research assistant at USC in the field of neuroscience. 26 Subjects: including algebra 1, English, reading, calculus I earned a Bachelors of Science in Chemical Engineering from UC Irvine in 2013. I have 2+ years of tutoring experience: In high school, I taught pre-Algebra, Algebra, Trigonometry, Pre-Calculus, and Calculus to both groups and individual students. I have an SAT score of 1980. 12 Subjects: including algebra 1, chemistry, geometry, algebra 2 If experience is what you are looking for, you have come to the right place. I have recently earned my associate's degree in Mathematics at Golden West College. I am currently employed through the college and have been tutoring for about 1.5 years. 5 Subjects: including algebra 1, algebra 2, precalculus, trigonometry ...It's such a joy for me to see improvement in the students I work with. For example, one of my previous students received an award for Most Improved Student at the end of the academic year that I worked with her. I try to be efficient with time during each session. 10 Subjects: including algebra 1, Spanish, elementary math, grammar
{"url":"http://www.purplemath.com/Wilmington_CA_algebra_1_tutors.php","timestamp":"2014-04-17T01:40:23Z","content_type":null,"content_length":"24291","record_id":"<urn:uuid:6edd7b1f-6b85-49a2-a757-67fe9cf94e72>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Tech Briefs
FSI Analysis of a Hydroelectric Power Plant
Hydroelectric power plants are used for economic and environmentally friendly electric power generation all around the world. One of the technical challenges in the design of such structures is to consider the fluid-structure interaction effects. For example, when the water pressure fluctuates as it passes through the fluid passage and the hydraulic turbine, it creates a pulsatile loading that causes vibration of the powerhouse structure. In this Brief, we present some results of a study dealing with the vibrations of a powerhouse structure due to the fluid-structure interaction, obtained using ADINA.
Figure 1 depicts a schematic of the powerhouse structure, the fluid passage and the turbine. Figure 2 shows the finite element model of the coupled system. The nonlinear transient response of the coupled system was solved using 3,000 implicit integration time steps, with a time step of 0.002 seconds (see the reference below). The fluid was modeled as a Navier-Stokes fluid. The turbulent behavior of the fluid was modeled using the shear stress transport (SST) model. The sliding mesh boundary condition was used to incorporate the large rotations of the turbine blades.
Figure 1 Schematic of the powerhouse
Figure 2 Finite element mesh of the (a) powerhouse structure (b) turbine (c) fluid
Figure 3 Velocity vector plots (left) and nodal pressure contour plots (right) in the fluid at t = 4.80 s
Figure 3 shows velocity vector and pressure contour plots in the fluid model at time t = 4.80 s. The movies at the top show the z-displacement contour plot of the powerhouse structure, and its time-dependent deformed shape due to the pressure loading from the fluid. The movie below shows the contour plot of the pressure in the fluid.
This case study shows some of the powerful capabilities of ADINA for solving practical fluid-structure interaction problems. For an overview of the fluid-structure interaction analysis capabilities, please visit the following page: Fluid-structure interaction
Also, some other applications of ADINA in the turbomachinery industry are presented in the following case studies:
Reference
• S. Wei, L. Zhang, "Vibration analysis of hydropower house based on fluid-structure coupling numerical method", Water Science and Engineering, 3(1): 75-84, 2010
Keywords: Hydroelectric power, fluid-structure interaction, FSI, hydraulic turbine, draft tube, powerhouse structure, vibration, sliding mesh, shear stress transport (SST), transient dynamic
Courtesy of Shu-he Wei and Liao-jun Zhang, College of Water Conservancy and Hydropower Engineering, Hohai University, Nanjing 210098, P.R. China
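To give a feel for the kind of implicit transient integration the Brief describes, here is a minimal Python sketch (my illustration, not ADINA's implementation): a single-degree-of-freedom structure driven by a pulsatile pressure load, stepped with backward Euler using the study's 3,000 steps of dt = 0.002 s. The structural parameters m, c, k and the pressure() function are made-up placeholders.

import math

# Toy model: m*x'' + c*x' + k*x = p(t), integrated implicitly.
m, c, k = 1.0, 0.5, 200.0        # hypothetical mass, damping, stiffness
dt, nsteps = 0.002, 3000         # as in the study above
x, v = 0.0, 0.0                  # displacement and velocity

def pressure(t):
    # stand-in for the pulsatile hydraulic loading
    return 10.0 * math.sin(2 * math.pi * 5.0 * t)

for n in range(1, nsteps + 1):
    t = n * dt
    # Backward Euler: v_new = v + dt*(p - c*v_new - k*x_new)/m,
    #                 x_new = x + dt*v_new  =>  solve for v_new:
    v_new = (m * v + dt * (pressure(t) - k * x)) / (m + dt * c + dt**2 * k)
    x = x + dt * v_new
    v = v_new

print(f"displacement at t = {nsteps * dt:.1f} s: {x:.4e}")

The real analysis of course couples a full Navier-Stokes fluid to a 3D structural mesh; this sketch only mirrors the time-stepping idea.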
{"url":"http://adina.com/newsgH98.shtml","timestamp":"2014-04-21T12:09:04Z","content_type":null,"content_length":"17165","record_id":"<urn:uuid:74cc78db-b62e-474f-9628-8e3021a24a73>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Yes, the Preseason Matters
Every preseason we hear the same refrain from talking heads and fans alike: “the preseason doesn’t matter”. With the Oilers holding down a fairly sexy 2013 preseason record of 5-1-1 for a points percentage (PTS%) of 78.6%, it’s worthwhile to question whether that actually means anything. I’m happy to report that it does.
I compiled the preseason PTS% of every team’s preseason between 2005-06 and 2011-12, along with how those teams subsequently did in each regular season. This resulted in 210 observations (30 teams * 7 seasons). What I wanted to do was regress the regular season points percentage on the preseason points percentage — basically, can you use preseason PTS% to predict regular season PTS%? Here’s the result:
Our R-squared is only 0.053, which doesn’t sound very high, but with this many observations we have enough statistical power to detect even small effects. Our preseason PTS% coefficient has a t-stat of 3.4, and a p-value of 0.0008. What that means is that, if a team’s preseason points percentage truly had no explanatory power in predicting its regular season points percentage, there would be only a 0.08% chance of seeing an estimate this strong. This obviously passes a 95% test of significance, so we can say that preseason performance does matter.
Here’s a more interesting way to show the relationship:
It kind of looks like a random cloud, but adding a linear trendline does show a positive relationship between the two variables. Generally, the better you do in the preseason, the better you’ll do in the regular season. It may be a slight influence, but it’s there and it’s statistically significant.
Here’s my favourite table:
Preseason PTS% | Avg. regular season PTS% | Playoff competitive
below 42.5%    | 53.3%                    | 40%
42.5% to 65%   | 55.4%                    | 53%
above 65%      | 58.3%                    | 74%
Teams that have preseason points percentages below 42.5% have an average regular season points percentage of 53.3%. They are also only playoff competitive 40% of the time, and non-competitive 60% of the time (I defined “playoff competitive” as having greater than 55% points percentage in the regular season, as this is generally the level where teams lock up a playoff spot or fall just a point or two short).
Teams that have a greater than 65% points percentage in the preseason go on to average a 58.3% regular season points percentage. These teams are playoff competitive 74% of the time, and non-competitive only 26% of the time.
Teams between those two extremes average a 55.4% regular season points percentage, and have a 53% chance of being playoff competitive.
The Oilers playing well in the preseason does mean something. If they win tonight, they’ll have a points percentage above 80% in the preseason. Historically, teams above that 80% mark in the preseason average a 59.7% points percentage in the regular season, and have an 87% chance of being playoff competitive. So cheer for them tonight! It means something :].
3 Comments
1. Go, Data Points, Go!
2. I have bookmarked this page, count on me to be back when the Oilers don’t make the playoffs again this year.
3. I think you probably want to use a prediction interval rather than a confidence interval here. That would mean your interval for points would be a good bit wider here.
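For readers who want to run this kind of check themselves, here is a minimal template of the same regression in Python using only NumPy. The data below are synthetic stand-ins (the post's actual 210-team dataset is not included), so treat this as a sketch of the method, not the author's code.

import numpy as np

# Hypothetical stand-in data: one row per team-season (the post used 210).
rng = np.random.default_rng(0)
pre = rng.uniform(0.2, 0.9, size=210)                 # preseason PTS%
reg = 0.50 + 0.10 * pre + rng.normal(0, 0.07, 210)    # regular season PTS%

# Ordinary least squares: reg = b0 + b1 * pre
X = np.column_stack([np.ones_like(pre), pre])
beta, *_ = np.linalg.lstsq(X, reg, rcond=None)
resid = reg - X @ beta
r2 = 1 - resid.var() / reg.var()

# t-statistic for the slope b1 in simple regression
sigma2 = resid @ resid / (len(reg) - 2)
se_b1 = np.sqrt(sigma2 / ((pre - pre.mean()) ** 2).sum())
print(f"slope = {beta[1]:.3f}, R^2 = {r2:.3f}, t = {beta[1] / se_b1:.2f}")

With real preseason/regular-season data in pre and reg, this prints the same slope, R-squared, and t-statistic quoted in the post.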
{"url":"http://www.boysonthebus.com/2013/09/27/yes-the-preseason-matters/","timestamp":"2014-04-19T14:33:39Z","content_type":null,"content_length":"19528","record_id":"<urn:uuid:a327241e-2c35-4939-bf08-2276ec264f32>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Palos Verdes Estates, CA Algebra 2 Tutor Find a Palos Verdes Estates, CA Algebra 2 Tutor ...I specialize in high school math subjects like Pre-Algebra, Algebra, Algebra 2/Trigonometry, Precalculus and Calculus. I can also tutor college math subjects like Linear Algebra, Abstract Algebra, Differential Equations, and more. My teaching strategy is to emphasize the importance of dedicatio... 9 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I teach students how to become more organized, to plan ahead, and to use their study time appropriately. I encourage students to use an assignment book and track their homework due dates, test dates, and plan accordingly. Other study skills that students learn while working with me include goal-setting, prioritizing, and predicting possible test questions. 24 Subjects: including algebra 2, chemistry, writing, geometry ...I have scored above the 95th percentile on every standardized test I have taken, including the PSAT, SAT, and LSAT. I know the subject matter, but more importantly, I know what to do to get the highest score possible on these tests, and I can quickly teach these tips and techniques to you. I pr... 63 Subjects: including algebra 2, chemistry, English, ASVAB ...I was co-captain of that swim team for 1 year. I also use to be a life guard. I have 5+ years of training in school and with various actor's studios such as: New York Film Academy, John Robert Powers, and Saturday Actor's Studio. 14 Subjects: including algebra 2, calculus, physics, algebra 1 ...I am a graduating senior at Caltech in physics. I've been a tutor at Caltech in a few advanced physics and math courses and used to tutor a lot in high school. I'm looking to attend graduate school in the near future in experimental condensed matter physics. 26 Subjects: including algebra 2, calculus, geometry, physics
{"url":"http://www.purplemath.com/Palos_Verdes_Estates_CA_algebra_2_tutors.php","timestamp":"2014-04-19T17:11:36Z","content_type":null,"content_length":"24752","record_id":"<urn:uuid:3b50d791-edf4-42cf-aa71-04f26c606754>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
ContourPlot3D in spherical coordinates (self.Mathematica)
submitted by ooglag
I'm trying to plot a ~~3D~~ 4D surface in spherical coordinates and the best way I've found to do that is using ContourPlot3D. By default ContourPlot3D uses cartesian coordinates. In my plot code I used the With[] function to change coordinates. My output plots, though, appear fragmented. I think this is because of the zeros that arise from changing to the spherical coordinate system. Is there any way around this? I've attached my code below.

wf[n_, \[ScriptL]_, m_, r_, \[Theta]_, \[Phi]_] :=
  Sqrt[(2/(n a))^3 (n - \[ScriptL] - 1)!/(2 n ((n + \[ScriptL])!)^3)] *
  E^(-r/(n a)) ((2 r)/(n a))^\[ScriptL] *
  LaguerreL[(n - \[ScriptL] - 1), (2 \[ScriptL] - 1), ((2 r)/(n a))] *
  SphericalHarmonicY[\[ScriptL], m, \[Theta], \[Phi]]

pd[n_, \[ScriptL]_, m_, r_, \[Theta]_, \[Phi]_] :=
  FullSimplify[wf[n, \[ScriptL], m, r, \[Theta], \[Phi]]\[Conjugate] *
    wf[n, \[ScriptL], m, r, \[Theta], \[Phi]]]

conPlot3D[3, 1, 0, r, \[Theta], \[Phi], 40]

And here are some images. The first image is the plot generated from the code above and the second and third images are what I expect to see. http://imgur.com/a/oCuGx
Thank you in advance!

[–]quantum-mechanic
Oh, look, orbitals! As my username suggests, I have spent a reasonable amount of time making orbital plots in Mathematica. I have done the coordinate substitutions using /. though I don't know if that would make a difference. One thing that's not clear to me -- how is the contour value being picked, i.e. the value c at which you want to plot all those (r, theta, phi) such that psi(r, theta, phi) = c? Have you checked to make sure your pd is actually producing a pure real result? In the past when I've done that sort of thing I had to use a Re[] to get rid of a nonexistent imaginary part that confused plotting and numerical routines. I can do 2D contour plots in e.g. the xy plane without a problem. It's been a while since I tried doing what you're doing.

[–]ooglag[S]
Great suggestions. I'm going to try the /. (replace all) for my variable substitutions, and I'll also try specifying a specific contour to plot. And good call on the function outputting imaginary answers. I'll try that too. The only change I had to make was switching from using the With[] function for my coordinate conversions to using /. for my coordinate conversions. See the new code below.

conPlot3D[n_, \[ScriptL]_, m_, r_, \[Theta]_, \[Phi]_, win_] :=
  ContourPlot3D[
    pd[n, \[ScriptL], m, r, \[Theta], \[Phi]] /. {
      r -> Sqrt[x^2 + y^2 + z^2],
      \[Theta] -> ArcCos[z/Sqrt[x^2 + y^2 + z^2]],
      \[Phi] -> ArcTan[y/x]},
    {x, -win, win}, {y, -win, win}, {z, -win, win},
    Contours -> 1, PerformanceGoal -> "Quality", Mesh -> 7,
    AxesLabel -> {"x", "y", "z"}, PlotLabel -> {n, \[ScriptL], m}];

That is, instead of substituting the coordinates with With[], I substituted them with /. (ReplaceAll). Not sure why it made the difference, but thanks, quantum-mechanic! Here's the new plot http://imgur.com/WGwovHA
{"url":"http://www.reddit.com/r/Mathematica/comments/1bj6b1/contourplot3d_in_spherical_coordinates/","timestamp":"2014-04-18T18:45:03Z","content_type":null,"content_length":"71071","record_id":"<urn:uuid:fca147a7-b56f-4994-a4ab-426e9ca37db3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Ithaca College
I joined the mathematics department in 2005. I teach a wide range of undergraduate mathematics courses, focusing on introductory statistics courses. I am also very involved with Ithaca College's graduate teacher education programs: I teach a mathematics course for future elementary teachers, a graduate seminar in mathematics education, and I supervise student teachers during their professional semester.
My research interests are in mathematics education, which is a field that uses tools from psychology, philosophy, history, sociology, and literary criticism (and more!) to investigate how people learn and understand mathematics. In particular, I am interested in the way the words and symbols we use while doing mathematics shape the way we participate in mathematical activity. As Brian Rotman, a mathematician and semiotician, writes:
Mathematical signs play a creative rather than merely descriptive function in mathematical practice. Those things which are 'described'... and the means by which they are described... are mutually constitutive: each causes the presence of the other; so that mathematicians at the same time think their scribbles and scribble their thoughts.
That is, when we do mathematics, we interact with representations to think about mathematical ideas, and these ideas are what enable us to create the representations. In this way, things like algebraic notation, graphs on a computer, or math textbooks are thinking tools: they are the product of mathematical thought and, at the same time, enable us to think about mathematics. I am interested in analyzing the way we create these tools and how their aspects can help or hinder us as we try to do mathematics.
{"url":"http://faculty.ithaca.edu/aweinberg/","timestamp":"2014-04-20T03:24:33Z","content_type":null,"content_length":"16802","record_id":"<urn:uuid:09d044f6-ff75-4cb5-81d7-9907617e5e1b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplying Matrices
Multiplying Matrices: Further Down the Rabbit Hole
Thus far we've been dealing with operations that were reasonably simple; adding and subtracting matrices is limited to same-sized matrices and scalar multiplication just runs the one number through the matrix. Actual matrices can also be multiplied against each other. This is a little more complicated, but basically you just need to remember the most important thing: size matters—at least for multiplying matrices.
The sizes of the matrices involved are the most important factor here, so be careful: for matrices to be multiplied together the number of columns in the first matrix must be the same as the number of rows in the second.
Sample Problem
So, if we have matrix A and matrix B:
A = [ 2 1 ]        B = [ 1 3 ]
    [ 3 2 ]            [ 2 1 ]
                       [ 4 2 ]
both the size of the matrices and the order we multiply them in matters. AB is something we can't do, because there are two columns in A and three rows in B. Game over, man. BA we can do, because B has two columns and A has two rows. Perfect.
The next thing to remember is that the product matrix will have the same number of rows as the first matrix has and the same number of columns that the second matrix has. Looking back to our example, we cannot multiply AB, but BA we can do. Like we said, we pity the fool, just like B.A. Baracus. Here's how that looks: We know the product will have three rows like B does, and two columns like A does. We already know the first matrix here is B, and the second one is A. Let's call our product P. In that case the entries in the matrices could be labeled like so:
You're going to want to go across and down to work this out:
P[11] = (B[11])(A[11]) + (B[12])(A[21])
We multiply and add the entries of the first row of the first matrix with those of the first column of the second matrix. That's why the sizes have to match, so nothing is left over. Then, for the next entry in our product:
P[12] = (B[11])(A[12]) + (B[12])(A[22])
So far our quest for P looks like this:
Now we move on to the next part. You multiply and add the entries of the second row of the first matrix with those of the first column of the second matrix; then you do it with the second column of the second matrix. That gives you the second row of your product. This is way less confusing using pictures:
P[21] = (B[21])(A[11]) + (B[22])(A[21])
Next, we complete row two of the product:
P[22] = (B[21])(A[12]) + (B[22])(A[22])
And plugging that into the big picture:
Almost there. Just one more row, and you can probably guess how it goes:
P[31] = (B[31])(A[11]) + (B[32])(A[21])
P[32] = (B[31])(A[12]) + (B[32])(A[22])
make up the third row in our product. In place in the whole it looks like this:
Another way of looking at it is this. We know from the rules that P is going to have three rows like B and two columns like A. Try and visualize the three of them together like this:
Looking at where the intersections are, we can see which row of B and which column of A collide to come up with one of P's entries. Below, we can see that X marks the spot where row one of B and column one of A crash:
Therefore, we multiply the first number from B's first row with the first number from A's first column (1)(2) = 2, we multiply the second number from B's first row with the second number from A's first column (3)(3) = 9, and we add those together: 2 + 9 = 11.
Next we check out the collision between row one of B and column two of A:
Yup. X marks the spot, and we do the same thing. We read rows left to right, like a book. And we read columns top to bottom. We multiply the first number from B's first row with the first number from A's second column (1)(1) = 1, we multiply the second number from B's first row with the second number from A's second column (3)(2) = 6, we add those together: 1 + 6 = 7.
We know, we know. We're like a broken record, but X marks the spot, and we do the same thing. We read the row left to right, we read the column top to bottom. This time we are doing the second row in the product, so we multiply the first number from B's second row with the first number from A's first column (2)(2) = 4, we multiply the second number from B's second row with the second number from A's first column (1)(3) = 3, we add those together: 4 + 3 = 7. We're really getting somewhere now. Halfway there.
Now it looks like this, and we can see what's next:
X marks the spot, friend, and we do the same thing. The row goes left to right, the column top to bottom. This time we are still doing the second row in the product, we are just moving on to the second spot in it. We multiply the first number from B's second row with the first number from A's second column (2)(1) = 2, we multiply the second number from B's second row with the second number from A's second column (1)(2) = 2, we add those together: 2 + 2 = 4, to get this:
And we do it all over again, this time with the third row in B, once for each column in A. (4)(2) = 8, (2)(3) = 6, and 8 + 6 = 14. And finally, (4)(1) = 4, (2)(2) = 4, and 4 + 4 = 8. Woohoo! We did it! Our finished product is:
P = [ 11 7 ]
    [ 7  4 ]
    [ 14 8 ]
We kept our heads, and it only gets easier so long as we remember the most important rules: for matrices to be multiplied together the number of columns in the first matrix must be the same as the number of rows in the second; and the product matrix will have the same number of rows as the first matrix has and the same number of columns that the second matrix has.
Hey, remember when we said that even when we have two matrices of the same size, the order matters? You can multiply in either order, sure, but the answer won't be the same. Watch, with the matrices
[ 2 4 ]  and  [ 1 0 ]
[ 3 3 ]       [ 4 2 ]
We can do it in this order first: (2)(1) + (4)(4) = 2 + 16 = 18; (2)(0) + (4)(2) = 0 + 8 = 8; (3)(1) + (3)(4) = 3 + 12 = 15; (3)(0) + (3)(2) = 0 + 6 = 6. So this is our product:
[ 18 8 ]
[ 15 6 ]
And if we turn the tables? Let's arrange that so it's easier to see: (1)(2) + (0)(3) = 2 + 0 = 2; (1)(4) + (0)(3) = 4 + 0 = 4; (4)(2) + (2)(3) = 8 + 6 = 14; (4)(4) + (2)(3) = 16 + 6 = 22. This is our product:
[ 2  4  ]
[ 14 22 ]
Totally different. Weird, right?
Hold the presses, though. Remember the identity matrix and zero matrix? Theeeey're baaaack... The one time a number can be multiplied and stay the same is when it is multiplied by the number one. In matrix land, the identity matrix is the one. Identity matrices are all zeros but for a diagonal line of ones from the top left to bottom right corners. They can be any dimensions so long as they are square:
Whenever a matrix is multiplied by an identity matrix, the product is the same as the non-identity matrix:
We proceed just as we did in multiplying before.
Try and visualize the three of them together like this:
We'll call the identity matrix I, the product P and the other matrix we'll call T for Ted. We just like the name, okay? Looking at where the intersections are, we can see which row of T and which column of I collide to come up with one of P's entries. Below, we can see that X marks the spot where row one of T and column one of I crash:
So we multiply the first number from T's first row with the first number from I's first column (2)(1) = 2, we multiply the second number from T's first row with the second number from I's first column (3)(0) = 0, we multiply the third number from T's first row with the third number from I's first column (2)(0) = 0 and we add those together to get 2.
Next we check out the collision between row one of T and column two of I:
Yup. X marks the spot, and we do the same thing. So we multiply the first number from T's first row with the first number from I's second column (2)(0) = 0, we multiply the second number from T's first row with the second number from I's second column (3)(1) = 3, we multiply the third number from T's first row with the third number from I's second column (2)(0) = 0, and we add those together: 0 + 3 + 0 = 3.
We continue on in that pattern and get a product that's exactly the same as T, because multiplying a matrix by I is like multiplying a number by the number 1.
Rule: Whenever two matrices can be multiplied together and one of them is an identity matrix, the answer is always the same as the non-identity matrix.
And the zero matrix? Well, we remember that any number multiplied by zero is just zero. A zero matrix functions in matrix multiplication the very same way. These are all zero matrices:
Whenever two matrices can be multiplied together and one of them is a zero matrix, the answer is always a zero matrix that has the same number of rows as the first matrix has and the same number of columns that the second matrix has.
Take this example: We know they can be multiplied together because for matrices to be multiplied together the number of columns in the first matrix must be the same as the number of rows in the second. That's true here; there are three columns in the first matrix and three rows in the second matrix. We also know that the product matrix will also be a 3 × 3 matrix because the product matrix will have the same number of rows as the first matrix has and the same number of columns that the second matrix has. Applying the zero matrix rule we see that whenever two matrices can be multiplied together and one of them is a zero matrix, the answer is always a zero matrix that has the same number of rows as the first matrix has and the same number of columns that the second matrix has.
Sample Problem
Find the product:
[ 5 2 ]   [ 3 6 4 ]
[ 2 1 ] × [ 5 2 2 ]
[ 3 4 ]
We got this one. The relevant rules: for matrices to be multiplied together the number of columns in the first matrix must be the same as the number of rows in the second; and the product matrix will have the same number of rows as the first matrix has and the same number of columns that the second matrix has. This means that we can do this calculation and the product will be a 3 × 3.
Let's look at this the easy way:
That's right. We look at the intersection and we can see the collision.
X marks the spot where row one of the first matrix and column one of the second matrix crash:
We know those four numbers need to crunch into one, just like they crunch our car into a little rectangle at the wrecking yard after we get in a crash. We've been distracted lately by matrices, right? We look at the first matrix's first row left to right. We read the second matrix's first column top to bottom. We multiply the first number from the first matrix's first row with the first number from the second matrix's first column (5)(3) = 15, we multiply the second number from the first matrix's first row with the second number from the second matrix's first column (2)(5) = 10, and we add those together: 15 + 10 = 25.
Next we check out the next collision like the good rubberneckers that we are:
X marks the spot, and we do the same thing. We read rows left to right and we read columns top to bottom. We multiply the first number from the first matrix's first row with the first number from the second matrix's second column (5)(6) = 30, we multiply the second number from the first matrix's first row with the second number from the second matrix's second column (2)(2) = 4, we add those together: 30 + 4 = 34.
Now we just do the last one in that top row of the product. X marks our target spot and we know that we multiply the first number from the first matrix's first row with the first number from the second matrix's third column (5)(4) = 20, we multiply the second number from the first matrix's first row with the second number from the second matrix's third column (2)(2) = 4, we add those together: 20 + 4 = 24.
We plug that in and move on to the second row of our product:
We press on through the rest of the product—X marks the spot, and we do the same thing. This time we are doing the second row in the product, so we multiply the first number from the first matrix's second row with the first number from the second matrix's first column (2)(3) = 6, we multiply the second number from the first matrix's second row with the second number from the second matrix's first column (1)(5) = 5, we add those together: 6 + 5 = 11.
Now it looks like this, and we can see that Alice's Brady Bunch spot is next:
To get our Alice X we multiply the first number from the first matrix's second row with the first number from the second matrix's second column (2)(6) = 12, we multiply the second number from the first matrix's second row with the second number from the second matrix's second column (1)(2) = 2, we add those together: 12 + 2 = 14.
Now it looks like this, and what's next—is that Jan Brady? Hmmm:
That's right, Marcia. We multiply the first number from the first matrix's second row with the first number from the second matrix's third column (2)(4) = 8, we multiply the second number from the first matrix's second row with the second number from the second matrix's third column (1)(2) = 2, we add those together: 8 + 2 = 10.
This is going swimmingly:
We multiply the first number from the first matrix's third row with the first number from the second matrix's first column (3)(3) = 9, we multiply the second number from the first matrix's third row with the second number from the second matrix's first column (4)(5) = 20, we add those together: 9 + 20 = 29.
Now what?
We multiply the first number from the first matrix's third row with the first number from the second matrix's second column (3)(6) = 18, we multiply the second number from the first matrix's third row with the second number from the second matrix's second column (4)(2) = 8, we add those together: 18 + 8 = 26.
Yes, last one. We multiply the first number from the first matrix's third row with the first number from the second matrix's third column (3)(4) = 12, we multiply the second number from the first matrix's third row with the second number from the second matrix's third column (4)(2) = 8, we add those together: 12 + 8 = 20.
Our product:
[ 25 34 24 ]
[ 11 14 10 ]
[ 29 26 20 ]
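The row-times-column procedure walked through above translates directly into code. Here is a short Python sketch (my addition, not part of the original lesson) that implements the rule and checks it against the first worked example; the size checks mirror the rules stated earlier.

def matmul(B, A):
    # Multiply matrices given as lists of rows, using the
    # row-times-column rule described above.
    rows, inner, cols = len(B), len(A), len(A[0])
    assert len(B[0]) == inner, "columns of first must equal rows of second"
    # P has as many rows as B and as many columns as A
    P = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            P[i][j] = sum(B[i][k] * A[k][j] for k in range(inner))
    return P

B = [[1, 3], [2, 1], [4, 2]]   # 3 x 2
A = [[2, 1], [3, 2]]           # 2 x 2
print(matmul(B, A))             # [[11, 7], [7, 4], [14, 8]]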
{"url":"http://www.shmoop.com/matrices/multiplying-matrix.html","timestamp":"2014-04-18T13:37:41Z","content_type":null,"content_length":"53718","record_id":"<urn:uuid:b021bbe0-b50b-498d-b282-4c9d3206abed>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
finding the sum of geometric progressions formula
Best Results From Wikipedia, Yahoo Answers, Youtube
From Wikipedia
Geometric progression
In mathematics, a geometric progression, also known as a geometric sequence, is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed non-zero number called the common ratio. For example, the sequence 2, 6, 18, 54, ... is a geometric progression with common ratio 3. Similarly 10, 5, 2.5, 1.25, ... is a geometric sequence with common ratio 1/2. The sum of the terms of a geometric progression is known as a geometric series.
Thus, the general form of a geometric sequence is
a, ar, ar^2, ar^3, ar^4, \ldots
and that of a geometric series is
a + ar + ar^2 + ar^3 + ar^4 + \cdots
where r ≠ 0 is the common ratio and a is a scale factor, equal to the sequence's start value.
Elementary properties
The n-th term of a geometric sequence with initial value a and common ratio r is given by
a_n = a r^{n-1}.
Such a geometric sequence also follows the recursive relation
a_n = r a_{n-1} for every integer n ≥ 1.
Generally, to check whether a given sequence is geometric, one simply checks whether successive entries in the sequence all have the same ratio.
The common ratio of a geometric series may be negative, resulting in an alternating sequence, with numbers switching from positive to negative and back. For instance 1, −3, 9, −27, 81, −243, … is a geometric sequence with common ratio −3.
The behaviour of a geometric sequence depends on the value of the common ratio. If the common ratio is:
• Positive, the terms will all be the same sign as the initial term.
• Negative, the terms will alternate between positive and negative.
• Greater than 1, there will be exponential growth towards positive infinity.
• 1, the progression is a constant sequence.
• Between −1 and 1 but not zero, there will be exponential decay towards zero.
• −1, the progression is an alternating sequence (see alternating series).
• Less than −1, for the absolute values there is exponential growth towards positive and negative infinity (due to the alternating sign).
Geometric sequences (with common ratio not equal to −1, 1 or 0) show exponential growth or exponential decay, as opposed to the linear growth (or decline) of an arithmetic progression such as 4, 15, 26, 37, 48, … (with common difference 11). This result was taken by T.R. Malthus as the mathematical foundation of his Principle of Population. Note that the two kinds of progression are related: exponentiating each term of an arithmetic progression yields a geometric progression, while taking the logarithm of each term in a geometric progression with a positive common ratio yields an arithmetic progression.
Geometric series
A geometric series is the sum of the numbers in a geometric progression:
\sum_{k=0}^{n} ar^k = ar^0 + ar^1 + ar^2 + ar^3 + \cdots + ar^n.
We can find a simpler formula for this sum by multiplying both sides of the above equation by 1 − r, and we'll see that
(1-r) \sum_{k=0}^{n} ar^k = (ar^0 + ar^1 + \cdots + ar^n) - (ar^1 + ar^2 + \cdots + ar^{n+1}) = a - ar^{n+1},
since all the other terms cancel. Rearranging (for r ≠ 1) gives the convenient formula for a geometric series:
\sum_{k=0}^{n} ar^k = \frac{a(1-r^{n+1})}{1-r}.
If one were to begin the sum not from 0, but from a higher term, say m, then
\sum_{k=m}^n ar^k = \frac{a(r^m - r^{n+1})}{1-r}.
Differentiating this formula with respect to r allows us to arrive at formulae for sums of the form \sum_{k=0}^n k^s r^k. For example:
\frac{d}{dr}\sum_{k=0}^n r^k = \sum_{k=1}^n k r^{k-1} = \frac{1 - (n+1)r^n + n r^{n+1}}{(1-r)^2}.
For a geometric series containing only even powers of r, multiply by 1 − r^2:
(1-r^2) \sum_{k=0}^{n} ar^{2k} = a - ar^{2n+2},
\sum_{k=0}^{n} ar^{2k} = \frac{a(1-r^{2n+2})}{1-r^2}.
For a series with only odd powers of r,
(1-r^2) \sum_{k=0}^{n} ar^{2k+1} = ar - ar^{2n+3},
\sum_{k=0}^{n} ar^{2k+1} = \frac{ar(1-r^{2n+2})}{1-r^2}.
Infinite geometric series
An infinite geometric series is an infinite series whose successive terms have a common ratio. Such a series converges if and only if the absolute value of the common ratio is less than one (|r| < 1). Its value can then be computed from the finite sum formulae:
\sum_{k=0}^\infty ar^k = \lim_{n\to\infty}\sum_{k=0}^{n} ar^k = \lim_{n\to\infty}\frac{a(1-r^{n+1})}{1-r} = \frac{a}{1-r},
since r^{n+1} \to 0 as n \to \infty when |r| < 1.
For a series containing only even powers of r,
\sum_{k=0}^\infty ar^{2k} = \frac{a}{1-r^2},
and for odd powers only,
\sum_{k=0}^\infty ar^{2k+1} = \frac{ar}{1-r^2}.
In cases where the sum does not start at k = 0,
\sum_{k=m}^\infty ar^k = \frac{ar^m}{1-r}.
The formulae given above are valid only for |r| < 1. The latter formula is valid in every Banach algebra, as long as the norm of r is less than one, and also in the field of p-adic numbers if |r|_p < 1. As in the case for a finite sum, we can differentiate to calculate formulae for related sums.
Arithmetic progression
In mathematics, an arithmetic progression (AP) or arithmetic sequence is a sequence of numbers such that the difference of any two successive members of the sequence is a constant. For instance, the sequence 3, 5, 7, 9, 11, 13, … is an arithmetic progression with common difference 2.
If the initial term of an arithmetic progression is a_1 and the common difference of successive members is d, then the nth term of the sequence is given by:
a_n = a_1 + (n - 1)d,
and in general
a_n = a_m + (n - m)d.
A finite portion of an arithmetic progression is called a finite arithmetic progression and sometimes just called an arithmetic progression.
The behavior of the arithmetic progression depends on the common difference d. If the common difference is:
• Positive, the members (terms) will grow towards positive infinity.
• Negative, the members (terms) will grow towards negative infinity.
The sum of the members of a finite arithmetic progression is called an arithmetic series. Expressing the arithmetic series in two different ways:
S_n = a_1 + (a_1 + d) + (a_1 + 2d) + \cdots + (a_1 + (n-1)d)
S_n = a_n + (a_n - d) + (a_n - 2d) + \cdots + (a_n - (n-1)d)
Adding both sides of the two equations, all terms involving d cancel:
2S_n = n(a_1 + a_n).
Dividing both sides by 2 produces a common form of the equation:
S_n = \frac{n}{2}(a_1 + a_n).
An alternate form results from re-inserting the substitution a_n = a_1 + (n-1)d:
S_n = \frac{n}{2}[2a_1 + (n-1)d].
In 499 CE Aryabhata, a prominent mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, gave this method in the Aryabhatiya (section 2.18).
So, for example, the sum of the terms of the arithmetic progression given by a_n = 3 + (n-1)(5) up to the 50th term is
S_{50} = \frac{50}{2}[2(3) + (49)(5)] = 6,275.
The product of the members of a finite arithmetic progression with an initial element a_1, common difference d, and n elements in total is determined in a closed expression
a_1 a_2 \cdots a_n = d^n \left(\frac{a_1}{d}\right)^{\overline{n}} = d^n \frac{\Gamma(a_1/d + n)}{\Gamma(a_1/d)},
where x^{\overline{n}} denotes the rising factorial and \Gamma denotes the Gamma function. (Note however that the formula is not valid when a_1/d is a negative integer or zero.)
This is a generalization from the fact that the product of the progression 1 \times 2 \times \cdots \times n is given by the factorial n!, and that the product
m \times (m+1) \times (m+2) \times \cdots \times (n-2) \times (n-1) \times n
for positive integers m and n is given by
\frac{n!}{(m-1)!}.
Taking the example from above, the product of the terms of the arithmetic progression given by a_n = 3 + (n-1)(5) up to the 50th term is
P_{50} = 5^{50} \cdot \frac{\Gamma(3/5 + 50)}{\Gamma(3/5)} \approx 3.78438 \times 10^{98}.
Consider an AP. The product of its first three terms is
a(a+d)(a+2d) = (a^2 + ad)(a+2d) = a^3 + 3a^2 d + 2ad^2.
(For n = 3 this agrees with the general Gamma-function formula above; the product of n terms of an AP does not reduce to a simple three-term expression in general.)
From Yahoo Answers
Question: I have no idea how to solve this. I don't want an answer to the problem, just instructions or a formula for how to solve it myself. I have to show my work, so I don't want an answer. Find the sum of the first four terms of the geometric sequence with a = -3 and r = 3.
Answers: Sn = a1(1 - r^n)/(1 - r), where Sn is the sum of the first n terms in a sequence, a1 is the first term in the sequence, r is the common ratio in the geometric sequence, and n is the number of terms you are adding up. So with n = 4:
S4 = -3*(1 - 3^4)/(1 - 3)
S4 = -3*(-80)/(-2)
S4 = -120
Question: What is the equation you use to find the sum of a geometric series?
Answers: Write the sum twice, the second time multiplied by r:
Sn = a + ar + ar^2 + ... + ar^(n-1)
r*Sn = ar + ar^2 + ... + ar^n
Subtracting gives (1 - r)Sn = a(1 - r^n), so
Sn = a(1 - r^n)/(1 - r).
Answers: Sn = n/2[2a + (n - 1)d], where Sn is the sum of n terms, n is the total number of terms, a is the 1st term, and d is the common difference. (Note: this formula is for an arithmetic, not geometric, series.)
Question: What is the least possible integer that can be the sum of an infinite geometric progression whose first term is 10? Please explain the answer too; I do not really get the whole concept of geometric progression. I already tried wiki but I couldn't comprehend it. Also this was an extra credit question; I'm just curious and am only in geometry honors right now as a freshman, so please don't call me stupid... also I know the question has incorrect grammar; I couldn't fit it all if I used correct spelling, etc.
Answers: In terms of the common ratio r between the terms, the sum is S = 10/(1 - r), which converges only for |r| < 1. As r runs over (-1, 1), S ranges over the interval (5, ∞): as r approaches -1 the sum approaches 5 but never reaches it. The least possible integer value is therefore 6, attained at r = -2/3, since 10/(1 + 2/3) = 6. I hope this helps!
From Youtube
Sum of an Infinite Geometric Sequence: Learn to find the sum of an infinite geometric sequence. Learn more about online education at www.studyatapu.com
Int Algebra: Find the Sum of a Geometric Sequence: www.mindbites.com for full video
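As a quick numeric sanity check of the closed forms quoted above, the following Python snippet (my addition, not from the sources quoted) compares the formulas against direct summation:

# Finite geometric sum: sum_{k=0}^{n} a*r^k  vs  a*(1 - r^(n+1))/(1 - r)
a, r, n = 3.0, 0.5, 10
direct = sum(a * r**k for k in range(n + 1))
formula = a * (1 - r**(n + 1)) / (1 - r)
print(abs(direct - formula) < 1e-12)                   # True

# Infinite sum (|r| < 1): partial sums approach a / (1 - r)
partial = sum(a * r**k for k in range(200))
print(abs(partial - a / (1 - r)) < 1e-12)              # True

# Arithmetic series: S_n = n/2 * (2*a1 + (n-1)*d), e.g. the 6,275 example
a1, d, N = 3, 5, 50
print(sum(a1 + k * d for k in range(N)) == N * (2 * a1 + (N - 1) * d) // 2)  # True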
{"url":"http://www.edurite.com/kbase/finding-the-sum-of-geometric-progressions-formula","timestamp":"2014-04-17T04:07:15Z","content_type":null,"content_length":"81311","record_id":"<urn:uuid:8965b258-4d85-43b1-b7c7-c7edefda0f21>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof Synthesis and Reflection for Linear Arithmetic

Abstract. This paper presents a formalization of a library for automata on bit strings in the theorem prover Isabelle/HOL. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of the well-known connection between logic and automata theory. (Cited by 8, 1 self)

- Sixth International Workshop on Automated Verification of Critical Systems (AVOCS '06) – Preliminary Proceedings, 2006
Proof reconstruction is a technique that combines an interactive theorem prover and an automatic one in a sound way, so that users benefit from the expressiveness of the first tool and the automation of the latter. We present an implementation of proof reconstruction for first-order logic and set-theoretical constructions between the interactive theorem prover Isabelle and the automatic SMT prover haRVey. (Cited by 2, 0 self)

Abstract. This paper formalizes and verifies quantifier elimination procedures for dense linear orders and for real and integer linear arithmetic in the theorem prover ... (Cited by 1, 1 self)

We use higher-order logic to verify a quantifier elimination procedure for linear arithmetic over ordered fields, where the coefficients of variables are multivariate polynomials over another set of variables, we call parameters. The procedure generalizes Ferrante and Rackoff's algorithm for the non-parametric case. The formalization is based on axiomatic type classes and automatically carries over to e.g. the rational, real and non-standard real numbers. It is executable, can be applied to HOL formulae by reflection and performs well on practical examples. (Cited by 1, 0 self)

This talk presents reflected quantifier elimination procedures for both integer and real linear arithmetic.
Reflection means that the algorithms are expressed as recursive functions on recursive data types inside some logic (in our case HOL), are verified in that logic, and can then be applied to the logic itself. After a brief overview of reflection we will discuss a number of quantifier elimination algorithms for the following theories:
– Dense linear orders without endpoints. We formalize the standard DNF-based algorithm from the literature.
– Linear real arithmetic. We present both a DNF-based algorithm extending the case of dense linear orders and an optimized version of the algorithm by Ferrante and Rackoff [3].
– Presburger arithmetic. Again we show both a naive DNF-based algorithm and the DNF-avoiding one by Cooper [2].
We concentrate on the algorithms and their formulation in Isabelle/HOL, using the concept of locales to allow modular definitions and verification. Some of the details can be found in joint work with Amine Chaib [1].
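As a concrete illustration of the simplest case these abstracts mention, dense linear orders without endpoints, the core quantifier elimination step rests on the equivalence: there exists x with l < x and x < u if and only if l < u. The following Python sketch is my own toy rendering of that step (the papers' actual formalizations are in Isabelle/HOL, not Python):

def qe_dlo(lowers, uppers):
    # Eliminate "exists x" from: all(l < x for l in lowers) and
    # all(x < u for u in uppers), over a dense linear order without
    # endpoints. Returns the equivalent quantifier-free conjunction
    # as a list of (l, u) pairs, each meaning l < u.
    # If either bound list is empty, density/unboundedness makes the
    # formula true: the result is the empty conjunction.
    return [(l, u) for l in lowers for u in uppers]

# Example: exists x. a < x and b < x and x < c
# reduces to: a < c and b < c
print(qe_dlo(["a", "b"], ["c"]))   # [('a', 'c'), ('b', 'c')]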
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4029967","timestamp":"2014-04-19T02:29:46Z","content_type":null,"content_length":"22080","record_id":"<urn:uuid:83de922d-5764-42d9-8a06-e201344c304f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Indices of beta Diversity
betadiver {vegan} R Documentation
Indices of beta Diversity
Description
The function estimates any of the 24 indices of beta diversity reviewed by Koleff et al. (2003). Alternatively, it finds the co-occurrence frequencies for triangular plots (Koleff et al. 2003).
Usage
betadiver(x, index = NA, order = FALSE, help = FALSE, ...)
## S3 method for class 'betadiver':
plot(x, ...)
## S3 method for class 'betadiver':
scores(x, triangular = TRUE, ...)
Arguments
x: Community data matrix, or the betadiver result for plot and scores functions.
index: The index of beta diversity as defined in Koleff et al. (2003), Table 1. You can use either the subscript of β or the number of the index. See argument help below.
order: Order sites by increasing number of species. This will influence the configuration in the triangular plot and non-symmetric indices.
help: Show the numbers, subscript names and the defining equations of the indices and exit.
triangular: Return scores suitable for triangular plotting of proportions. If FALSE, returns a 3-column matrix of raw counts.
...: Other arguments to functions.
Details
The most commonly used index of beta diversity is β_w = S/α - 1, where S is the total number of species, and α is the average number of species per site (Whittaker 1960). A drawback of this model is that S increases with sample size, but the expectation of α remains constant, and so the beta diversity increases with sample size. A solution to this problem is to study the beta diversity of pairs of sites. If we denote the number of species shared between two sites as a and the numbers of unique species (not shared) as b and c, then S = a + b + c and α = (2a + b + c)/2 so that β_w = (b+c)/(2a + b + c). This is the Sørensen dissimilarity as defined in vegan function vegdist with argument binary = TRUE. Many other indices are dissimilarity indices as well.
Function betadiver finds all indices reviewed by Koleff et al. (2003). All these indices could be found with function designdist, which uses different notation, but the current function provides a conventional shortcut. The indices are directly taken from Table 1 of Koleff et al. (2003), and they can be selected either by the index number or the subscript name used by Koleff et al. The numbers, names and defining equations can be seen using betadiver(help = TRUE). In all cases where there are two alternative forms, the one with the term -1 is used. There are several duplicate indices, and the number of distinct alternatives is much lower than the 24 formally provided. The formulations used in functions differ occasionally from those in Koleff et al. (2003), but they are still mathematically equivalent.
With index = NA, no index is calculated, but instead an object of class betadiver is returned. This is a list of elements a, b and c. Function plot can be used to display the proportions of these elements in a triangular plot as suggested by Koleff et al. (2003), and scores extracts the triangular coordinates or the raw scores. Function plot returns invisibly the triangular coordinates as an "ordiplot" object.
Value
With index = NA, the function returns an object of class "betadiver" with elements a, b, and c. If index is specified, the function returns a "dist" object which can be used in any function analysing dissimilarities.
For beta diversity, particularly useful functions are betadisper to study the betadiversity in groups, adonis for any model, and mantel to compare beta diversities to other dissimilarities or distances (including geographical distances). Although betadiver returns a "dist" object, some indices are similarities and cannot be used as such in place of dissimilarities, but that is a severe user error. Functions 10 ("j") and 11 ("sor") are two such similarity indices.
Warning
Some indices return similarities instead of dissimilarities.
Author(s)
Jari Oksanen
References
Koleff, P., Gaston, K.J. and Lennon, J.J. (2003) Measuring beta diversity for presence-absence data. Journal of Animal Ecology 72, 367–382.
Whittaker, R.H. (1960) Vegetation of Siskiyou mountains, Oregon and California. Ecological Monographs 30, 279–338.
See Also
designdist for an alternative to implement all these functions, vegdist for some canned alternatives, and betadisper, adonis, mantel for analysing beta diversity objects.
Examples
## Raw data and plotting
data(sipoo)
m <- betadiver(sipoo)
plot(m)
## The indices
betadiver(help = TRUE)
## The basic Whittaker index
d <- betadiver(sipoo, "w")
## This should be equal to Sorensen index (binary Bray-Curtis in vegan)
range(d - vegdist(sipoo, binary = TRUE))
version 1.16-32
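To make the pairwise a/b/c bookkeeping in the Details section concrete, here is a small Python sketch of the Whittaker index for two sites given as presence/absence sets. This is my own illustration of the formula β_w = (b+c)/(2a+b+c), not the vegan implementation, and the species names are made up.

def whittaker_beta(site1, site2):
    # a = shared species, b and c = species unique to each site,
    # beta_w = (b + c) / (2a + b + c), as in the Details above.
    a = len(site1 & site2)
    b = len(site1 - site2)
    c = len(site2 - site1)
    return (b + c) / (2 * a + b + c)

s1 = {"sp1", "sp2", "sp3"}
s2 = {"sp2", "sp3", "sp4", "sp5"}
print(whittaker_beta(s1, s2))   # 3/7, about 0.4286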
{"url":"http://cc.oulu.fi/~jarioksa/softhelp/vegan/html/betadiver.html","timestamp":"2014-04-18T21:26:58Z","content_type":null,"content_length":"7190","record_id":"<urn:uuid:8d46318f-e55d-44d4-8978-f7d57ef99fcc>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Tinley Park Algebra Tutors ...I've worked for several big box test prep companies, and I have a 99% score. In the past 5 years, I've written proprietary guides on ACT strategy for local companies. These guides have been used to improve scores all over the midwest. 24 Subjects: including algebra 2, algebra 1, calculus, GRE ...I have a knowledge of tips and tricks to simplify elementary algebra concepts. As an undergraduate math major, I volunteered with a program that tutored local middle school students in math. I worked in small groups or individually with mostly 7th and 8th graders to help them keep up with the pace of the class. 5 Subjects: including algebra 1, statistics, prealgebra, probability ...I am very good at algebra and can generally be helpful to those students who are highly motivated to improve their skills in this area. I have taken many math courses in my academic career, and I have helped many students get through algebra related topics. However, I am at my best with highly motivated students who seriously want to learn the subject matter. 13 Subjects: including algebra 1, algebra 2, statistics, geometry ...I have also tutored for the Dupage Literacy Program. I am a very patient person and am passionate about teaching. I will go the extra mile to make sure the student thoroughly understands the 23 Subjects: including algebra 2, algebra 1, chemistry, geometry My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home. My passion for education comes through in my teaching methods, as I believe that all students have the a... 34 Subjects: including algebra 1, algebra 2, reading, physics
{"url":"http://www.algebrahelp.com/Tinley_Park_algebra_tutors.jsp","timestamp":"2014-04-21T12:42:30Z","content_type":null,"content_length":"24920","record_id":"<urn:uuid:c30c8b86-58fd-45b1-9bfc-10f67237985c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
ladder problem
October 11th 2012, 05:32 PM
ladder problem
There are 2 ladders, one 25 ft long and the other 20 ft long. They are crossing each other in an alley between two buildings. Find the distance between the two buildings. The point where the ladders cross is 10 ft above the ground.
Attachment 25177
Please help, thanks in advance
October 11th 2012, 06:04 PM
Re: ladder problem
I would let the width of the alley be x, where $0<x$. Drop a vertical line down from the point where the ladders meet to the ground. Let the horizontal distance from the left wall to the vertical line be $x_1$ and the horizontal distance from the vertical line to the right wall be $x_2$, hence:
$x_1 + x_2 = x$
Now, by similarity (and the Pythagorean theorem), we find:
$\frac{x_1}{x} = \frac{10}{\sqrt{625 - x^2}}$ and $\frac{x_2}{x} = \frac{10}{\sqrt{400 - x^2}}$
Adding these and using $x_1 + x_2 = x$ gives:
$\frac{10}{\sqrt{625 - x^2}} + \frac{10}{\sqrt{400 - x^2}} = 1$
Now solve for x.
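The final equation has no tidy closed form, so here is a numeric check in Python (my addition, not part of the original thread), solving 10/sqrt(625 - x^2) + 10/sqrt(400 - x^2) = 1 by bisection:

import math

def f(x):
    # crossing-height condition for the two ladders
    return 10 / math.sqrt(625 - x * x) + 10 / math.sqrt(400 - x * x) - 1

lo, hi = 1.0, 19.9   # the width must be positive and below 20 (shorter ladder)
for _ in range(60):  # bisection: f(lo) < 0 < f(hi) on this bracket
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(round((lo + hi) / 2, 4))   # alley width, about 9.47 ft

As a sanity check, at x ≈ 9.47 the two wall heights are about 23.14 ft and 17.62 ft, and their harmonic combination h1*h2/(h1+h2) comes out to 10 ft, matching the given crossing height.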
{"url":"http://mathhelpforum.com/geometry/205148-ladder-problem-print.html","timestamp":"2014-04-18T11:21:48Z","content_type":null,"content_length":"5260","record_id":"<urn:uuid:a3880d4b-209a-489d-a9a7-38401d6fa780>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
San Rafael, CA
Find a San Rafael, CA ACT Tutor
...One hour minimum for each session. I am an expert on math standardized testing, as stated in my reviews from previous students. I have worked on thousands of these types of problems and can show your student how to do every single one, which will dramatically increase their test scores! I can he...
59 Subjects: including ACT Math, chemistry, reading, physics
...You will too. I promise that you will learn what you want to know. My style is personable, engaging, and easygoing.
14 Subjects: including ACT Math, statistics, geometry, GRE
...I worked for many years professionally with statistics and next to precalculus and calculus it is my most tutored topic. Statistics doesn't require advanced math classes but it requires a good understanding of how to translate word problems into a mathematical form and vice versa. Most classes cov...
41 Subjects: including ACT Math, calculus, geometry, statistics
...I've mentored young people in the Salt Lake Peer Court system as they transformed various life difficulties into academic accomplishments. I have a passion about learning and now I want to offer it to you, along with the skills you need to succeed. As a biology major at USF, I have recent and con...
32 Subjects: including ACT Math, reading, chemistry, writing
...Just as there are hundreds of ways to prove the Pythagorean Theorem, there's bound to be an explanation for you. Math isn't an abstract subject for geniuses, it's a practical tool set that just requires a lot of practice and hard work. My goal for you isn't to just memorize formulas and regurgitate facts, but to fall in love with the subject the same way I did.
19 Subjects: including ACT Math, physics, calculus, writing
{"url":"http://www.purplemath.com/San_Rafael_CA_ACT_tutors.php","timestamp":"2014-04-16T13:20:23Z","content_type":null,"content_length":"23733","record_id":"<urn:uuid:40106ef6-f909-4f26-a807-8e8cdec9ffab>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
A description of the different research subjects and publications can be found under the heading, and slides from some presentations I have given are below. I have used the word processor to create my Ph.D. dissertation, papers and presentations, and highly recommend it to everyone! Here is a chronological list of my publications along with reprints and recent preprints: 1. "Dynamic Density Functional Theory with hydrodynamic interactions and fluctuations", A. Donev and E. Vanden-Eijnden, submitted to J. Chem. Phys., 2014 [ArXiv:1403.3959]. 2. "Brownian Dynamics without Green's Functions", S. Delong, F. Balboa Usabiaga, R. Delgado-Buscalioni, B. E. Griffith and A. Donev, J. Chem. Phys., 140, 134110, 2014 [ArXiv:1401.4198]. 3. "A reversible mesoscopic model of diffusion in liquids: from giant fluctuations to Fick's law", A. Donev, T. G. Fai, and E. Vanden-Eijnden, J. Stat. Mech., P04004, 2014 [ArXiv:1312.1894]. A short summary is available as "Reversible Diffusion by Thermal Fluctuations", A. Donev, T. G. Fai, and E. Vanden-Eijnden, 2013 [ArXiv:1306.3158]. 4. "Metropolis Integration Schemes for Self-Adjoint Diffusions", N. Bou-Rabee and A. Donev and E. Vanden-Eijnden, to appear in SIAM MMS, 2014 [ArXiv:1309.5037] 5. "Efficient Variable-Coefficient Finite-Volume Stokes Solvers", M. Cai and A. J. Nonaka and J. B. Bell and B. E. Griffith and A. Donev, submitted to CiCP, 2013 [ArXiv:1308.4605]. 6. "Low Mach Number Fluctuating Hydrodynamics of Diffusively Mixing Fluids", A. Donev and A. J. Nonaka and Y. Sun and T. G. Fai and A. L. Garcia and J. B. Bell, to appear in CAMCOS, 2014 [ 7. "Fluctuating hydrodynamics of multispecies nonreactive mixtures" by K. Balakrishnan and A. L. Garcia and A. Donev and J. B. Bell, Phys. Rev. E 89:013017, 2014 [ArXiv:1310.0494]. 8. "A Minimally-Resolved Immersed Boundary Model for Reaction-Diffusion Problems", A. Pal Singh Bhalla, B. E. Griffith, N. A. Patankar and A. Donev, J. Chem. Phys., 139:214112, 2013 [ArXiv:1306.3159 9. "The Stokes-Einstein Relation at Moderate Schmidt Number", F. Balboa Usabiaga and X. Xie and R. Delgado-Buscalioni and A. Donev, J. Chem. Phys., 139:214113, 2013 [ArXiv:1309.7361] 10. "Inertial Coupling Method for particles in an incompressible fluctuating fluid", F. Balboa Usabiaga and R. Delgado-Buscalioni and B. E. Griffith and A. Donev, Computer Methods in Applied Mechanics and Engineering, 269:139-172, 2014 [ArXiv:1212.6427], code available at https://code.google.com/p/fluam. 11. "Temporal Integrators for Fluctuating Hydrodynamics", S. Delong and B. E. Griffith and E. Vanden-Eijnden and A. Donev, Phys. Rev. E, 87(3):033302, 2013 [arXiv:1212.1033]. 12. "Staggered Schemes for Fluctuating Hydrodynamics", F. Balboa and J. Bell and R. Delgado-Buscalioni and A. Donev and T. G. Fai and B. Griffith and C. Peskin, SIAM J. Multiscale Modeling and Simulation, 10(4):1369-1408, 2012 [arXiv:1108.5188]. 13. "Diffusive Transport Enhanced by Thermal Velocity Fluctuations", A. Donev, A. de la Fuente, J. B. Bell, and A. L. Garcia, Phys. Rev. Lett., 106:204501, 2011 [ArXiv:1103.5532]. See longer paper below for further details. 14. "Enhancement of Diffusive Transport by Nonequilibrium Thermal Fluctuations", A. Donev, A. de la Fuente, J. B. Bell, and A. L. Garcia, J. Stat. Mech., P06014, 2011, [arXiv:1103.5244]. 15. "On the Accuracy of Explicit Finite-Volume Schemes for Fluctuating Hydrodynamics", by A. Donev, E. Vanden-Eijnden, A. L. Garcia, and J. B. Bell, Communications in Applied Mathematics and Computational Science, 5(2):149-197, 2010 [arXiv:0906.2425]. 
16. "A hybrid particle-continuum method for hydrodynamics of complex fluids", by A. Donev and J. B. Bell and A. L. Garcia and B. J. Alder, SIAM J. Multiscale Modeling and Simulation 8(3):871-911, 2010 [arXiv:0910.3968]. 17. "A First-Passage Kinetic Monte Carlo Algorithm for Complex Diffusion-Reaction Systems", by A. Donev, V. V. Bulatov, T. Oppelstrup, G. H. Gilmer, B. Sadigh and M. H. Kalos, J. Comp. Phys., 229 (9):3214-3236, 2010 [arXiv:0905.3576]. 18. "First-passage Kinetic Monte Carlo method", by T. Oppelstrup, V. V. Bulatov, A. Donev, M. H. Kalos, G. H. Gilmer and B. Sadigh, Phys. Rev. E, 80(6):066701, 2009 [arXiv:0905.3575]. 19. "Tethered DNA Dynamics in Shear Flow", by Y. Zhang, A. Donev, T. Weisgraber, B. J. Alder, M. D. Graham and J. J. de Pablo, J. Chem. Phys., 130:234902, 2009. 20. "A Thermodynamically-Consistent Non-Ideal Stochastic Hard Sphere Fluid", by A. Donev and A. L. Garcia and B. J. Alder, J. Stat. Mech., P11008, 2009 [arXiv:0908.0510]. 21. "Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids", by A. Donev, A. L. Garcia and B. J. Alder, Phys. Rev. Lett., 101:075902, 2008 [arXiv:0803.0359]. 22. "Stochastic Event-Driven Molecular Dynamics", by A. Donev, A. L. Garcia and B. J. Alder, J. Comp. Phys., 227(4):2644-2665, 2008, [arXiv:0708.0251]. 23. "Asynchronous Event-Driven Particle Algorithms", by A. Donev, SIMULATION: Transactions of the Society for Modeling and Simulation International, 85(4):229-242, 2009. 24. "Configurational Entropy of Binary Hard-Disk Glasses: Nonexistence of an Ideal Glass Transition", by A. Donev, F. H. Stillinger and S. Torquato, J. Chem. Phys., 127:124509, 2007. 25. "Underconstrained Jammed Packings of Hard Ellipsoids", by A. Donev, R. Connelly, F. H. Stillinger and S. Torquato, Phys. Rev. E, 75:051304, 2007 [cond-mat/0608334]. 26. "Calculating the Free Energy of Nearly Jammed Hard-Particle Packings Using Molecular Dynamics", by A. Donev, F. H. Stillinger, and S. Torquato, J. Comp. Phys., 225:509–527, 2007. 27. "Do Binary Hard Disks Exhibit an Ideal Glass Transition?", by A. Donev, F. H. Stillinger, and S. Torquato, Phys. Rev. Lett., 96:225502, 2006, [cond-mat/0603183]. 28. "Packing Hyperspheres in High-Dimensional Euclidean Spaces", by M. Skoge, A. Donev, F. H. Stillinger and S. Torquato, Phys. Rev. E, 74:041127, 2006 [ibid 75:029901, 2007], [cond-mat/0608362]. 29. "Some Observations on the Random Packing of Hard Ellipsoids", by P. M. Chaikin, A. Donev, W. Man, F. H. Stillinger, and S. Torquato, Ind. Eng. Chem. Res., 45(21):6960-6965, 2006. 30. "Tetratic Order in the Phase Behavior of a Hard-Rectangle System", by A. Donev, J. Burton, F. H. Stillinger, and S. Torquato, Phys. Rev. B, Vol. 73:054109, 2006, [cond-mat/0508550]. 31. "Unexpected Density Fluctuations in Jammed Disordered Sphere Packings", by A. Donev, F. H. Stillinger, and S. Torquato, Phys. Rev. Lett., 95:090604, 2005, [cond-mat/0506406]. 32. "Manufacturable extremal low-dielectric, high-stiffness porous materials", S. Torquato, A. Donev, A. G. Evans, and C. J. Brinker, J. Appl. Phys., 97:124103, 2005. 33. "Experiments on Random Packings of Ellipsoids", W. Man, A. Donev , F. H. Stillinger, M. T. Sullivan, W. B. Russel, D. Heeger , S. Inati, S. Torquato and P. M. Chaikin, Phys. Rev. Lett., 94:198001, 2005. 34. "Pair Correlation Function Characteristics of Nearly Jammed Disordered and Ordered Hard-Sphere Packings", by A. Donev, F. H. Stillinger, and S. Torquato, Phys. Rev. E, 71:011105, 2005, [cond-mat/ 35. 
"Neighbor List Collision-Driven Molecular Dynamics Simulation for Nonspherical Particles. I. Algorithmic Details II. Applications to Ellipses and Ellipsoids", by A. Donev, F. H. Stillinger, and S. Torquato, J. Comp. Phys, 202(2):737-764 (part I) and 202(2):765-793 (part II), 2005, [physics/0110034]. 36. "Comment on "Jamming at zero temperature and zero applied stress: The epitome of disorder", by A. Donev, S. Torquato, F. H. Stillinger, and R. Connelly, Phys. Rev. E, 70:043301, 2004. 37. "Unusually Dense Crystal Packings of Ellipsoids", by A. Donev, F. H. Stillinger, P. M. Chaikin and S. Torquato, Phys. Rev. Lett., 92:255506, 2004, [cond-mat/0110034]. 38. "Improving the Density of Jammed Disordered Packings using Ellipsoids" by A. Donev, I. Cisse, D. Sachs, E. A. Variano, F. H. Stillinger, R. Connelly, S. Torquato and P. M. Chaikin, Science, 303:990-993, 2004. 39. "Breakdown of Elasticity Theory for Jammed Hard-Particle Packings: Conical Nonlinear Constitutive Theory", by S.Torquato, A. Donev, and F. H. Stillinger, Int. J. Solids Structures, 40 (25):7143-7153, 2003. 40. "Energy-Efficient Actuation in Infinite Lattice Structures", by A. Donev and S. Torquato, J. Mech. Phys. Solids, 51(8):1459-1475, 2003. Related animations can be found in the page devoted to Design of Adaptive Periodic Trusses 41. "Jamming in Hard Sphere and Disk Packings", by A. Donev, S. Torquato, F. H. Stillinger, and R. Connelly, J. Appl. Phys., 95(3):989, 2004. 42. "A Linear Programming Algorithm to Test for Jamming in Hard-Sphere Packings", by A. Donev, S. Torquato, F. H. Stillinger, and R. Connelly, J. Comp. Phys., 197(1):139-166, 2004. 43. "Minimal Surfaces and Multifunctionality", by S.Torquato and A. Donev, Proceedings of the Royal Society of London: Mathematical, Physical and Engineering Sciences, 460(2047):1849 - 1856, 2004. 44. "Optimal design of manufacturable three-dimensional composites with multifunctional characteristics", by S.Torquato, S. Hyun and A. Donev, J. Appl. Phys., 94(9):5748-5755, 2003. 45. "Multifunctional Optimal Composite Microstructures: Simultaneous Transport of Heat and Electricity", by S.Torquato, S. Hyun and A. Donev, Phys. Rev. Lett., 89(26):266601, 2002. 46. "Random manifolds in non-linear resistor networks: applications to varistors and superconductors", by A. Donev, C. E. Musolff and P. M. Duxbury, J. Phys. A: Math. Gen., 35:L327-L333, 2002 [ 47. "Generalized von Smoluchowski model of reaction rates, with reacting particles and a mobile trap", by A. Donev, J. Rockwell and D. ben-Avraham, J. Stat. Phys., 95(1-2):97-112, 1999 [cond-mat/ As an academic I frequently give presentations at various scientific meetings, conferences, symposia, etc. Here are PDFs of selected presentations produced with the option of the Beamer latex class. Some of the latest presentations can be found under What's New? Note: The movies are typically not included in the PDF files, but some are available 1. "The Truth about diffusion (in liquids)", Applied Math Seminar at UC Berkeley February 2014. 2. "The Truth about diffusion (in liquids)", invited contribution to minisymposium on Multiscale modeling at SciCADE 2013, Valladolid, Spain, September 2013, and Applied Math Seminar at Courant October 2013. 3. "Minimally-Resolved Simulations of Suspensions of Active Brownian Particles", presentation given at the AFOSR Computational Math Program Review BRICC, Arlington VA, July 2013. 4. 
"Coupling an Incompressible Fluctuating Fluid with Suspended Structures", presentation given at the SIAM Conference on Mathematical Aspects of Materials Science, session on Multiscale Computation of Fluctuating Hydrodynamics and Microscale Mechanics, Philadelphia June 2013. 5. "Coupling an Incompressible Fluctuating Fluid with Suspended Structures", presentation given at the Mechanical Engineering Department at Northwestern University, May 2013. 6. "Coupling an Incompressible Fluctuating Fluid with Suspended Structures", presentation given at the Scientific Computing Seminar at Brown University, March 2013. 7. "Low Mach Number Fluctuating Hydrodynamics of Diffusively Mixing Fluids", talk given at the minisymposium on Hydrodynamics of Complex Fluids at the Micro and Nano-Scales, SIAM CSE13 Conference, Boston, February 2013. 8. "Coupling an Incompressible Fluctuating Fluid with Suspended Structures", invited presentation given at the workshop on Fluid-Structure Interactions in Soft-Matter Systems, Monash University Prato Center, Italy, November 2012. 9. "Computational Fluctuating Hydrodynamics Modeling of Giant Fluctuations", presentation given at the Department of Physics at Università degli Studi di Milano, Italy, November 2012. 10. "Coupling a Fluctuating Fluid with Suspended Particles", presentation at UPenn and NJIT applied mathematics seminar, October 2012. 11. "Multiscale Problems in Fluctuating Hydrodynamics", talk at the Workshop on Modelling the Dynamics of Complex Molecular Systems, Lorentz Center, Leiden, Netherlands, August 2012. Some movies can be found here. 12. "Stochastic Simulation of Complex Fluid Flows", talk at the Workshop on Multiscale Modeling in Soft Condensed Matter held at the KITP institute, UCSB, April 2012. Some movies can be found below. 13. "Numerical Methods for Fluctuating Hydrodynamics", talk at Center for Computational and Integrative Biology, Rutgers-Camden, November 2011. 14. "Coupling a Fluctuating Fluid with Suspended Particles", presentation at the CECAM workshop "Multiscale Modeling of Simple and Complex Liquid Flow Using Particle-Continuum Hybrids", Zaragoza, Spain, October 2011. 15. "Diffusive Transport Enhanced by Thermal Velocity Fluctuations", seminar at U.N.E.D., Madrid, Spain, October 2011. 16. "Numerical Methods for Fluctuating Hydrodynamics", talk at DSFD 2011, Fargo, ND, August 2011. 17. "Diffusive Transport Enhanced by Thermal Velocity Fluctuations", minisymposium on Fluctuating Hydrodynamics, ICIAM 2011, Vancouver, Canada, July 2011. 18. "Coupling a Fluctuating Fluid with Suspended Structures", minisymposium talk at ICIAM 2011, Vancouver, Canada, July 2011. 19. "A Hybrid Particle-Continuum Method Coupling a Fluctuating Fluid with Suspended Structures", plenary talk at the AMS von Neumann Symposium, Snowbird, Utah, July 6th 2011. 20. "Diffusive Transport by Thermal Velocity Fluctuations", Rochester University mathematical physics colloquium, April 15th 2011. 21. "Finite-Volume Schemes for Fluctuating Hydrodynamics", Courant Numerical Analysis Seminar, March 2011 22. "A Hybrid Particle-Continuum Approach to Hydrodynamics at Small Scales", XXVI Inter University Seminar on Mathematical Sciences Research (SIDIM), February 25, 2011, University of Puerto Rico at 23. "Enhancement of Diffusive Mass Transfer by Thermal Fluctuations", Courant Materials Working Group, February 14th 2011. 24. 
"Coarse-grained particle, continuum and hybrid models for complex fluids", at the Workshop for Multiscale Simulation of Heterogeneous Materials and Coupling of Thermodynamic Models, Leuven, Belgium, January 13th 2011. 25. "Coupling a Fluctuating Fluid with Suspended Structures", Part I. "Particle-Continuum Hybrid" (with movies), and "Part II: Inertial Stochastic Immersed Boundary Method", Courant BioMathematics Seminar, December 2010. 26. "An Event-Driven Kinetic Monte Carlo Algorithm for Reaction-Diffusion Systems", Courant Graduate Student Seminar, December 2010. 27. "Numerical Methods for Fluctuating Hydrodynamics", Courant Applied Math Seminar, September 2010. This has the movies embedded in it as flash animations so it is larger (~20MB) 28. "Finite-Volume Schemes for Fluctuating Hydrodynamics", SIAM MS10 Conference, May 2010 29. "A hybrid particle-continuum method for hydrodynamics of complex fluids", UCSB Applied Math Seminar, January 2010 30. "Fluctuating Hydrodynamics of Non-Ideal Fluids Via Stochastic Hard-Sphere Molecular Dynamics (SHSD)", DSMC09 Conference, September 2009 31. "Asynchronous Event-Driven Particle Algorithms in Computational Materials Science", AMS Meeting, January 2009 32. "Jammed Packings of Hard Particles", thesis defense presentation, June 2006 Here are some selected animations from my presentations. I often use two dimensions to make it easier to visualize. Newer animations are in the QuickTime format (extension mov). To play MNG animations (file extension mng), I use under Linux and IrfanView for Windows Older animations concerning jamming in packings of hard-particles can be found
{"url":"http://cims.nyu.edu/~donev/Publications.html","timestamp":"2014-04-18T06:15:42Z","content_type":null,"content_length":"42806","record_id":"<urn:uuid:d3947619-b8b8-4e23-b07a-a14ebb2195dd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
General solutions for choice sets: The Generalized Optimal-Choice Axiom set
Andrikopoulos, Athanasios and Zacharias, Eleftherios (2008): General solutions for choice sets: The Generalized Optimal-Choice Axiom set.
In this paper we characterize the existence of best choices of arbitrary binary relations over non-finite sets of alternatives, according to the Generalized Optimal-Choice Axiom condition introduced by Schwartz. We focus not just on the best choices of a single set X, but rather on the best choices of all the members of a family K of subsets of X. Finally we generalize earlier known results concerning the existence (or the characterization) of maximal elements of binary relations on compact subsets of a given space of alternatives.
Item Type: MPRA Paper
Original Title: General solutions for choice sets: The Generalized Optimal-Choice Axiom set
Language: English
Keywords: Generalized Optimal-Choice Axiom; maximal elements; acyclicity; consistency; ≻-upper compactness
Subjects: D - Microeconomics > D1 - Household Behavior and Family Economics > D11 - Consumer Economics: Theory
Item ID: 11645
Depositing User: Eleftherios Zacharias
Date Deposited: 23 Nov 2008 02:12
Last Modified: 15 Feb 2013 15:02
Alcantud, J. C., Characterization of the existence of maximal elements of acyclic relations, Economic Theory, 19 (2002), 407-416.
Arrow, K., Debreu, G., Existence of an equilibrium for a competitive economy, Econometrica, 22 (1954), 265-290.
Bergstrom, T. C., Maximal elements of acyclic relations on compact sets, J. Econ. Theory, 10 (1975), 403-404.
Border, K. C., Fixed point theorems with applications to economics and game theory. Cambridge: Cambridge University Press, 1985.
Borglin, A., Keiding, H., Existence of equilibrium actions and of equilibrium: A note on the new existence theorems, J. Math. Econ., 3 (1976), 313-316.
Brown, D. J., Acyclic choice. Cowles Foundation Discussion Paper No. 360, Yale University.
Campbell, D. E., Walker, M., Maximal elements of weakly continuous relations, J. Econ. Theory, 50 (1990), 459-464.
Debreu, G., A social equilibrium existence theorem, Proceedings of the National Academy of Sciences of the U.S.A., 38 (1952), 886-893.
Duggan, J., A general extension theorem for binary relations, J. Econ. Theory, 86 (1999), 1-16.
Fishburn, P. C., Condorcet social choice functions, SIAM J. Appl. Math., 33 (1977), 469-489.
Fishburn, P. C., Preference structures and their numerical representations, Theoret. Comput. Sci., 217 (1999), 359-383.
Steen, L. A., Seebach, J. A., Jr., Counterexamples in Topology, Springer-Verlag, New York, 1978.
Mehta, G., Maximal elements in Banach spaces, Ind. J. Pure Appl. Math., 20 (1989), 690-697.
Peris, J. E., Subiza, B., Maximal elements of not necessarily acyclic binary relations, Econ. Lett., 44 (1994), 385-388.
Shafer, W., Sonnenschein, H., Equilibrium in abstract economies without ordered preferences, J. Math. Econ., 2 (1975), 345-348.
Sloss, J. L., Stable points of directional preference relations. Technical Report No. 71-7, Operations Research House, Stanford University.
Sonnenschein, H., Demand theory without transitive preferences, with applications to the theory of competitive equilibrium. In: Chipman, J. S., Hurwicz, L., Richter, M. K., Sonnenschein, H. (eds.), Preferences, Utility and Demand. New York: Harcourt Brace Jovanovich, 1971.
Subiza, B., Peris, J. E., Numerical representations for lower quasi-continuous preferences, Math. Soc. Sci., 33 (1997), 149-156.
Suzumura, K., Remarks on the theory of collective choice, Economica, 43 (1976), 381-390.
Suzumura, K., Rational choice, collective decisions and social welfare, Cambridge University Press, New York, 1983.
Suzumura, K., Upper semicontinuous extensions of binary relations, J. Math. Econ., 37 (2002), 231-246.
Schwartz, T., The logic of Collective Choice, New York: Columbia University Press.
Van Deemen, M. A., Coalition formation and social choice, Academic Publishers, Dordrecht, 1997.
Walker, M., On the existence of maximal elements, J. Econ. Theory, 16 (1995), 470-474.
Yannelis, N., Prabhakar, N., Existence of maximal elements and equilibria in linear topological spaces, J. Math. Econ., 12 (1983), 233-245.
Yannelis, N., Maximal elements over noncompact subsets of linear topological spaces, Economics Letters, 17 (1985), 133-136.
Zhou, J., Tian, G., Transfer method for characterizing the existence of maximal elements of binary relations on compact or noncompact sets, SIAM Journal of Optimization, 2 (3), (2002).
URI: http://mpra.ub.uni-muenchen.de/id/eprint/11645
{"url":"http://mpra.ub.uni-muenchen.de/11645/","timestamp":"2014-04-21T00:34:28Z","content_type":null,"content_length":"23757","record_id":"<urn:uuid:079879bf-0606-4914-b480-3eed4a0c16fc>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
The Excel Chart Series Formula
Introduction to the Series Formula
An individual series' data ranges are highlighted when that series is highlighted (see below). The Y values in the series are within the blue outlined range. The Category (X axis) values are outlined in purple, and the Series name (legend entry) is outlined in green. Each outline has a handle -- a small square of the same color -- at its lower right corner. The source data range of the series can be enlarged or contracted by dragging these handles. The source data can be moved by dragging one of the colored outlines. In the screen shot above, note that information about the series is displayed above the worksheet. The Name Box contains the phrase Series "February", while the Formula Bar contains the series definition formula:
=SERIES(Sheet1!$B$4,Sheet1!$C$2:$F$2,Sheet1!$C$4:$F$4,2)
This formula can be broken up into four elements as follows:
=SERIES([Series Name],[X Values],[Y Values],[Plot Order])
Note: Bubble Charts require one additional range of data, for the sizes of the bubbles. This is included in the chart series formula as a fifth argument. This argument can be dealt with in essentially the same manner as the [Y Values], per the discussion below.
=SERIES([Series Name],[X Values],[Y Values],[Plot Order],[Bubble Size Values])
In our example:
Sheet1!$B$4 contains the Series Name ("February")
Sheet1!$C$2:$F$2 contains the Category Labels ("Apples", "Oranges", "Grapes", "Bananas")
Sheet1!$C$4:$F$4 contains the Y Values (8, 11, 3, 7)
and the series is plotted second (2) among the chart's series collection.
The Series Name can be blank, a text string in double quotation marks, a reference to a worksheet range (one or more cells), or a reference to a named range (named formula).
The X Values can be blank, a literal array of numeric values or text labels enclosed in curly braces, a reference to a worksheet range (one or more cells), or a reference to a named range (named formula).
The Y Values can be a literal array of numeric values enclosed in curly braces, a reference to a worksheet range (one or more cells), or a reference to a named range (named formula).
The Plot Order can be a whole number between 1 and the number of series in the chart.
Editing the Series Formula
Editing Ranges
The series formula is an Excel formula like any other. You can click in the formula bar, and edit the formula manually, to change aspects of the charted series. Select part of the formula, type in the text you want there instead, and press Enter to apply the changes (or sometimes to cause an error!).
For example, start with the series formula:
=SERIES(Sheet1!$B$4,Sheet1!$C$2:$F$2,Sheet1!$C$4:$F$4,2)
Select just the "C" in the series X values range to change the first column in this range. Select "Sheet1" if we want to take the X values from another sheet. Select the "2" at the end to rearrange the plot order of the series.
If you change this to a lower number, the edited series will move to the lower number, and shift all series between its old and new position one number higher. If you enter zero, Excel will assume you mean 1 and proceed accordingly.
If you change this to a higher number, the edited series will move up, and shift all series between its old and new position one number lower. If you enter a number greater than the number of series, the edited series will be given a number equal to the number of series.
Select the entire X values element of the series formula to completely change the reference.
Hint: When you select the sheet and cell range of one of the series formula elements, you can type in your preferred text for this element, but you can also drag with the mouse to select another range. Click on the sheet tabs below to select a range on another worksheet in this workbook, or select from another open workbook using the Windows menu (below) or Ctrl-Tab to switch to the other workbook.
When the series data source has been changed to another workbook, the formula includes the name of the other workbook, as shown below.
It is easier and more error-proof to use the Source Data dialog to edit the ranges in the series formula.
Entering Values Instead of Ranges
To enter a series of numbers as an array, you must separate the values with commas and enclose the series in curly braces, for example: {8,11,3,7}
To enter a series of text labels as an array (X values only!), you must enclose each value in quotes, separate the values with commas and enclose the series in curly braces, for example: {"Apples","Oranges","Grapes","Bananas"}
To enter a text label for the series name, enclose the label within quotes, for example: "February"
When entered as arrays, our example series formula is transformed to this:
=SERIES("February",{"Apples","Oranges","Grapes","Bananas"},{8,11,3,7},2)
It is easier and more error-proof to use the Source Data dialog to enter non-range values into the series formula.
If you enter something Excel doesn't like, you will get one of the standard Formula Error dialog boxes, shown below in decreasing order of information content. You can revert to the previous formula by pressing Esc.
• You may have typed the name of a non-existent sheet.
• You may have typed the name of a non-existent named range.
• Check that the fully qualified range, in the form SheetName!Address, has been entered.
Add a Series with the Series Formula
A handy technique for adding a series to a chart involves series formula manipulation. The procedure is straightforward:
1. Click on the series to be copied
2. Select the entire series formula
3. Copy (Ctrl-C)
4. Select the chart's Plot Area
5. Paste (Ctrl-V)
6. Edit this formula now, or anytime later
7. Press Enter
8. Repeat as necessary
This technique is particularly suited to adding several series whose formulas differ only slightly from the original series. For example, when all the Y values come from parallel ranges, and all you need to do is change Sheet1!$C$2:$F$2 in the series formula to Sheet1!$C$3:$F$3, then to Sheet1!$C$4:$F$4.
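For readers who script Excel: the same series anatomy (name, categories, values, plot order) can also be built programmatically. Below is a rough sketch using Python's openpyxl library — an assumption on my part, since the article itself edits the formula by hand, and the API details may vary by openpyxl version. The worksheet layout mirrors the article's example.

from openpyxl import Workbook
from openpyxl.chart import BarChart, Reference, Series

wb = Workbook()
ws = wb.active
ws["B4"] = "February"                                   # the Series Name cell
for col, label in zip("CDEF", ["Apples", "Oranges", "Grapes", "Bananas"]):
    ws[f"{col}2"] = label                               # Category Labels in C2:F2
for col, value in zip("CDEF", [8, 11, 3, 7]):
    ws[f"{col}4"] = value                               # Y Values in C4:F4

chart = BarChart()
values = Reference(ws, min_col=3, max_col=6, min_row=4, max_row=4)  # C4:F4
cats = Reference(ws, min_col=3, max_col=6, min_row=2, max_row=2)    # C2:F2
series = Series(values, title="February")               # the series name
chart.series.append(series)                             # plot order = position in this list
chart.set_categories(cats)
ws.add_chart(chart, "H2")
wb.save("series_demo.xlsx")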
{"url":"http://peltiertech.com/Excel/ChartsHowTo/ChartSeriesFormula.html","timestamp":"2014-04-18T01:15:08Z","content_type":null,"content_length":"17720","record_id":"<urn:uuid:13151e61-2ce8-4a64-be41-cb5d6b93af1f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - motion in space: velocity and acceleration
popo902 Sep30-09 10:40 PM
motion in space: velocity and acceleration
1. The problem statement, all variables and given/known data
A gun has a muzzle speed of 80 meters per second. What angle of elevation should be used to hit an object 150 meters away? Neglect air resistance and use g = 9.8 m/sec^2 as the acceleration of gravity.
2. Relevant equations
x = (v cos θ)t
y = (v sin θ)t - (1/2)gt^2
g = 9.8 m/sec^2
3. The attempt at a solution
i substituted the given values: x should be 150 and y should be 0 right?
150 = (80 cos θ)t
0 = (80 sin θ)t - 4.9t^2
i can't figure out how to eliminate a variable or how to begin to solve for θ
i solved for t using the "x" equation ( t = 15/(8 cos θ) ) and put that into the "y" equation and i got this...
0 = 150 tan θ - (73.5/64)(sec θ)^2
where do i go from here to find θ?
aPhilosopher Sep30-09 10:56 PM
Re: motion in space: velocity and acceleration
One thing I can recommend is to use symbols until you have a quadratic to solve and then plug in numbers. It's easier to spot mistakes and somebody trying to help you can follow the equations without having to do arithmetic in their heads. It always helps me but I hate arithmetic ;)
I wonder if you could substitute something for sec^2 θ. There might be a trig identity. It might even involve tan^2 θ which would be sweet 'cause then you'd have a quadratic for the tangent.
popo902 Sep30-09 11:33 PM
Re: motion in space: velocity and acceleration
oh (sec θ)^2 = 1 + (tan θ)^2 !!
i'll make θ = Q since it's kind of a hassle to type it all the time...
so far its 0 = 150 tan Q - (73.5/64)(1 + (tan Q)^2)
and i did the quadratic formula but that gives me values when tan Q = 2 different long numbers that i got
then i untangent it ( or tan^-1) and both numbers aren't close to the right answer
...what am i doing wrong?
p.s. i hate arithmetic too
aPhilosopher Oct1-09 07:21 AM
Re: motion in space: velocity and acceleration
You're doing too much arithmetic. Post the quadratic you get up here without any numbers in it (Well, 2 in the exponent is acceptable) and somebody will check it out for you. g=g, v=v, x=x, etc. For example, t = (x/v)*sec Q
Re: motion in space: velocity and acceleration
with no numbers??? i don't understand but here's my equation at first though...:
x = tan Q
0 = 150x - (73.5/64)(1 + x^2)
0 = 150x - 73.5/64 - (73.5/64)x^2
then (multiplied everything by -1)
0 = (73.5/64)x^2 - 150x + 73.5/64
then i put it in the quadratic formula ( x = [-b +/- sqrt(b^2 - 4ac)] / 2a ) with x = tan Q
tan Q = [150 +/- sqrt(22500 - 4(73.5/64)^2)] / 2(73.5/64)
simplified a bit....
tan Q = [150 + sqrt(22500 - ~5.3)] / (73.5/32)
tan Q = [150 - sqrt(22500 - ~5.3)] / (73.5/32)
after i get the value of both right hand sides, i get the tan^-1 of those values (in radians cuz the answer is in radians) and they both DON'T get me this answer: 0.115878293158862 radians
am i doing something wrong?
This way, all the arithmetic errors can be contained to one part of the calculation and it cuts down the arithmetic involved over all, hence making a mistake less likely.
Re: motion in space: velocity and acceleration
oh ok t = x/(v cos Q)
y = (v sin Q)[x/(v cos Q)] - (g/2)[x/(v cos Q)]^2
y = x sin Q/cos Q - (g/2)(x/v)(1/cos Q)^2
y = x(tan Q) - (g/2)(x/v)(1 + (tan Q)^2)
y = (g/2)(x/v)(tan Q)^2 + x tan Q + (g/2)(x/v)
that's right, right?
then i put it into:
tan Q = [-x +/- sqrt( x^2 - 4((g/2)(x/v))^2 )] / 2((g/2)(x/v))
after that:
tan^-1(tan Q)
that's right too right?
aPhilosopher Oct3-09 06:33 PM
Re: motion in space: velocity and acceleration
x and v get squared in the t^2 term and you have a sign error in distributing -(g/2)(x^2/v^2). The best thing is to substitute the numbers in just before you solve the quadratic if you are going to use the quadratic formula. It's much less awkward that way.
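For what it's worth (an addition, not part of the thread): once the algebra is done correctly, y = 0 reduces to sin(2θ) = gx/v², so θ = ½ arcsin(gx/v²). A minimal Python check, which reproduces the 0.115878… rad figure quoted above:

import math

v, x, g = 80.0, 150.0, 9.8
theta = 0.5 * math.asin(g * x / v**2)   # low-trajectory solution
print(theta)                            # 0.11587829... rad

# cross-check against the corrected tan/sec form: 0 = x tan(Q) - (g/2)(x/v)^2 sec^2(Q)
residual = x * math.tan(theta) - (g / 2) * (x / v)**2 / math.cos(theta)**2
print(residual)                         # ~0 (up to roundoff)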
{"url":"http://www.physicsforums.com/printthread.php?t=341873","timestamp":"2014-04-21T12:25:55Z","content_type":null,"content_length":"11437","record_id":"<urn:uuid:8e85ee49-670f-4ab7-be32-9aeae0798b06>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Parameterize the motion of an object traveling a circular path.
August 15th 2011, 06:26 PM
Parameterize the motion of an object traveling a circular path
Parameterize the motion of an object traveling a circular path clockwise beginning at a point (4,0). The object completes one revolution in 3.14 seconds. It has to be in the form of x=sin( ) and y=cos( ).
August 15th 2011, 07:50 PM
Re: Need help understanding this please!!!!!!!!!
What have you tried? Where are you stuck?
August 16th 2011, 04:47 AM
Re: Need help understanding this please!!!!!!!!!
Your answer can't be in the form x= cos( ), y= sin( ) because that will never give (4, 0). Did you mean x= Acos( ), y= Asin( ) for some constant, A?
August 16th 2011, 05:23 AM
Re: Parameterize the motion of an object traveling a circular path.
Hello, cdiesel89!
Please give us the original wording.
Parameterize the motion of an object traveling a circular path clockwise beginning at a point (4,0). . Where is the center?
The object completes one revolution in 3.14 seconds.
It has to be in the form: . $x=\sin(\;),\;\;y=\cos(\;)$
The given forms are for a unit circle (radius 1).
There are a brizillion possible circles.
One of them has its center at (3,0).
Its equation is: . $(x-3)^2+ y^2 \:=\:1$
The parametric equations are: . $\begin{Bmatrix}x &=& 3 + \sin\theta \\ y &=& \cos\theta \end{Bmatrix}$
HallsofIvy is correct.
If the center is at the Origin and the radius is to be 4,
. . the forms must have a leading coefficient.
One set of parametric equations is: . $\begin{Bmatrix}x &=& 4\sin\theta \\ y &=& 4\cos\theta \end{Bmatrix}$
If the period is to be $\pi$ seconds, let $\theta = 2t$
. . where $t$ is in seconds.
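As a numerical sanity check (an addition, not from the thread): one parameterization in the requested x = sin( ), y = cos( ) form that satisfies all three stated conditions — start at (4, 0), clockwise motion, period π seconds — is x = 4 sin(2t + π/2), y = 4 cos(2t + π/2). A few lines of Python verify it:

import math

def pos(t):
    return 4 * math.sin(2*t + math.pi/2), 4 * math.cos(2*t + math.pi/2)

print(pos(0))            # (4.0, ~0): starts at (4, 0)
print(pos(0.01))         # y just below 0: the motion is clockwise
print(pos(math.pi))      # back to (4.0, ~0): the period is pi seconds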
{"url":"http://mathhelpforum.com/pre-calculus/186204-parameterize-motion-object-traveling-circular-path-print.html","timestamp":"2014-04-20T21:06:34Z","content_type":null,"content_length":"7369","record_id":"<urn:uuid:5f06752c-21f8-465e-a42a-04f943c73c7a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Kinetics Question Welcome to PF, aquapod17. You did part (a) correctly. The relevant equation is the one that relates speed, distance, and time to each other. (You must have been told this equation, or it's in your textbook ... otherwise you would not be asked to do a problem like this.) (b) Instead of 10 miles, what would the distance be so that your answer is 15 minutes? You already know, from part (a), if that distance is greater than or less than 10 miles.
{"url":"http://www.physicsforums.com/showthread.php?t=261534","timestamp":"2014-04-16T04:33:12Z","content_type":null,"content_length":"28201","record_id":"<urn:uuid:e8456b5f-69c2-46d6-867e-a375b5d792cb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Modern Physics/The Doppler Effect
From Wikibooks, open books for an open world
The Doppler Effect
You have probably heard how the pitch of a train horn changes as it passes you. When the train is approaching, the pitch or frequency is higher than when it is moving away from you. This is called the Doppler effect. A similar, but distinct effect occurs if you are moving past a source of sound. If a stationary whistle is blowing, the pitch perceived from a moving car is higher while moving toward the source than when moving away. The first case thus has a moving source, while the second case has a moving observer.

In this section we will compute the Doppler effect as it applies to light moving through a vacuum. The figure below shows the geometry for computing the time between wave fronts of light for a stationary and a moving reference frame.
• The time between wavefronts for the stationary observer, in the stationary frame, is T.
• The time between wavefronts for the moving observer, in the stationary frame, is T′.
• The time between wavefronts for the moving observer, in the moving frame, is τ.

Since the world lines of the wave fronts have a slope of unity, the sides of the shaded triangle both have the same value, X. If the observer is moving at speed U, the slope of the observer's world line is c/U, i.e.

$\frac{c}{U}=\frac{cT'}{X}$

Solving this for X (which gives $X=UT'$) and substituting into $cT'=cT+X$ gives

$T'=\frac{T}{1-\frac{U}{c}} \quad (1)$

In classical physics T′ and τ are the same so this formula, as it stands, leads directly to the classical Doppler shift for a moving observer. However, in relativity T′ and τ are different. We can use the Lorentz transformation to correct for this. The second wavefront passes the moving observer at (UT′, cT′) in the stationary observer's frame, but at (0, cτ) in its own frame. The Lorentz transform tells us that

$c\tau = \gamma \left( cT'-\frac{U}{c}UT' \right) = cT' \gamma \left( 1 - \frac{U^2}{c^2} \right) = cT'/ \gamma$

Substituting in equation (1) gives

$\begin{matrix} \tau & = & T \frac{ \sqrt{ 1 - U^2/c^2 }}{1-U/c} \\ & = & T \sqrt{ \frac{ (1-U/c)(1+U/c)}{(1-U/c)^2}} \\ & = & T \sqrt{ \frac{c+U}{c-U} } \end{matrix}$

From this we infer the relativistic Doppler shift formula for light in a vacuum:

$\omega^{\prime}= \omega \sqrt{ \frac{c-U}{c+U} }$

since frequency is inversely proportional to the time between wavefronts.

We could go on to determine the Doppler shift resulting from a moving source. However, by the principle of relativity, the laws of physics should be the same in the reference frame in which the observer is stationary and the source is moving. Furthermore, the speed of light is still c in that frame. Therefore, the problem of a stationary observer and a moving source is conceptually the same as the problem of a moving observer and a stationary source when the wave is moving at speed c. This is unlike the case for, say, sound waves, where the moving observer and the moving source yield different formulas for the Doppler shift.
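A quick numerical check of the result (mine, not the wikibook's; units with c = 1 and an arbitrary test speed U = 0.5c): the derived τ = T·sqrt((c+U)/(c−U)) should equal T′/γ with T′ = T/(1 − U/c).

import math

c, U, T = 1.0, 0.5, 1.0
Tp = T / (1 - U/c)                      # wavefront spacing in the stationary frame, eq. (1)
gamma = 1 / math.sqrt(1 - U**2 / c**2)
tau = Tp / gamma                        # proper time between wavefronts
print(tau, T * math.sqrt((c + U) / (c - U)))   # both print 1.7320508...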
{"url":"http://en.wikibooks.org/wiki/Modern_Physics/The_Doppler_Effect","timestamp":"2014-04-19T23:16:45Z","content_type":null,"content_length":"28725","record_id":"<urn:uuid:23dca26a-5710-43b3-97ed-37fe7b46b9b6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Math applications; was Wheel of God From: george murphy (gmurphy@raex.com) Date: Thu Aug 09 2001 - 16:53:43 EDT Next message: SteamDoc@aol.com: "Re: The Wheel of God" "D. F. Siemens, Jr." wrote: > Sorry to break this off, but this part involves something on which a > comment is warranted. First, math is a broad area in which all the > statements are "true in all possible worlds." This applies to arithmetic > and number theory, geometry, the calculus, complexity theory, and all the > rest of the special areas. This does not mean that all the theorems apply > equally to all situations. Both "one line parallel to a given straight > line can be drawn through a point on the plane not on the given line" and > "there are no parallel lines" cannot be true when applied within any > world simultaneously. They hold only in Euclidean and Riemannian > geometries. Something similar holds for ordinary and modular arithmetics. > Since within a specific mathematical calculus or system all statements or > theorems are proved "true," if a set of interpreted statements holds, one > may interpret other theorems in the expectation that they will also yield > physical truths. There is no guarantee that the process will yield an > unending series of truths. Changes may be required at some point. This is > seen most obviously in Newton's work, which may be viewed as an > interpreted Euclidean geometry. Everything worked for a couple of > centuries. But the next step was Einstein's interpreted Riemannian > geometry (Special Theory) which provided for Newton's results as a > limiting case. The math gets stickier with the General Theory. > As I see it, the problem from the empirical side is collecting enough > data on which a discipline can be based; on the theoretical, a system > that fits the observations, so that predictions can be made. This leads > to expansion and testing. I recall a report in the 50s that the > mathematics of something Einstein produced was so difficult that only > three persons living had the background to understand it. One of them had > taken the time to work through the math (18 months ?) and announced that > it was consistent. The report quoted him, "I'm only a mathematician. I > can't comment on the physics." I'm only a philosopher, so I don't know > what Einstein had presented, nor whether it was an advance or a dead end. > But I suspect that one reason the applicability of mathematics is lauded > is that the dead ends are not pursued and are soon forgotten. This type > of selectivity certainly functions in other areas of human recall. So I'm > not too surprised that our mathematics fit the world. Dave - 3 comments, in (I think) ascending order of importance. 1) A possible source of confusion (not just in your post) is the ambiguity of the term "Riemannian geometry." This can mean either a. the result of choosing the alternative to the Euclidean parallel postulate which Dave describes, a geometry which can be realized on the surface of a sphere with antipodes identified, or b. a differential metric geometry - i.e., one in which a unique separation is defined between any two nearby points, but whose properties may vary from one point to another. It's the 4-D version of this that Einstein used for general relativity. 2) From the date I suspect that the other theory of Einstein which you mention is his last attempt at a unified field theory which used a differential geometry more general than that of Riemann. 
(It's described in Appendix II of the last edition of Einstein's The Meaning of Relativity.) The notion that only 3 living persons had the background to understand it was nonsense - like "Only six men in the world understand relativity." The math is somewhat more complicated than that of general relativity but not qualitatively so. I did some work on it - or more precisely, on a closely related approach of Schroedinger's. Unfortunately I've come to be about 98% sure that it's a dead end as far as physics is concerned.
3) Yes, a lot of theories which are mathematically consistent (at least as much as Goedel will allow), beautiful, &c are bad physics - i.e., they don't correspond with the real world. I think the significance of the discovery of consistent non-Euclidean geometries by Bolyai & Lobachevski isn't often enough noted in this regard. This showed that more than one consistent mathematical pattern for a world is possible. Thus one can't, in platonic fashion, regard the physical world as a representation of a unique math. Even if one wants to think in semi-platonic fashion, the creator must have had some choice of which pattern to use. This is why Torrance, e.g., has insisted on the idea of the contingent rationality of creation. I don't think that this makes what Wigner called the "unreasonable effectiveness" of math in describing the world any less remarkable. There is no a priori reason why our experience of the physical world should correspond closely to any math pattern.
George L. Murphy
"The Science-Theology Dialogue"
{"url":"http://www2.asa3.org/archive/asa/200108/0124.html","timestamp":"2014-04-16T10:44:44Z","content_type":null,"content_length":"9444","record_id":"<urn:uuid:7100b9aa-ba92-42bc-aec4-bc23b511e97c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Woburn SAT Math Tutor Find a Woburn SAT Math Tutor ...I also have experience tutoring these topics. Classes I have tutored: MIT 6.042 - Mathematics for Computer Science I took linear algebra at MIT, and the concepts I learned reappeared ubiquitously throughout my engineering coursework. Because of this, I am very familiar with both the theory and engineering applications. 8 Subjects: including SAT math, physics, calculus, differential equations ...I always enjoyed learning the countries, state and their capitals and remember them to this day. Aacademically, I got a 4 in AP Human Geography and took several courses in college in Geography. I have very strong English skills that come from being an avid reader and love of classic literature and the various courses I have taken throughout my academic career. 28 Subjects: including SAT math, English, reading, writing ...At the undergraduate level, I have worked as a laboratory instructor in both introductory biology as well as anatomy and physiology. I have also taught at the middle school level as a teaching partner in two 6th grade earth science classes, where I helped individual students as well as led group lessons. I have previous tutoring experience at the college level in both statistics and 22 Subjects: including SAT math, reading, writing, biology I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College. 13 Subjects: including SAT math, chemistry, calculus, geometry ...That is why I am one of the busiest tutors in Massachusetts and the United States (top 1% across the country). I provide 1-on-1 instruction in all levels of math and English, including test preparation (SAT, GMAT, LSAT, GRE, ACT, SAT II, SSAT, PSAT, TOEFL; English reading and writing, Algebra I... 67 Subjects: including SAT math, English, calculus, reading
{"url":"http://www.purplemath.com/Woburn_SAT_math_tutors.php","timestamp":"2014-04-18T05:53:39Z","content_type":null,"content_length":"24079","record_id":"<urn:uuid:c894a1db-4660-449a-b50e-f788da40045e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
3 Methods for Finding Least Common Denominator Date: 02/04/98 at 10:25:37 From: Vera Rooney Subject: Finding least common denominator My fifth graders are having a very difficult time understanding how to get the least common denominator. Any shortcuts? Thank you. Date: 02/04/98 at 16:03:11 From: Doctor Rob Subject: Re: Finding least common denominator Shortcuts, no. I use three methods when faced with a problem like First of all, the least common denominator is a multiple of all the denominators. In fact it is the least common multiple (LCM) of all the denominators. One can find the LCM of a set of numbers by doing it two numbers at a time. Find the LCM of the first two numbers. Then find the LCM of that result and the third number. That will be the LCM of the first three numbers. Now find the LCM of this result and the fourth number, and so on. This reduces the problem to dealing with just two numbers at a time. First method: Write down the multiples of the two numbers in two lists. Find the numbers which are on both lists, and pick the smallest. Example: LCM of 12 and 18. Multiples of 12 are {12, 24, 36, 48, 60, 72, ...}. Multiples of 18 are {18, 36, 54, 72, ...}. Numbers common to both lists are {36, 72, ...}. The smallest is 36, the answer. Second method: Factor the two numbers into products of powers of prime numbers. Create a new number multiplying together all the primes occurring in either number, raised to the higher of the two exponents. That is the Example: LCM of 45 and 12. 45 = 3^2*5, 12 = 2^2*3. New number = 2^2*3^2*5 = 180, the LCM sought. Third method: Find the greatest common divisor of the two numbers. Multiply the two numbers together and divide by the greatest common divisor. (You have to know how to find the greatest common divisor to be able to use this method. Do your students know how to do that?) Example: LCM of 45 and 12. Greatest common divisor is 3. LCM is 45*12/3 = 180. I hope this helps. -Doctor Rob, The Math Forum Check out our web site! http://mathforum.org/dr.math/
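The third method in particular is easy to automate. A short Python sketch (an illustration, not part of Dr. Rob's answer), using the Euclidean algorithm for the greatest common divisor and reducing a list two numbers at a time, as the answer describes:

def gcd(a, b):
    # Euclidean algorithm for the greatest common divisor
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # third method: multiply the two numbers, then divide by their GCD
    return a * b // gcd(a, b)

def lcm_list(numbers):
    # two at a time: LCM of the first two, then that result with the next, ...
    result = numbers[0]
    for n in numbers[1:]:
        result = lcm(result, n)
    return result

print(lcm(45, 12))           # 180, matching the worked example
print(lcm_list([12, 18]))    # 36, matching the first method's example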
{"url":"http://mathforum.org/library/drmath/view/58140.html","timestamp":"2014-04-20T01:55:18Z","content_type":null,"content_length":"7116","record_id":"<urn:uuid:c50b5eea-b6b8-4946-9c0e-65dae172ec6b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
The sum of a number and its additive inverse is equal to _____. Weegy: In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x-1, is a number which when multiplied by x yields the multiplicative identity, 1. The multiplicative inverse of a fraction a/b is b/a. [ For the multiplicative inverse of a real number, divide 1 by the number. For example, the reciprocal of 5 is one fifth (1/5 or 0.2), and the reciprocal of 0.25 is 1 divided by 0.25, or 4. The reciprocal function, ...
{"url":"http://www.weegy.com/?ConversationId=200BB0C6","timestamp":"2014-04-18T20:49:05Z","content_type":null,"content_length":"40531","record_id":"<urn:uuid:a1e99955-5eb6-44df-a804-0d7c1761b0f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Adopt Mersenne Twister 64bit?
Siu Kwan Lam siu@continuum...
Sun Mar 10 13:12:27 CDT 2013
Hi all,
I am redirecting a discussion on the github issue tracker here. My original post (https://github.com/numpy/numpy/issues/3137):
"The current implementation of the RNG seems to be MT19937-32. Since 64-bit machines are common nowadays, I am suggesting adding or upgrading to MT19937-64. Thoughts?"
Let me start by answering njsmith's comments on the issue tracker:
> Would it be faster?
Although I have not benchmarked the 64-bit implementation, it is likely that it will be faster on a 64-bit machine since the number of iterations (controlled by NN and MM in the reference implementation http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/C-LANG/mt19937-64.c) is reduced by half. In addition, each generation in the 64-bit implementation produces a 64-bit random int which can be used to generate a double-precision random number, unlike the 32-bit implementation, which requires generating a pair of 32-bit random ints. But, on a 32-bit machine, a 64-bit instruction is translated into 4 32-bit instructions; thus, it is likely to be slower. (1)
> Use less memory?
The amount of memory used will remain the same. The size of the RNG state is the same.
> Provide higher quality randomness?
My naive answer is that the 32-bit and 64-bit implementations have the same 2^19937-1 period. Need to do some research and experiments.
> Would it change the output of this program:
> import numpy
> numpy.random.seed(0)
> print numpy.random.random()
> ?
Unfortunately, yes. The 64-bit implementation generates a different random number sequence with the same seed. (2)
My suggestion to overcome (1) and (2) is to allow the user to select between the two implementations (and possibly different algorithms in the future). If the user does not provide a choice, we use the MT19937-32 by default.
numpy.random.set_state("MT19937_64", …) # choose the 64-bit implementation
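To make the point about double-precision generation concrete, here is a small Python sketch (illustrative only — it uses random.getrandbits as a stand-in for raw Mersenne Twister output; the bit manipulations follow the reference implementations' genrand64_real2 and genrand_res53 routines):

import random

def double_from_64bit(x):
    # 64-bit MT: one output word yields a 53-bit double in [0, 1)
    return (x >> 11) * (1.0 / 9007199254740992.0)        # 9007199254740992 = 2**53

def double_from_32bit_pair(x, y):
    # 32-bit MT: two output words must be combined (27 + 26 = 53 bits)
    a, b = x >> 5, y >> 6
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0)  # 67108864 = 2**26

print(double_from_64bit(random.getrandbits(64)))
print(double_from_32bit_pair(random.getrandbits(32), random.getrandbits(32)))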
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2013-March/065778.html","timestamp":"2014-04-18T23:37:40Z","content_type":null,"content_length":"4941","record_id":"<urn:uuid:0fec4e78-7cb7-4204-8427-ae0236e0b259>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Schriften zur Funktionalanalysis und Geomathematik
This report gives an insight into the basics of stress field simulations for geothermal reservoirs. The quasistatic equations of poroelasticity are deduced from constitutive equations, balance of mass and balance of momentum. Existence and uniqueness of a weak solution are shown. In order to find an approximate solution numerically, the use of the so-called method of fundamental solutions is a promising approach. The idea of this method as well as a sketch of how convergence may be proven are given.
Locally Supported Wavelets for the Separation of Spherical Vector Fields with Respect to their Sources (2011)
We provide a space domain oriented separation of magnetic fields into parts generated by sources in the exterior and sources in the interior of a given sphere. The separation itself is well-known in geomagnetic modeling, usually in terms of a spherical harmonic analysis or a wavelet analysis that is spherical harmonic based. However, it can also be regarded as a modification of the Helmholtz decomposition for which we derive integral representations with explicitly known convolution kernels. Regularizing these singular kernels allows a multiscale representation of the magnetic field with locally supported wavelets. This representation is applied to a set of CHAMP data for crustal field modeling.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/series/id/16171/start/0/rows/10/doctypefq/report/yearfq/2011","timestamp":"2014-04-16T11:50:29Z","content_type":null,"content_length":"24075","record_id":"<urn:uuid:74734f38-0341-4a49-8596-4705e44ac236>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics
Determination of a control parameter in the two-dimensional diffusion equation. (English) Zbl 0982.65103
Summary: This paper considers the problem of finding $w=w(x,y,t)$ and $p=p(t)$ which satisfy $w_t = w_{xx} + w_{yy} + p(t)\,w + \phi$ in $R\times(0,T]$, $w(x,y,0)=f(x,y)$, $(x,y)\in R=[0,1]\times[0,1]$, where $w$ is known on the boundary of $R$ and also $\int_0^1\!\int_0^1 w(x,y,t)\,dx\,dy = E(t)$, $0<t\le T$, where $E(t)$ is known. Three different finite-difference schemes are presented for identifying the control parameter $p(t)$, which produces, at any given time, a desired energy distribution in a portion of the spatial domain. The finite difference schemes developed for this purpose are based on the (1,5) fully explicit scheme, the (5,5) Noye-Hayman (N-H) fully implicit technique, and the Peaceman and Rachford (P-R) alternating direction implicit (ADI) formula. These schemes are second-order accurate. The ADI scheme and the 5-point fully explicit method use less central processor (CPU) time than the (5,5) N-H fully implicit scheme. The P-R ADI scheme and the (5,5) N-H fully implicit method have a larger range of stability than the (1,5) fully explicit technique. The results of numerical experiments are presented, and CPU times needed for this problem are reported.
65M32 Inverse problems (IVP of PDE, numerical methods)
65M06 Finite difference methods (IVP of PDE)
35K15 Second order parabolic equations, initial value problems
35R30 Inverse problems for PDE
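For orientation (a standard identity, not necessarily the exact construction used by the schemes in the paper): integrating the differential equation over $R$ and applying the divergence theorem shows how the energy specification determines the control parameter,
$$E'(t) = \oint_{\partial R} \frac{\partial w}{\partial n}\,ds + p(t)\,E(t) + \int_0^1\!\int_0^1 \phi\,dx\,dy,$$
so that
$$p(t) = \frac{1}{E(t)}\left(E'(t) - \oint_{\partial R} \frac{\partial w}{\partial n}\,ds - \int_0^1\!\int_0^1 \phi\,dx\,dy\right),$$
i.e., each time level's $p$ is computable from the boundary fluxes of $w$, the source term, and $E(t)$.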
{"url":"http://zbmath.org/?q=an:0982.65103","timestamp":"2014-04-19T19:44:05Z","content_type":null,"content_length":"24152","record_id":"<urn:uuid:51d350df-ee2c-4575-a20d-84b8664cc63c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Symbolic Mathematics implementation in JAVA
I am trying to build an expression parser which evaluates the value of a mathematical expression. However, the problem arises for certain categories of expressions which evaluate to an irrational number.
Let's take an example such as (√2)². This should evaluate to 2. However, due to the way the logic of the program is coded, it returns a fractional floating-point number.
First √2 is evaluated, which equals 1.4142135, and then the result is squared, giving 1.9999998.
Presently, all I could do is send the expression to Mathematica via JLink and then use the result. However, this relies on third-party software. I want to know if this whole thing can be implemented in Java.
java wolfram-mathematica jlink
There is a related question which should give you some clues. – ShyJ Nov 22 '12 at 15:29
2 Answers
Accepted answer:
Yes, you can certainly implement an expression parser in Java. I think your fundamental error is not in Java programming but in program design. You should not evaluate surds until as late as possible. Instead you should be rewriting expressions like (sqrt(n))^2 to sqrt(n)*sqrt(n) and sqrt(n)*sqrt(n) to n. Only then should you consider converting n into an equivalent floating-point number.
What you have identified as an error in your approach is a feature of floating-point arithmetic which you will struggle to fight against. A better strategy is to work around it by implementing symbolic operations on symbolic expressions.
Another answer:
double value = 2;
double power = 2;
double result = Math.pow(value, power); // 4.0
result = Math.sqrt(result);             // 2.0
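To make the accepted answer's rewrite idea concrete, here is a minimal sketch — written in Python for brevity, though the same expression-tree structure ports directly to Java classes; all class and function names are illustrative, not from any existing library:

class Sqrt:
    def __init__(self, arg):
        self.arg = arg

class Pow:
    def __init__(self, base, exp):
        self.base, self.exp = base, exp

def simplify(node):
    # rewrite (sqrt(n))^2 -> n symbolically, before any floating-point evaluation
    if isinstance(node, Pow) and node.exp == 2 and isinstance(node.base, Sqrt):
        return node.base.arg
    return node

print(simplify(Pow(Sqrt(2), 2)))   # 2, exactly -- no 1.9999998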
{"url":"http://stackoverflow.com/questions/13515761/symbolic-mathematics-implementation-in-java?answertab=votes","timestamp":"2014-04-23T12:37:58Z","content_type":null,"content_length":"67386","record_id":"<urn:uuid:4e75e635-93dd-4658-ab22-25a0742e1f4f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
The Interior-Point Revolution in Constrained Optimization
Results 1 - 10 of 13

[2000] Cited by 41 (4 self)
n Automatic Control, pages 898-907, 1990. J. Shamma and M. Athans. Guaranteed properties of gain scheduled control for linear parameter-varying plants. Automatica, pages 559-564, 1991. J. Shamma and M. Athans. Gain-scheduling: Potential hazards and possible remedies. IEEE Control Systems Magazine, 12(3):101-107, June 1992. [Sch96] A. Schwartz. Theory and Implementation of Numerical Methods Based on Runge-Kutta Integration for Optimal Control Problems. PhD Dissertation, University of California, Berkeley, 1996. [SCH+00] M. Sznaier, J. Cloutier, R. Hull, D. Jacques, and C. Mracek. Receding horizon control Lyapunov function approach to suboptimal regulation of nonlinear systems. Journal of Guidance, Control, and Dynamics, 23(3):399-405, 2000. [SD90] M. Sznaier and M. J. Damborg. Heuristically enhanced feedback control of constrained discrete-time linear systems. Automatica, 26:521-532, 1990. [SMR99] P. Scokaert, D. Mayne, and J. Rawlings. Suboptimal model predictive cont

Journal of Global Optimization — Cited by 17 (9 self)
Abstract. This paper presents, within a unified framework, a potentially powerful canonical dual transformation method and associated generalized duality theory in nonsmooth global optimization. It is shown that by the use of this method, many nonsmooth/nonconvex constrained primal problems in R^n can be reformulated into certain smooth/convex unconstrained dual problems in R^m with m ≤ n and without duality gap, and some NP-hard concave minimization problems can be transformed into unconstrained convex minimization dual problems. The extended Lagrange duality principles proposed recently in finite deformation theory are generalized suitable for solving a large class of nonconvex and nonsmooth problems. The very interesting generalized triality theory can be used to establish nice theoretical results and to develop efficient alternative algorithms for robust computations.

[2000] Cited by 11 (3 self)
The local convergence properties of a class of primal-dual interior point methods are analyzed. These methods are designed to minimize a nonlinear, nonconvex, objective function subject to linear equality constraints and general inequalities.
They involve an inner iteration in which the log-barrier merit function is approximately minimized subject to satisfying the linear equality constraints, and an outer iteration that specifies both the decrease in the barrier parameter and the level of accuracy for the inner minimization. It is shown that, asymptotically, for each value of the barrier parameter, solving a single primal-dual linear system is enough to produce an iterate that already matches the barrier subproblem accuracy requirements. The asymptotic rate of convergence of the resulting algorithm is Q-superlinear and may be chosen arbitrarily close to quadratic. Furthermore, this rate applies componentwise. These results hold in particular for the method described by Conn, Gould, Orb...

Cited by 8 (7 self)
This paper presents a brief review and some new developments on the canonical duality theory with applications to a class of variational problems in nonconvex mechanics and global optimization. These nonconvex problems are directly related to a large class of semi-linear partial differential equations in mathematical physics including phase transitions, post-buckling of large deformed beam model, chaotic dynamics, nonlinear field theory, and superconductivity. Numerical discretizations of these equations lead to a class of very difficult global minimization problems in finite dimensional space. It is shown that by the use of the canonical dual transformation, these nonconvex constrained primal problems can be converted into certain very simple canonical dual problems. The criticality condition leads to dual algebraic equations which can be solved completely. Therefore, a complete set of solutions to these very difficult primal problems can be obtained. The extremality of these solutions are controlled by the so-called triality theory. Several examples are illustrated including the nonconvex constrained quadratic programming. Results show that these very difficult primal problems can be converted into certain simple canonical (either convex or concave) dual problems, which can be solved completely. Also some very interesting new phenomena, i.e. trio-chaos and meta-chaos, are discovered in post-buckling of nonconvex systems. The author believes that these important phenomena exist in many nonconvex dynamical systems and deserve to have a detailed study.

Cited by 6 (5 self)
This paper presents a unified critical-point theory in non-smooth, non-convex and dissipative Hamilton systems. The canonical dual/polar transformation methods and the associated bi-duality and triality theories proposed recently in non-convex variational problems are generalized into fully nonlinear dissipative dynamical systems governed by non-smooth constitutive laws and boundary conditions.
It is shown that, by this method, non-smooth and non-convex Hamilton systems can be reformulated into certain smooth dual, complementary and polar variational problems. Based on a newly proposed polar Hamiltonian, a nice bipolarity variational principle is established for three-dimensional non-smooth elastodynamical systems, and a potentially powerful complementary variational principle can be used for solving unilateral variational inequality problems governed by non-smooth boundary conditions.

Cited by 3 (1 self)
In this article the motivation for desiring an "interior" path, the concept of the complexity of solving a linear programming problem, a brief history of the developments in the area, and the status of the subject as of this writing are discussed. More complete surveys are given in Gonzaga (1991a, 1991b, 1992), Goldfarb and Todd (1989), Roos and Terlaky (1997), Roos, Terlaky and Vial (1997), Terlaky (1996), Ye (1997), Wright (1996) and Wright (1998). Generalizations to nonlinear problems are briefly discussed as well. For thorough treatment of interior point algorithms on those areas, the reader is referred to den Hertog (1993), Nesterov and Nemirovskii (1993) and Saigal, Vandenberghe and Wolkowicz (1998).

[1998] Cited by 2 (1 self)
The analytic central path for linear programming has been studied because of its desirable convergence properties. This dissertation presents a detailed study of the analytic central path under perturbation of both the right-hand side and cost vectors for a linear program. The analysis is divided into three parts: extensions of results required by the convergence analysis when the data is unperturbed to include the case of data perturbation, marginal analysis of the analytic center solution with respect to linear changes in the right-hand side, and parametric analysis of the analytic central path under simultaneous changes in both the right-hand side and cost vectors. To extend the established convergence results when the data is fixed, it is first shown that the union of the elements comprising a portion of the perturbed analytic central paths is bounded. This guarantees the existence of subsequences that converge, but these subsequences are not guaranteed to have the same limit without further restrictions on the data movement. Sufficient conditions are provided to insure that the limit is the analytic center of the limiting polytope. Furthermore, as long as the data converges and the parameter of the path is approaching zero, certain components of the analytic central path are forced to zero. Since the introduction of the analytic center to the mathematical programming community, the analytic central path has been known to be analytic in both the right-hand side and cost vectors.
However, since the objective function is a continuous, piece-wise linear function of the right-hand side, the analytic center solution is not differentiable. We show that this solution is continuous and is infinitely, continuously, one-sided differentiable. Furthermore, the analytic center sol...

[2006] Cited by 1 (1 self)
We study numerical stability for interior-point methods applied to Linear Programming, LP, and Semidefinite Programming, SDP. We analyze the difficulties inherent in current methods and present robust algorithms.

Cited by 1 (1 self)
...the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. While the advice and information in this book are believed to be true and accurate at the date of publication,

Some well-known real-number codes are DFT codes. Since these codes are cyclic, they can be used to correct erasures (errors at known positions) and detect errors, using the locator polynomial via the syndrome, with efficient algorithms. The stability of such codes is, however, very poor for burst error patterns. In such conditions, the stability of the system of equations to be solved is very poor. This amplifies the rounding errors inherent to the real number field. In order to improve the stability of real-number error-correcting codes, other types of coding matrices were considered, namely random orthogonal matrices. These types of codes have proven to be very stable, when compared to DFT codes.
However, the problem of detecting errors (when the positions of these errors are not known) with random codes was not addressed. Such codes do not possess any specific structure which could be exploited to create an efficient algorithm. In this paper, we present an efficient method to locate errors with codes based on random orthogonal matrices. Index Terms — Error correction, random matrices, real number codes, sparse solutions
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1521835","timestamp":"2014-04-21T11:19:23Z","content_type":null,"content_length":"40170","record_id":"<urn:uuid:6e10edc0-3e34-43c6-991d-0964893406a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling a One Dimensional Collision | Science Blogs | WIRED
By Rhett Allain, 11.14.13, 8:26 am

I shouldn't write this post, but I can't help myself. I'm weak. This all started with some videos I made for a lab. The idea was that the students could use video analysis to explore these collisions. Video analysis works well for cases like this since you can get the velocity of both carts at the same time. If you use a motion detector, you have to use two detectors. This is possible, but I think the video analysis is a little bit easier. Now, I know that video analysis can be a little tricky for introductory students – so I set up the video files for the students beforehand. Here is a short tutorial on how to analyze one of these collisions with Tracker Video Analysis – the best and most free video tool. Oh, I guess you want the videos also? Here is a mostly elastic collision and here is an inelastic collision. Of course, you can answer questions like: is momentum conserved? Is the kinetic energy conserved? Sure, those are fun but they are just the tip of the iceberg. Can I make a computer model that starts with the masses of the two carts and the initial velocities and produces the velocities after the collision? Sure, this would be easy if I just used conservation of momentum – but I want to model the interaction and SHOW that momentum is conserved.

Actual Data

Before I get into a model, let me first get data from the video. In this case, I will just start with the mostly elastic collision from above. Here is a plot showing the horizontal position of both carts. Note that I accidentally scaled this in centimeters instead of meters. Sorry. By looking at the slopes of these lines, I get the following data. Note that Tracker Video reports the standard error for each fitting parameter and I have reported that value also, just in case you would like to use this for a propagation of error exercise.

• Mass of blue = 254.5 g +/- 0.1 g
• Mass of red = 253.5 g +/- 0.1 g
• x-velocity of blue cart before collision = -0.00292 m/s (+/- 0.00067 m/s)
• x-velocity of blue cart after collision = -0.9738 m/s (+/- 0.0022 m/s)
• x-velocity of red cart before collision = -0.9842 m/s (+/- 0.00084 m/s)
• x-velocity of red cart after collision = 0.001136 m/s (+/- 0.00023 m/s)

I'm really surprised how well this data turned out – but who cares? I want to make a model.

Collision Model

Here is my plan: use fake springs. Suppose that there was a super short and super stiff spring on one of the carts. As the two carts get close to each other, this spring would be compressed and exert a force on both carts. One force that is acting on both carts is the key. Without this, momentum will not be conserved. Here is what that spring might look like. If the two carts are less than a distance s apart, then there is a force pushing them away from each other. This force is proportional to the compression of the spring (s minus the separation between the carts). Simple, right? It's actually not too difficult to model this with VPython. You can even make a track and carts and everything. Here is my first test. Actually, I was just playing around and these have the wrong masses, but you get the idea. Here is the plan in a little more detail.

• Break the motion into small time steps.
• If the carts are close, then calculate the force with which a spring would push them apart (using F = -k*x).
• Use this force to calculate the new momentum for both carts.
• Use this momentum (and thus the velocity) to calculate the new position of the carts.
• Repeat.

Not too hard.
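Before opening the full program (linked further down), here is a bare-bones sketch of that loop. This is my own stripped-down version with made-up values for k, s, and dt, not the actual script:

    # two carts on a 1-D track; the "fake spring" only acts when they are close
    m1, m2 = 0.2545, 0.2535          # kg (masses from the video data)
    v1, v2 = 0.0, -0.984             # m/s initial x-velocities
    x1, x2 = 0.0, 0.5                # m initial positions
    k, s, dt = 5000.0, 0.01, 1e-5    # made-up spring constant, range, time step

    t = 0.0
    while t < 1.0:
        d = x2 - x1                          # separation between the carts
        F = k * (s - d) if d < s else 0.0    # spring force magnitude
        v1 += -F * dt / m1                   # pushes cart 1 in -x ...
        v2 += +F * dt / m2                   # ... and cart 2 in +x: same force,
        x1 += v1 * dt                        #     same time, opposite directions
        x2 += v2 * dt
        t += dt

    print(v1, v2, m1*v1 + m2*v2)   # final velocities and total momentum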
This is what that looks like using VPython. Yes, it looks pretty – but I really need to make a graph. Here is a plot from my model with the same initial velocities and mass from the actual carts. I'm a little disappointed because that was too easy. I guess if you like, you could play around with the spring constant and interaction distance. Yes, I will give you a link to my code at the end. Also, I skipped a whole bunch of stuff about numerical models. Here is a quick intro to numerical calculations. Read that if you don't know what I mean by "time step".

There is no need to look at conservation of energy here. Clearly, the kinetic energy is the same before and after the collision since the red cart stops and the blue cart leaves with the same velocity the red cart started with (just about).

Inelastic Collisions

A plain spring won't work for inelastic collisions. Why? Well, as the two carts compress this virtual spring in between them, the spring will eventually push them back apart. In an inelastic collision, the two carts should stick together instead of bouncing off. I will skip all the details of momentum and collisions, but here is an introduction for you if you want it. All collisions really come down to this:

• There is an interaction between two objects.
• This interaction is some force between these objects. That means object A exerts the same magnitude force on B that B exerts on A.
• This interaction on object A lasts the EXACT SAME time as the interaction on object B.

If you look at the momentum principle, it says that the change in momentum is the net force multiplied by the time over which it acts: Δp = F Δt. So a spring between two objects exerts the SAME force for the SAME time on both objects, and this is why they have the SAME change in momentum. This is an important thing to consider as I change my springs. It has to be the same force on both objects.

When a normal spring is compressed, there is stored energy in the spring. When it uncompresses, this energy goes back into kinetic energy of the two objects. Using an ideal spring means that you would have a perfectly elastic collision. But what if I change the spring? What if the energy you get back out of the spring is less than the energy that went into compressing it? In that case, kinetic energy would not be conserved. However, as long as the force on the two objects is the same for the same time, momentum will be conserved.

Here is a plot of the spring force as a function of compression for my non-elastic collision spring. The blue curve represents a normal spring (the left half of this curve is covered by the green curve). The work done on a spring (which is also the energy stored) would be the area under this curve. So for a normal spring, the areas on the left and right have equal and opposite values. You get out the same amount of energy you put into it. Now look at my asymmetrical spring. It compresses just like the normal spring. However, on the right side the slope is not as steep (the red curve), meaning it has a lower spring constant. The area under the red curve is not the opposite of the left side. You don't get all the energy out that you put in.

In my collision model, I have a modified spring constant. If the two carts are moving apart, then the spring constant is multiplied by some constant (I used e). If e = 1, then it is a normal spring and a perfectly elastic collision. If e = 0, then the spring constant for expanding carts is zero. You would get no energy back out of this spring and it would be a perfectly inelastic collision. Here is a plot of the horizontal position of the two carts. That seems to work.
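In code, that asymmetric spring is a one-line change to the force calculation in the sketch above (again, my paraphrase of the idea rather than the exact line from the linked program):

    # normal spring while approaching; weakened by a factor e while separating
    # (v1, v2, d, k, s come from the earlier sketch)
    separating = (v2 - v1) > 0
    k_eff = e * k if separating else k
    F = k_eff * (s - d) if d < s else 0.0

With e = 1 this reduces to the elastic case; with e = 0 the carts get no push back after maximum compression, so they leave with a common velocity, which is exactly a perfectly inelastic collision.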
You have to remember, this is a model. This meme best shows my feelings about models. What makes a nice model? A model is good if it agrees with the data. That's pretty much it.

There are so many things left to look at. Instead of making this one GIANT blog post, I will let you finish the analysis. First, here is the code – https://gist.github.com/rhettallain/7437303. I have commented out the parts that make a graph with matplotlib. I'm not sure which is the best way to run this calculation. If you use the vpython module, you will get a nice visual representation of the collision along with a graph. If you turn that stuff off, the graph looks a little nicer. Really, you could do this without vpython at all. However, I really like to at least import some things – like: vector, mag, norm.

Here is your homework assignment.

• Run the program. Yes, that is an actual thing you need to do. Who knows if it even works. You always want to have a working version before you start messing around with stuff.
• Now you can mess with stuff. First just try different collisions. Try carts with different masses. Try both carts initially moving. Try changing between elastic and inelastic. For each case, you can have the program give you the final speeds of the carts and you can compare this to the expected final speed (work it out on paper). If it always works, you could use this program to check the calculations on your other physics homework.
• Play with the following parameters in the program: k (the spring constant), s (the length of this interacting spring), and dt (the time step). How big of an impact on the final answers does each of these have?
• What about the value of e? How is this variable related to the coefficient of restitution?
• Look at a perfectly elastic collision. Make a plot of percent error in the final momentum as a function of the spring constant. The cool way to do this would be to define the whole calculation as a function in python. If you don't want to be cool, you could just pick like 5 different spring constants and run the program 5 different times. Choose to be cool.
• What about a 2 dimensional (or 3) collision? Could you change this program to work in more than one dimension? Hint: the answer is yes and I have already done this.

Let me leave you with one final question to consider (that I will probably answer in a future blog post). What if I was moving along with one of the carts, so that I measured the velocities of these carts from this moving reference frame? Would momentum still be conserved? Why? Would kinetic energy be conserved in an elastic collision? Why?
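If you want a head start on that last one, the measured velocities above are enough for a quick numerical check (my own addition, using the "mostly elastic" data):

    # total momentum before/after, in the lab frame and in a frame moving
    # with the red cart's initial velocity
    m_b, m_r = 0.2545, 0.2535                  # kg
    vb0, vb1 = -0.00292, -0.9738               # blue cart before/after (m/s)
    vr0, vr1 = -0.9842, 0.001136               # red cart before/after (m/s)

    for u in (0.0, vr0):                       # u = velocity of the frame
        p_before = m_b*(vb0 - u) + m_r*(vr0 - u)
        p_after  = m_b*(vb1 - u) + m_r*(vr1 - u)
        print(round(p_before, 4), round(p_after, 4))

In both frames, the before and after values agree to about a percent, which is the level of the experimental uncertainty.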
{"url":"http://www.wired.com/2013/11/modeling-a-one-dimensional-collision/","timestamp":"2014-04-19T15:38:02Z","content_type":null,"content_length":"112579","record_id":"<urn:uuid:f0d5d39a-ef23-4735-8852-eb8b6d9dcd82>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Newbie: What Mathematics background required [Archive] - OpenGL Discussion and Help Forums

07-23-2001, 05:06 PM
Hello. I want to start learning OpenGL, but I want to know what mathematics background is required for 3D graphics programming, and whether there are any online mathematics books available to read or download. Secondly, as I live in a country where books on such material are not easily available, I would like to know if the Red Book is available online, and where. Last of all, I know C++ and Java to some extent; what depth of language knowledge is required? I will be thankful to you for this information. :)
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-129773.html","timestamp":"2014-04-21T07:27:41Z","content_type":null,"content_length":"13024","record_id":"<urn:uuid:2d8120c1-2952-4221-9f89-b2f1fd260aa3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: expression rewrites - how to?
From: Dave Gillespie <synaptx!thymus!daveg@uunet.UU.NET>
Newsgroups: comp.compilers
Keywords: translator, parse
Organization: Compilers Central
References: 94-02-011
Date: Thu, 3 Feb 1994 00:34:58 GMT

Gary Funck writes:
> The source language (Pascal) and the target language (Ada) have
> differing operator priorities (and associativity). The translator
> must rewrite expressions from the source language into those
> of the target language, while preserving the order of evaluation
> expressed in the source language.

Our moderator outlined the basic answer in his postscript to your message, but I thought I'd contribute some more background and a few suggestions.

> An additional constraint that is desirable is to keep the
> parenthesization present in the original source text, if this
> parenthesization is adequate to preserve the order of evaluation
> when translated into the target language. Additionally, the
> expression should be rewritten with minimal parenthesization.

These two sentences are at odds with each other, so you'll have to figure out what exactly you mean by them. Basically, a good translator has to worry about the following four steps:

1. Add parentheses when Ada requires them but Pascal doesn't.
2. Remove parentheses which Pascal requires but Ada doesn't.
3. Add optional parentheses to enhance clarity.
4. Preserve unnecessary parentheses which were there for clarity.

Only step 1 is necessary for a correct translation. In my Pascal to C translator "p2c" I found steps 2 and 3 to be a very good idea; I don't do step 4 in p2c but I probably should. (I've had one or two comments about this from p2c users in the past few years, but it's interesting that none of them considered step 4 to be vital.)

I also faced this problem with Calc, a calculator/algebra system for GNU Emacs that supports several "language modes" for algebraic formulas. It can both parse and display in C, Pascal, FORTRAN, TeX, and a few other languages. The "translation" happens implicitly when you enter a formula in one language mode and then switch to another language mode for display.

I found the operator precedence parsing algorithm to work well in both programs. Calc uses an explicit table-driven recursive operator precedence parser. The table keeps a "left" precedence and a "right" precedence for each operator. Calc uses the same table for both parsing and formatting expressions. The parsing algorithm looks like this; you ought to find something like it in any compiler book:

    parse_expr(prec):
        parse a "factor", e.g., variable name or literal;
        while (next-operator-left-precedence >= prec) {
            skip and save operator token;
            call parse_expr(the-operator-right-precedence);
        }

For left associative operators, the right-precedence is one greater than the left-precedence; for right associating operators, the left-precedence is one greater. The display algorithm in Calc looks like this:

    format_expr(expr, prec):
        if (expr is a factor)
            return formatted-factor;
        else if (prec > min(expr-left-prec, expr-right-prec))
            return "(" + format_expr(expr, 0) + ")";
        return format_expr(left-subexpr, expr-left-prec)
             + operator
             + format_expr(right-subexpr, expr-right-prec);

P2c uses an ad-hoc algorithm for parsing, but its formatting algorithm is essentially like Calc's. The above routines will give you steps 1 and 2 of my outline but not 3 or 4.
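To see the formatter in action, here is a small runnable transcription in Python (my own illustration of the scheme described above, with a made-up precedence table; it is not the actual Calc code):

    # (left, right) precedences; left-associative ops get right = left + 1
    PREC = {'+': (10, 11), '-': (10, 11), '*': (20, 21), '/': (20, 21)}

    def format_expr(e, prec=0):
        if isinstance(e, str):                 # a "factor"
            return e
        op, left, right = e                    # binary op encoded as a tuple
        lp, rp = PREC[op]
        if prec > min(lp, rp):                 # context demands parentheses
            return '(' + format_expr(e, 0) + ')'
        return format_expr(left, lp) + ' ' + op + ' ' + format_expr(right, rp)

    print(format_expr(('-', ('-', 'a', 'b'), 'c')))   # a - b - c
    print(format_expr(('-', 'a', ('-', 'b', 'c'))))   # a - (b - c)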
I've found it helps quite a bit to add some unnecessary parentheses as described in step 3. For example, p2c adds the unnecessary parens in "(a < b) == (c < d)" by default, and optionally also parenthesizes "(a * b) + (c * d)" and "(a && b) || (c && d)". P2c uses ad-hoc heuristics to decide when to add unnecessary parens; I worked out the heuristics just by trying lots of things and seeing what looked "nice," although most of them would be familiar to any experienced C programmer. In this sense, you're actually pretty lucky going from Pascal to Ada, since the basic precedence layouts are similar even if they differ in detail.

For step 4, you want to include explicit parentheses as a kind of pseudo-operator. Be careful not to generate two sets of parentheses by accident. Also, there is the tricky question of what to do about parentheses that *were* necessary in Pascal but are not in Ada. Consider the expression "(a = b) and (c = d)", in which the parens are required. Translating to Ada, "a = b and c = d" is sufficient. I would be inclined to remove the parens in this case and then allow step 3 to add them back in if the extra-parens option is turned on. But you might be able to kill two birds with one stone by preserving all parens, even if they were used "under duress" in the Pascal. This strategy might preserve too many undesirable parentheses, but I'll bet Pascal and Ada are similar enough for you to get away with it.

Another case to watch out for, taking Pascal-to-C as an example, is "if (a = b) then ...", which contains unnecessary parentheses that should *not* lead to "if ((a == b)) ...". My C code generator emits the parens around an "if" condition separately, not as part of the expression generator. If I had p2c preserve parens, I'd have to do something special to avoid this pitfall. I don't remember my Ada syntax well enough to say whether or not this kind of thing crops up with Ada.

If other parts of your translator actually examine and modify expression trees, watch out for funny interactions with explicit parentheses. For example, if the array A is 1-based in Pascal but 0-based in C, p2c translates "A[i+1]" to "A[i]", not "A[i+1-1]". If for some reason the Pascal code said "A[(i+1)]", you'd have to ask whether it's better to produce "A[(i+1)-1]" or "A[i]"; you probably *don't* want to produce "A[(i)]". This example is kind of silly since people don't write [( ... )], but there are plenty of other situations in p2c where this kind of thing could plausibly happen. Again, Ada is much more Pascal-like than C, so maybe this will never come up for you.

-- Dave
{"url":"http://compilers.iecc.com/comparch/article/94-02-022","timestamp":"2014-04-16T22:08:18Z","content_type":null,"content_length":"11126","record_id":"<urn:uuid:ac7683c1-b812-42cf-8403-a7e47b28e325>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Anderson Localization Transition: Introductory Training Course

Theme of training course: When a single-particle quantum Hamiltonian system is subjected to a disorder potential, it is expected on physical grounds that a transition from localised to extended energy eigenstates takes place as a function of the disorder strength. Such a transition should be accompanied by a characteristic change in the energy spectrum: if the disorder is large enough for Anderson localisation to occur, the random Schrödinger operator is known to have dense point spectrum; on the other hand, if the disorder is weak and the space dimension larger than d = 2, then one expects the existence of absolutely continuous spectrum. Giving a mathematical proof of this conjectured scenario, and clarifying the nature of the spectrum and the eigenfunctions at the transition point or in d = 2, remains an important and outstanding problem of mathematical physics. Many features of the scenario are believed to extend to a broader class of quantum systems including, most prominently, those exhibiting transitions of quantum Hall type.

This training course is mainly directed at researchers in early stages of their careers. Its aim is to provide the participants with an introduction to the subject, by exposing them to ideas, terminology and analytical techniques of the rigorous as well as the heuristic kind. Methods used in the study of Anderson localisation by mathematicians and by theoretical physicists will be reviewed by experts from both communities. Reviewing the state of the art for both disciplines will hopefully help to bridge the existing language gap between the communities and create an environment conducive to fruitful collaboration between physicists and mathematicians during the rest of the program.

Tentative list of topics to be covered:
• phenomenology of Anderson localisation (T. Spencer)
• introduction to the spectral theory of random Schrödinger operators (L. Pastur)
• introduction to supermatrix techniques and the nonlinear σ-model (Y. Fyodorov)
• rigorous techniques for 1D and quasi 1D systems (I. Goldsheid)
• rigorous methods in the statistical mechanics of phase transitions (D. Brydges)
• critical phenomena in two-dimensional disordered systems (A. Ludwig)
{"url":"http://www.newton.ac.uk/programmes/MPA/mpaw01.html","timestamp":"2014-04-20T23:43:05Z","content_type":null,"content_length":"4787","record_id":"<urn:uuid:994236ff-5644-4f9f-bf25-5d1d4d34cfde>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Highland Village, TX SAT Math Tutor
Find a Highland Village, TX SAT Math Tutor

...Then I attended public school at Downing Middle School and Marcus High School in Flower Mound, TX. For many years, I taught children's Sunday school at my church. I played the flute competitively for 6 years and now enjoy it as a hobby.
7 Subjects: including SAT math, algebra 1, prealgebra, algebra 2

...There are some topics in AP courses such as details of bonds and thermo-chemistry for which I do not have adequate knowledge. For Hubble observation statistics I used various Excel features such as writing cell formulas, generating plots, and doing import from/export to text files. I still use the formula capability often.
15 Subjects: including SAT math, chemistry, calculus, statistics

...Prior to that I managed and tutored at an SAT ACT company in Plano. Currently, I tutor and work part time as an instructor in GRE GMAT Quantitative at UT Dallas while also exploring careers in educational technology. I hold a B Sc in Electrical Engineering and have worked in the telecom and tec...
11 Subjects: including SAT math, geometry, GRE, algebra 1

...I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced coursework, and I am well-versed in the current research regarding learning, memory, and instructional practices. I utilize this knowledge to identify underlying process...
39 Subjects: including SAT math, chemistry, English, writing

...Please do contact me if you are struggling in school or just need some extra help. I will be more than happy to give you or your child the studying help you need. BS Biology Emory University 2007 MA Biology Washington University in St.
28 Subjects: including SAT math, chemistry, English, biology
{"url":"http://www.purplemath.com/Highland_Village_TX_SAT_math_tutors.php","timestamp":"2014-04-20T23:35:11Z","content_type":null,"content_length":"24391","record_id":"<urn:uuid:05b8390e-8b54-4cee-9ef4-21de65d5d175>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Baseball Batting Statistics Explained

Baseball is arguably the most statistic-driven sport. Indeed, virtually everything that happens during a game can be recorded on the score sheet and categorized numerically. In recent years, fans and analysts have utilized technological advances to create new, often more complex ways to evaluate player performance. The 21st century has ushered in what could legitimately be deemed a statistical revolution. And in order for us as players and fans to keep up with the trends of the sport, we must acquaint ourselves with some of these brand new stats. What follows is an overview of some of baseball's most prevalent offensive statistics. To keep this list more concise, it will not go over "counting" stats such as runs, hits, doubles, home runs, walks, strikeouts, stolen bases, and sacrifices. These statistics are certainly still significant, but neither their origins nor their means of evaluating performance require any further explanation.

Standard Batting Stats

If you watch baseball regularly, chances are you've come across the following five statistics. With the exception of OPS (which has been around less than 30 years), they're all ingrained in the baseball vernacular, having been used since the game's early days.

Batting Average

Calculated using the simple formula of hits divided by at-bats, batting average has always been the standard for measuring offensive success. In the big leagues, the player with the highest annual average is awarded with the "batting title." The drawbacks of batting average are that it doesn't credit a hitter for reaching base via a walk, nor does it account for run production or extra-base hits. Furthermore, many statisticians believe that constructing an average based solely on hits is not sufficient to accurately gauge batting prowess.

On-Base Percentage (OBP)

More inclusive than batting average, OBP measures the frequency at which a player reaches base, using the following formula:

• OBP = (hits + bases on balls + hit by pitch) ÷ (at bats + bases on balls + hit by pitch + sacrifice flies)

Many people place more value in OBP than batting average. This is because, ultimately, a hitter's individual goal is to reach base. In this way, on-base percentage more fully reflects a batter's success rate.

Runs Batted In (RBI)

[Sidebar: Faces of Baseball ... Miguel Cabrera. Team: Detroit Tigers. Position: First Base. Height: 6'4". Date of Birth: 4-18-1983. Getting to know Miguel: Few players fill up a stat sheet like Tigers' first baseman, Miguel Cabrera. The slugger from Venezuela is just 27 years old, but has posted at least 25 home runs and 100 RBI in each of the past seven seasons. A career .314 hitter, five-time All-Star, and perennial Triple Crown threat, Cabrera's career OPS of .937 is tenth among active players.]

If you get a base hit, force out, fly out, walk, or hit by pitch that directly results in a run being scored, you are credited with a run batted in. For years, RBI was one of the most important offensive stats, which is understandable, since the ultimate goal in baseball is to score runs. Recently, with the advent of Sabermetrics, RBI's value as a statistic has taken a hit. Statisticians argue that RBI is far too contingent upon factors that a hitter cannot control, most importantly, whether runners are on base when a hitter bats.
While batters that accumulate a lot of RBI are undoubtedly successful run-producers, it's also true that the players with the most RBI are typically the players with the most RBI opportunities.

Slugging Percentage

Slugging percentage basically gauges a player's power by measuring all of the bases accumulated via base hits. In order to have a high slugging percentage, a batter must not only be a successful hitter, he must also hit frequently for extra bases. The formula divides total bases by at-bats:

• Slugging percentage = [singles + (doubles × 2) + (triples × 3) + (home runs × 4)] ÷ at bats

In many instances, the players who hit the most home runs will also be among the leaders in slugging percentage. It doesn't necessarily measure how good of a hitter you are, as much as it measures how dangerous of a hitter you are.

On-Base Plus Slugging (OPS)

This stat was first conceived in the 1980s, and is considered by some to be the first Sabermetric statistic. OPS measures exactly what its name suggests: on-base percentage plus slugging percentage. Essentially, OPS takes two useful stats and puts them into one category. It measures a player's ability to get on base, as well as his ability to hit for power. Many people believe that OPS is the most accurate and comprehensive indicator of the best hitters.

Sabermetric Batting Stats

The term "Sabermetrics" is derived from the acronym SABR (the Society for American Baseball Research). The concept was pioneered most famously by stat-innovator Bill James, who defined Sabermetrics as, "the search for objective knowledge about baseball." In this spirit, the following statistics possess varying degrees of complexity, but all are designed to precisely reflect a player's measurable value to his team. There are dozens of Sabermetric stats in existence, but the five on this brief list are among the most accessible and comprehensive.

Runs Created

[Sidebar: Active Career Leaders in Runs Created: 1. Manny Ramirez (1,993); 2. Alex Rodriguez (1,970); 3. Jim Thome (1,880); 4. Chipper Jones (1,813); 5. Todd Helton (1,649)]

James developed this stat in order to quantify the number of runs a hitter directly creates for his team over the course of a season. His rationale was that a hitter's job isn't to get base hits and draw walks, but to put runs on the scoreboard. And yet, no single stat had previously been able to measure a player's individual impact on team run-production. James originally used a simple but flawed formula that has since been refined (and made much more complex):

• Runs created = [(hits + bases on balls + hit by pitch – caught stealing – grounded into double play) × (total bases + .26[bases on balls – intentional bases on balls + hit by pitch] + .52[sacrifice hits + sacrifice flies + stolen bases])] ÷ (at bats + bases on balls + hit by pitch + sacrifice hits + sacrifice flies)

Isolated Power

Isolated Power is a measure of a hitter's raw power, essentially reflecting extra bases per at-bat. Those players who accumulate a lot of total bases also tend to post high Isolated Power numbers. It can be found using either of two relatively simple formulas:

• Isolated power = slugging percentage – batting average
• Isolated power = [doubles + (triples × 2) + (home runs × 3)] ÷ at bats

Secondary Average

This stat was created as a way to look at a player's extra bases gained, independent of, and complementary to, standard batting average.
It has a much greater variance than batting average; the leading MLB hitters are typically up near .500. It is calculated with the following formula:

• Secondary average = (total bases – hits + bases on balls + stolen bases – caught stealing) ÷ at bats

Batting Average on Balls in Play (BABIP)

BABIP is a statistic measuring the percentage of plate appearances ending with a batted ball in play (which excludes home runs, walks, and strikeouts) for which the batter is credited with a hit. In other words, when Player-X hits the ball in play, how often does he get a hit? Since a particularly high or low BABIP is usually difficult to maintain, the stat is often used to explain fluky seasons by hitters. To some extent, it measures how lucky a player is getting when he hits the ball in play. BABIP uses the following formula:

• BABIP = (hits – home runs) ÷ (at bats – strikeouts – home runs + sacrifice flies)

Value over Replacement Player (VORP)

Instead of looking at Runs Created in a vacuum, VORP measures player performance against the "replacement level." The term "replacement level" is meant to reflect a level of ability that is easily available to any team in a given league. In order to find Player-X's VORP, follow these steps:

• Multiply the league's average runs per out by Player-X's total outs made. This gives you the number of runs a league-average caliber player would produce in the same playing time.
• "Replacement level" is defined as 80 percent of league-average (75 percent for catchers, because their defense matters more; 85 percent for first basemen and designated hitters, because their defense matters less). Therefore, multiply that league-average number of runs by 0.8 (or 0.75 or 0.85 if applicable). This is the number of runs you could expect a "replacement player" to produce.
• Subtract the replacement player's Runs Created from Player-X's actual Runs Created, and the result is VORP.

Embrace the Revolution

Although they're relatively new to the world of baseball stat analysis, the statistics on this list have already been widely adopted as performance indicators by scouts, coaches, sportswriters, and statisticians everywhere. Now that you have a basic introduction, hopefully you don't feel too overwhelmed by the Sabermetric Revolution. Basically, the goal is to evaluate baseball as objectively as possible. And although it's never a good idea to worry about your stats as a player, keeping up with the trends of the game allows us to be more well-informed. Plus, next time you're struggling at the plate, you can impress your coach by blaming it on a low BABIP!
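The simpler formulas above translate directly into code. Here is a quick sketch (my own, not from the article) with an invented stat line to show the arithmetic:

    def obp(h, bb, hbp, ab, sf):
        return (h + bb + hbp) / (ab + bb + hbp + sf)

    def slg(singles, doubles, triples, hr, ab):
        return (singles + 2*doubles + 3*triples + 4*hr) / ab

    def babip(h, hr, ab, so, sf):
        return (h - hr) / (ab - so - hr + sf)

    # hypothetical season: 600 AB, 180 H (30 2B, 5 3B, 25 HR),
    # 60 BB, 5 HBP, 5 SF, 100 SO
    singles = 180 - 30 - 5 - 25
    o, s = obp(180, 60, 5, 600, 5), slg(singles, 30, 5, 25, 600)
    print(round(o, 3), round(s, 3), round(o + s, 3))   # OBP, SLG, OPS
    print(round(babip(180, 25, 600, 100, 5), 3))       # BABIP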
{"url":"http://baseball.isport.com/baseball-guides/baseball-batting-statistics-explained","timestamp":"2014-04-21T14:41:18Z","content_type":null,"content_length":"115974","record_id":"<urn:uuid:d2c57bf6-5413-4d29-8681-65f375948ecf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Fibonacci numbers

Recursion is the concept of something being defined in terms of itself. It sounds like it's circular - but it's not necessarily so. A circular definition, like defining a rose as a rose, is termed infinite recursion. But some recursive definitions aren't circular: They have a base case, where the recursive definition no longer applies, and the definition for any other case eventually reaches the base case.

In mathematics, things are often defined recursively. For example, the Fibonacci numbers are often defined recursively. The Fibonacci numbers are defined as the sequence beginning with two 1's, and where each succeeding number in the sequence is the sum of the two preceding numbers.

1 1 2 3 5 8 13 21 34 55 ...

We obtained 8 in the above sequence by summing the previous two numbers (3 and 5). A formal mathematical definition would define this using mathematical symbols.

    F(n) = 1                     if n = 0 or n = 1
    F(n) = F(n-1) + F(n-2)       otherwise

This makes the recursiveness of the definition obvious: Notice how we've defined F(n) in terms of F (namely, F(n-1) + F(n-2)). (Incidentally, such recursive definitions of functions are called recurrences.)

In computer programming, also, it's often convenient to define something recursively. And Java allows you to do this - all you have to do is to use the function within its own definition. The following program computes and prints a Fibonacci number requested by the user.

    import csbsju.cs160.*;

    public class Fibonacci {
        public static void main(String[] args) {
            IO.println("Which Fibonacci? ");
            IO.println("It is " + fib(IO.readInt()));
        }

        private static int fib(int n) {
            if(n <= 1)
                return 1;
            else
                return fib(n - 1) + fib(n - 2);
        }
    }

Notice how the recursive function has its base case - in this example, the base case is when n is at most 1, when the program returns 1 without any further recursive calls. Recursive programs must always have a base case! Novices tend to forget to include a base case when programming.

As a programmer, it's important that you understand how this works within the computer. The computer will go through the following process to compute fib(3):

3 exceeds 1, so I need to compute and return fib(3 - 1) + fib(3 - 2). To compute this, I first need to compute fib(2).
    2 exceeds 1, so I need to compute and return fib(2 - 1) + fib(2 - 2). To compute this, I first need to compute fib(1).
        1 is less than or equal to 1, so I need to return 1.
    Now that I know fib(1), I need to compute fib(0).
        0 is less than or equal to 1, so I need to return 1.
    I now know fib(2 - 1) + fib(2 - 2) = 1 + 1 = 2. I return this.
Now that I know fib(2) is 2, I need to compute fib(1).
    1 is less than or equal to 1, so I need to return 1.
I now know fib(3 - 1) + fib(3 - 2) = 2 + 1 = 3. I return this.

It helps to draw a recursion tree to illustrate the different recursive calls entered in the process of the computation. A recursion tree has a node for each call of the method, connected by a line to the method call that called it. For the example of fib(3), the recursion tree would be as follows.

                fib(3)
               /      \
          fib(2)      fib(1)
          /    \
     fib(1)   fib(0)

For fib(5), the following is the recursion tree.

    fib(5)
    ├── fib(4)
    │   ├── fib(3)
    │   │   ├── fib(2)
    │   │   │   ├── fib(1)
    │   │   │   └── fib(0)
    │   │   └── fib(1)
    │   └── fib(2)
    │       ├── fib(1)
    │       └── fib(0)
    └── fib(3)
        ├── fib(2)
        │   ├── fib(1)
        │   └── fib(0)
        └── fib(1)
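A quick way to confirm the size of these trees, shown in Python for brevity (my own addition, not part of the original notes): the node count obeys the same kind of recurrence as the function itself.

    def calls(n):
        """Number of nodes in the recursion tree for fib(n)."""
        return 1 if n <= 1 else 1 + calls(n - 1) + calls(n - 2)

    print(calls(3), calls(5))   # 5 and 15, matching the trees above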
{"url":"http://ozark.hendrix.edu/~burch/csbsju/cs/160/notes/29/0.html","timestamp":"2014-04-21T02:00:17Z","content_type":null,"content_length":"4298","record_id":"<urn:uuid:412312bc-8501-4c2c-94bd-9811e8ebb37a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
November 2nd 2008, 03:48 AM #1
Let A, B, C be sets. (i) Which of the following are always true?
1) A \ (B \ C) = (A \ B) U C
2) A \ (B U C) = (A \ B) \ C
3) A \ (B <intersection> C) = (A \ B) U (A \ C)
I have the first one as not always true and the second and third as always true. I just wanted someone to check this because I think I'm doing them wrong. Is there a difference between A \ B \ C and (A \ B) \ C? Sorry, I don't know how to get unions and intersections to work, so I used a U for the union and wrote <intersection> for intersections.

Reply: You can rewrite $A \backslash B = A \cap B^c$. It's \cup for union and \cap for intersection. As for A \ B \ C, I don't think this is a correct writing, unless you can prove (A\B)\C = A\(B\C). Note that $A \backslash (B \backslash C) = A \cap (B \cap C^c)^c = A \cap (B^c \cup C)$.

Reply: But surely if the set is A not B not C then there would just be A regardless of where you put brackets? Regarding $A \backslash B = A \cap B^c$: why? Where did this come from? I'm trying to think of it as a Venn diagram but I can't see what it would look like. And regarding $A \backslash (B \backslash C) = A \cap (B \cap C^c)^c = A \cap (B^c \cup C)$: ohhh, so A \ B \ C is not the same as (A \ B) \ C. That seems weird! Does "\" mean "not"? My foundations lecturer introduced "¬" that apparently also means not. It seems strange that he would introduce two different symbols that apparently mean the same thing! (He also did the same for ^ and $\cap$, and its counterpart with the union. Is there a difference between all this notation???)

Reply: Hmm well, it's the definition oO. How do you define $A \backslash B$? It's the set of elements that are in A but not in B, that is to say they are in A and in $B^c$.

Reply: Hang on, I think I'm missing something. "\" means "not", doesn't it? For example, "A \ B" means "A not B". I'm pretty sure that's what our lecturer told us. =S

Reply: Hmmm, "not A" usually stands for $A^c$; it affects only one set. Here, \ is a binary operator, that is to say it deals with two sets. If you prefer, it's "A and not B", and we can refer to elements that are in A but not in B.

Reply: Okay then. Going back to the first question: A \ (B \ C) = A \ $(B \cap C^d) \neq$ (A \ B) $\cup C$ (I used d instead of c since C was already a set). The second question: need to show A \ $(B \cup C)$ = (A \ B) \ C, and (A \ B) \ C = $A \cap B^d$ \ C. Would I need another identity to show that it's true? (If it's true; I think it is.) Truthfully, I would much rather remember identities and try to get one side to equal the other side. Venn diagrams are rather limited.

Reply: #3 is correct.
$\begin{array}{rcl}
A \backslash (B \cap C) & = & A \cap (B \cap C)^c \\
 & = & A \cap (B^c \cup C^c) \\
 & = & (A \cap B^c) \cup (A \cap C^c) \\
 & = & (A \backslash B) \cup (A \backslash C)
\end{array}$

This set operation is known as setminus. There are two notations in use: $A \backslash C = A - C$.
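If you like, you can also sanity-check the three identities by brute force with Python sets (my own addition; these small hypothetical sets are enough to expose #1):

    A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

    print(A - (B - C) == (A - B) | C)        # False: identity 1 fails here
    print(A - (B | C) == (A - B) - C)        # True
    print(A - (B & C) == (A - B) | (A - C))  # True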
{"url":"http://mathhelpforum.com/discrete-math/57035-sets.html","timestamp":"2014-04-21T05:53:48Z","content_type":null,"content_length":"64761","record_id":"<urn:uuid:7d6928ff-b9d5-4d9f-a397-a6a870137300>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Divide by zero tweak

I am sure just about everyone here has run into divide by zero issues. I have a standard way of handling divide by zero problems that I have found on this, and other, sites. Here is code I found somewhere on a blog or forum, and began using:

    Public Function DivideBy(ByVal Exp1, ByVal Exp2)
        If Exp2 = 0 Then
            DivideBy = 0
        Else
            DivideBy = Exp1 / Exp2
        End If
    End Function

This code is very easy to call from the expression builder, and provides a very consistent way to handle divide by zero errors. I recently was working on a report where I was using a reference to a report textbox as the denominator. That textbox, however, had some additional data in the expression that returned an N/A or something similar when the expression had some undesired results due to some bad data. This was fairly easy to deal with right in the expression, but I thought I might have to deal with this again and set out to modify my function. I ran into a slight problem in that the code window is not aware of all .NET namespaces, so I ended up having to use the full namespace to get the function working. So here is my simple tweak that will handle both a divide by zero and a non-numeric component in the equation.

    Public Function DivideBy(ByVal Exp1, ByVal Exp2)
        ' Guard against non-numeric inputs as well as a zero denominator
        If Microsoft.VisualBasic.IsNumeric(Exp1) And Microsoft.VisualBasic.IsNumeric(Exp2) Then
            If Exp2 = 0 Then
                DivideBy = 0
            Else
                DivideBy = Exp1 / Exp2
            End If
        Else
            DivideBy = "N/A"
        End If
    End Function

Comments:

Very good Daniel, there is more than one way to skin this cat, but I am still searching for the best mathematical and/or philosophical answer to what it means to divide by zero. Check out http://

Thanks for posting this Daniel. I meant to write a blog on this a long time ago, but it completely slipped my mind. Now we can find it here on BIDN. :)

I was just posting a link to this blog and I realized I never said how to embed the code or how to use it. So, take the public function at the bottom of the blog and paste it into the Report >> Report Properties >> Code tab window. Then, to call the code from an expression, you use code.DivideBy(numerator, denominator).
{"url":"http://www.bidn.com/blogs/Daniel/ssas/1245/divide-by-zero-tweak","timestamp":"2014-04-20T13:19:35Z","content_type":null,"content_length":"102407","record_id":"<urn:uuid:90f0f242-5f69-4deb-bcad-96bbe985ab53>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
The Worlds of David Darling
Bookshop: Space-time, relativity, and quantum gravity

GENERAL LEVEL

Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. Lisa Randall
The concept of additional spatial dimensions is as far from intuitive as any idea can be. Indeed, although Harvard physicist Randall does a very nice job of explaining – often deftly through the use of creative analogies – how our universe may have many unseen dimensions, readers' heads are likely to be swimming by the end of the book. Randall works hard to make her astoundingly complex material understandable...
Publishers Weekly

Gravity's Arc: The Story of Gravity from Aristotle to Einstein and Beyond. David Darling
"From Aristotle to Einstein and beyond, Gravity's Arc is a lucid and beautifully written exposition of the still mysterious force that holds our universe together – and the even more mysterious dark twin that may blow it apart."
Joshua Gilder, author of Heavenly Intrigue: Johannes Kepler, Tycho Brahe, and the Murder Behind One of History's Greatest Scientific Discoveries

The Great Beyond: Higher Dimensions, Parallel Universes and the Extraordinary Search for a Theory of Everything. Paul Halpern
Halpern explains that over the past century gravity has been the shadow flickering on the walls of the cave hinting at other realms. Why is it so weak compared with electromagnetism? With string theory, and its successor, M-theory, physicists speculate that gravity "leaks" back and forth between our reality, an 11-dimensional "brane" (or membrane) and other branes, perhaps as close as a millimeter away.
Publishers Weekly

The Fabric of the Cosmos: Space, Time, and the Texture of Reality. Brian Greene
[Greene's] driving question in The Fabric of the Cosmos is fundamental: "What is reality?" Over sixteen chapters, he traces the evolving human understanding of the substrate of the universe, from classical physics to ten-dimensional M-Theory.
Amazon.com

Einstein's Cosmos: How Albert Einstein's Vision Transformed Our Understanding of Space and Time. Michio Kaku
There are many Einstein biographies out there, and I've read a number of them. In my opinion, this is one of the most concise and readable ones. The writing is clear and engaging, thus making the book difficult to put down. Einstein's theories are clearly explained for anyone to understand, amidst the main highlights of his life and times. I recommend this book to a wide audience, from science buffs to Einstein fans...
Amazon reader review

Black Holes and Time Warps: Einstein's Outrageous Legacy. Kip Thorne
Thorne, the Feynman Professor of Theoretical Physics at CalTech, here offers an accessible, deftly illustrated history of curved spacetime. Covering developments from Einstein to Hawking, he takes his readers to the very edge of theoretical physics: straight through wormholes – and maybe back again – past hyperspace, "hairless" wormholes and quantum foam to the leading questions that drive quantum physics.
Publishers Weekly

Relativity Visualized. Lewis Epstein
Epstein is the best teacher of this difficult subject you will ever encounter. His book breaks new ground in relating space, time, and mass in a geometrical way that is – at last – simple to visualize. Albert Einstein's own book on relativity, though a model of clarity, does not provide this all-important geometric model of four-dimensional space/time. Epstein has understood everything that is difficult for us about relativity at a gut level, and thoroughly demystifies it, without ever making the kind of deep conceptual errors to which authors of "popular" books on physics are apt to be prone.
Amazon reader review

Spacetime Physics: Introduction to Special Relativity. Edwin Taylor & John Wheeler
I used this book to begin my mathematical study of Relativity (and am now working my way through the authors' next book, Exploring Black Holes). This book is an excellent introduction into the field from a mathematical perspective, with an excellent presentation, interesting problem sets, and solutions for the odd-numbered problems in the back (which is great for learning on your own). The prose is highly readable, and uses very accessible terminology to help the reader understand "what is really going on."
Amazon reader review

Exploring Black Holes: Introduction to General Relativity. Edwin Taylor & John Wheeler
Taylor (MIT) and Wheeler (Princeton) use metrics rather than Einstein's field equations to introduce students with modest mathematics backgrounds (elementary calculus and algebra) to concepts of relativity. Focusing always on encouraging curiosity (the inside cover contains a long list of questions such as "what does it feel like to fall toward a black hole"), the authors provide tools for answering questions and carrying out calculations about curved spacetime near Earth and black holes.
Book News
{"url":"http://www.daviddarling.info/bookshop/space.html","timestamp":"2014-04-16T04:16:16Z","content_type":null,"content_length":"13510","record_id":"<urn:uuid:1b24547f-4f38-4b24-8c1b-b08eac8bd7b4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
1 Oct 2008 13:22
Some typos in the libproj4 manual
Mikael Rittri <Mikael.Rittri <at> carmenta.com>
2008-10-01 11:22:08 GMT

I think I have found some typos in the libproj4 manual.

Page 96, under the headline "Oblique Stereographic using intermediate sphere":

* Equation (7.36) says "k)0", but the parenthesis does not match anything. I think it should have been "k_0" in the LaTeX source, so that the text would be "k0", with the zero as a subscript.

* Equation (7.37) reads chi = ( cos c ... ) but something is missing. I looked in Snyder's book, and I think it should be chi = asin( cos c ... ). (chi is that Greek letter that looks like a tall and narrow x, but isn't an x.)

Page 78, table 6.1, Lambert Conformal Conic: in the general formula for rho, there is a numerator

    tan^n( pi/4 + phi1/2 )

but in the case phi1 = phi2, the analogous numerator is written as

    tan^n( pi/4 + phi1 ) / 2

Should not these numerators be identical? They differ only in the placement of the ending parenthesis.

Gerald Evenden wrote:
> Please, please, please. Anyone finding errors in the previously reference
> manual please notify this group. Thank-you.
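(In LaTeX terms, the difference amounts to $\tan^n(\pi/4 + \phi_1/2)$ in the general formula versus $\tan^n(\pi/4 + \phi_1)/2$ in the special case: the "/2" has slipped outside the argument of the tangent.)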
{"url":"http://blog.gmane.org/gmane.comp.gis.proj-4.devel/month=20081001","timestamp":"2014-04-19T07:07:48Z","content_type":null,"content_length":"70721","record_id":"<urn:uuid:d880d362-3c6e-4641-b6e3-61b594649982>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Bolzano's Theorem
Bolzano's theorem states that if $f$ is a continuous function on the closed interval $[a,b]$ with $f(a)$ and $f(b)$ of opposite sign, then there is a $c$ in the open interval $(a,b)$ such that $f(c) = 0$.
Snapshot 1: The function is positive on the whole interval, and therefore $f(x) > 0$ for all $x$ in it.
Snapshot 2: The function is negative on the whole interval, so $f(x) < 0$ for all $x$ in it.
Snapshot 3: The function is positive at one end of the interval and negative at the other, therefore there is a $c$ inside the interval such that $f(c) = 0$.
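Bolzano's theorem is also the justification for the bisection method of root finding. Here is a minimal sketch in Python (the example function is my own choice, not taken from the Demonstration):

    def bisect(f, a, b, tol=1e-10):
        """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
        if f(a) * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > tol:
            m = (a + b) / 2          # midpoint of the current interval
            if f(a) * f(m) <= 0:     # the sign change is in the left half
                b = m
            else:                    # otherwise it is in the right half
                a = m
        return (a + b) / 2

    # Example: f(x) = x^2 - 2 changes sign on [1, 2], so a root (sqrt(2)) lies inside.
    print(bisect(lambda x: x * x - 2, 1.0, 2.0))  # ~1.41421356...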
{"url":"http://demonstrations.wolfram.com/BolzanosTheorem/","timestamp":"2014-04-17T16:13:05Z","content_type":null,"content_length":"44320","record_id":"<urn:uuid:a8a91017-afdb-4d65-972c-300929706f62>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Market Equilibrium
February 5th 2008, 08:45 AM #1
Please help me with this problem:

Question: If the supply and demand functions for a commodity are given by 4p - q = 42 and (p + 2)q = 2100, respectively, find the price that will result in market equilibrium.

When solving for the market equilibrium quantity, supply = demand, but since we are looking for the price, I'm guessing it should either be set up as a quadratic equation or possibly by setting the equations equal to each other and solving for q (example: -2q + 2100 = q + 42), then substituting to get p. However, I am uncertain as to exactly how I should find the answer. I tried solving -2q + 2100 = q + 42 and it gives q = 68, which makes p = $728. Was this the correct way to calculate the price?

Thanks for your time and assistance.
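For reference, the demand equation here is not linear in q, so setting -2q + 2100 equal to q + 42 does not apply; substitution settles it instead. From the supply equation, $q = 4p - 42$. Putting this into the demand equation gives
$(p+2)(4p-42) = 2100 \;\Rightarrow\; 4p^2 - 34p - 2184 = 0 \;\Rightarrow\; 2p^2 - 17p - 1092 = 0,$
whose positive root is $p = \frac{17 + \sqrt{289 + 8736}}{4} = \frac{17 + 95}{4} = 28$. Then $q = 4(28) - 42 = 70$, and the check $(28 + 2)\cdot 70 = 2100$ confirms it: the equilibrium price is $28.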
{"url":"http://mathhelpforum.com/business-math/27523-market-equilibrium.html","timestamp":"2014-04-16T20:39:25Z","content_type":null,"content_length":"29160","record_id":"<urn:uuid:846e29a4-297d-47a9-a9bf-ea8def0c724d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding probability functions
November 12th 2011, 10:27 PM
Four fair coins are tossed simultaneously. Find the probability function of the random variable X = number of heads and compute the following probabilities: a) obtaining no heads, b) precisely 1 head, c) at least 1 head, d) not more than 3 heads.

I'm not sure how to set up these probabilities. For example problems, the book has $f(x) = \binom{n}{x}p^x q^{n-x}$ for a binomial distribution and $f(x) = \binom{n}{x}\left(\frac{1}{2}\right)^n$ for a symmetric case. There are other probability functions that they list as well. My biggest problem is finding out the proper way to write these probabilities out, or what probability function can be used (if any). For example, in a), I can logically see that it would be (1/2)*(1/2)*(1/2)*(1/2), but how can I write this as a probability function?

November 12th 2011, 10:48 PM
Re: Finding probability functions
The number of heads has a binomial distribution $\text{B}(4,\ 0.5)$, so the probability of $r$ heads is:
$p(\text{numb\_heads}=r) = b(r;\,4,\ 0.5) = \binom{4}{r}(0.5)^r (0.5)^{4-r} = \frac{4!}{(4-r)!\,r!}(0.5)^4$
So if $r = 0$ we have:
$p(\text{numb\_heads}=0) = \frac{4!}{(4-0)!\,0!}(0.5)^4 = (0.5)^4$

November 13th 2011, 11:19 AM
Re: Finding probability functions
Not sure how to write c). I've tried doing the probability of 1 head, 2 heads, 3 heads, and 4 heads separately and adding/multiplying them together. Is it correct to do these probabilities separately?
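The answer to that last question: the separate probabilities are added, never multiplied, since the outcomes are mutually exclusive. Summing them gives $P(X=1)+P(X=2)+P(X=3)+P(X=4) = \frac{4+6+4+1}{16} = \frac{15}{16}$. Parts (c) and (d) fall out even faster via the complement rule: $P(X \ge 1) = 1 - P(X=0) = 1 - (0.5)^4 = \frac{15}{16}$, and $P(X \le 3) = 1 - P(X=4) = 1 - (0.5)^4 = \frac{15}{16}$.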
{"url":"http://mathhelpforum.com/advanced-statistics/191755-finding-probability-functions-print.html","timestamp":"2014-04-19T08:22:28Z","content_type":null,"content_length":"9344","record_id":"<urn:uuid:4acb5975-dc42-443a-a026-8e0f2108ad8c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Axon, axoff
In most of the equipment I work with, the sensors (RTDs - heat sensors that tell the controller the temperature of something; photocells - proximity sensors that sense when the product, i.e., a box, is passing a certain point on a conveyor; limit switches - simple mechanical open/close switches that would tell the controller a safety door was open or a button was being pushed) each typically communicate with the controller through their own pair of wires that go from the sensor to an input/output board in the controller. You can end up with a lot of wires. Now there are systems that use control networks and "smart sensors" that not only send the information for which they exist ("115 ohms, meaning 350 degrees F", "I sense a box!", or "button pushed") but also a packet of info identifying the sensor. This way, you can run all this information through a single cable instead of hundreds of wire pairs.

My question is… in which way does the human body work? The first (dumb open/close switches) or the second (smart sensors)? His guess is that it's the second option (his initial guess at least). My guess was the first. Wikipedia talks of nerve cells but I can't seem to find a smoking gun. Opinions?

What I often ponder is the ideal algorithm for running one, or more critically, a set of elevators. Poking around I've found that my initial hunch was correct: there's what's referred to as the elevator algorithm, which dictates that an elevator will move in its current direction, stopping only to let people on or off, until it has no more calls in that direction. At that point it can either sit there and wait (which is probably more energy and cost efficient) or try to go to a more useful floor (the lobby when people are expected to arrive, or the top floor when they're expected to leave). A sketch of the basic heuristic follows below.

Some other interesting factoids:
• The elevator algorithm is also used for hard disk access, to optimize the motion of the arm when dealing with read/write requests.
• In areas where there are a lot of Jews, you will often find Sabbath elevators, which operate in accordance with some Orthodox and Conservative rabbinic prohibitions. Wild!
• Some modern elevators (including, apparently, the one in the Adelaide office of my company) require users to select their desired floor from a central console. They are then told which numbered elevator to get on. Inside the elevator, there are no buttons to push. This is apparently much more efficient but has some human-factors drawbacks:
  □ The console doesn't recognize when a group of people is too large to fit in a single elevator.
  □ A single person requesting an elevator multiple times might end up with multiple elevators dispatched to retrieve her/him.
  □ People not knowing the system often get on an elevator and end up being taken for a ride!

What other heuristics could you use if you had to program a set of elevators? Another thing to ponder: how could you determine the finer points of the algorithm used in a given office building, just by calling and riding the elevators? It would seem to require an accomplice at the very least.
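Here is that minimal sketch of the heuristic in Python (the class and its fields are my own invention for illustration, not lifted from any real controller):

    class Elevator:
        def __init__(self, floor=0):
            self.floor = floor
            self.direction = 1    # +1 means going up, -1 means going down
            self.calls = set()    # floors with pending pickups or drop-offs

        def request(self, floor):
            self.calls.add(floor)

        def step(self):
            """Advance one floor, serving all calls in the current direction first."""
            if not self.calls:
                return                            # idle: just wait where we are
            if self.floor in self.calls:
                self.calls.discard(self.floor)    # stop and open the doors
                return
            ahead = [f for f in self.calls if (f - self.floor) * self.direction > 0]
            if not ahead:
                self.direction = -self.direction  # nothing left this way, so reverse
            self.floor += self.direction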
While it’s hard to imagine such a feature being implemented in a modern operating system, I believe the spirit of the idea might still be usable in other contexts. Pretty much every single online gaming website I’ve seen has a problem with people running cheats — computer programs that stand in for the human and respond with uncanny precision or speed. I know FPS games have a large problem with cheaters who can fire with with deadly aim (among other tricks), but my own experience is with more basic games. I used to spend a lot of time on online versions of the word game Boggle, including PlaySite’s Tangleword (which is now IWin’s Boggle) and Yahoo’s Word Racer. Cheaters were a rampant, recurring, and frustrating problem. People would write programs that generated all the words for a given board from a dictionary, and would achieve phenomenal scores as a result, much to the chagrin of everyone trying to win on brainpower alone. Writing such a program is not difficult (I know, because I wrote one; I never used it to win, except against other cheaters), but these people were difficult if not impossible to discourage. Might the solution be to allow anyone a “super-user” account that always wins? I’d love to see the experiment done. I recently worked on a major redesign of a website for a major maker of tourist guidebooks. They had just fully embraced the idea of letting users supply content across many different areas of the site, but in every case we had to seriously consider the possibility of malicious uploads, largely because of one pathological individual who had been carrying out a vendetta against the company for the last decade. On every public forum on their old site he took every opportunity to add comments that were embarrassing, confusing, malicious, or disgusting. He would create new accounts as soon as his old ones were banned, often several times in the same day. (The solution we implemented for the site redesign was that uploads everywhere had to be approved of by a moderator.) I got to wondering, what if instead of banning such an individual, his account got tagged in such a way that he could still view his postings, but no one else could? Presumably he might never know that his account was so tagged, and would continue to waste his energies devising his malicious missives when in fact his words would be reaching nobody. It would be activated when either he is logged in, or a cookie is set on his machine (presuming he doesn’t disallow them; most forums require you to allow). I’m sure the most persistent people would eventually catch on, and resort to logging out and removing the cookie, or checking from another machine to see if their posts were actually getting through, but at the very least it would increase the burden on these lowlifes. This idea might work with the problem of cheaters on some gaming sites too. If their account gets tagged, they can still “log in”, but no one would see their scores but them. I’ve found that such cheaters actually thrive on the outraged comments they generate, but I could never convince other players to just ignore them in the comment areas. When such a “secretly blacklisted” tag is set by a moderator, no one would see the cheater’s comments either, though he would see theirs. To the cheater, it would just seem like they were being ignored. 
It would be trickier for FPS-type games; when the cheater kills someone, the server would have to pretend (to the cheater's client machine) that the person's avatar had died, and send no more updates as to that person's whereabouts. Some situations wouldn't be fakable, but I bet you could fool a lot of the cheaters for much of the time. It would be the ultimate pwn.

Could spam be dealt with similarly? I've always wished there was an option in email programs that allowed you to respond as if your account didn't exist. That is, send the exact response to the sender that the mail server on your host would send if there was no such login on that host. I'm not fully sure if this could work, though; is the check for whether an account exists done during the initial handshake between sender and receiver, or at some later point?

Built to Spill – Randy Described Eternity

The song asks you to imagine a very large number by means of an analogy. The full lyrics go like this:

every thousand years
this metal sphere
ten times the size of Jupiter
floats just a few yards past the earth
you climb on your roof
and take a swipe at it
with a single feather
hit it once every thousand years
`til you've worn it down
to the size of a pea
yeah I'd say that's a long time
but it's only half a blink
in the place you're gonna be
where you gonna be
where will you spend eternity
I'm gonna be perfect from now on
I'm gonna be perfect starting now
stop making that sound
stop making that sound
I will say I forgot
but it was only yesterday
and it's all you had to say

Despite the technical difficulties of having a metal sphere that large pass that close to the Earth, it's a cool metaphor. So how long of a time are we talking about here? Let's come up with an estimate.

First consider how much a swipe of a feather would remove from the sphere. Let's take a guess and say that 100 swipes would remove a cubic millimeter. (That's being conservative, I think; could you really completely dissolve a grain of sand by brushing it a hundred times with a feather?)

Now consider the size of the sphere, "ten times the size of Jupiter". Let's assume they mean ten times the volume, which would be 10 × 1.43128 × 10^15 km³, or 1.43128 × 10^16 km³, which in cubic millimeters is 1.43128 × 10^34 mm³. The "size of a pea" amount left over is incidental.

So if we just multiply that figure by 100 (feather swipes) times 1000 years, we get 1.43128 × 10^39 years. Let's call this Randy's number. Is it older than the current age of the universe? Yes, by a long shot. The universe is only about 1.37 × 10^10 years old; Randy's number is 100 000 000 000 000 000 000 000 000 000 times larger. Yeah, I'd say that's a long time.

Clifford Pickover's Mazes For The Mind mentions a few other large (estimated) numbers:
• The talking number — the total number of words spoken by humans since the dawn of history: 10^16
• The Coney Island number — the total number of grains of sand on the beach at Coney Island: 10^20
• The ice age number — the total number of ice crystals necessary to form the ice age: 10^30

Douglas Hofstadter in Metamagical Themas mentions a few others:
• The Hemoglobin number — the number of hemoglobin molecules in the human body: 6 x 10^21
• Rubik's constant — the number of possible configurations of a Rubik's cube: 4.3 x 10^19

Huge numbers, to be sure. Larger still is the infamous googol, which is 10^100. Now consider the googolplex, which is 10^googol. It is said that such a number could never be written down, as it would require more space than is available in the known universe.

But even these numbers are peanuts compared to some other numbers which have been envisioned. Consider tetration, Knuth's up-arrow notation, Steinhaus-Moser notation, or Conway's chained arrow notation. Finally, just to fry your brain a little more, consider a number proposed in the excellent web comic XKCD, in his strip "What XKCD Means": the Ackermann function with Graham's number as its arguments is a number orders of magnitude larger than we can imagine. Actually, you can hardly say it's "orders of magnitude" larger since that only implies exponentially larger; this number goes way, way, way beyond that! But it's still finite. And if you divide it by infinity, you still get zero. That kills me.
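The Randy's-number arithmetic is easy to double-check; here's a quick Python sketch (the 100-swipes-per-cubic-millimeter figure is, again, just our guess):

    jupiter_km3 = 1.43128e15                 # volume of Jupiter in cubic kilometers
    sphere_mm3 = 10 * jupiter_km3 * 1e18     # ten Jupiters; 1 km^3 = 10^18 mm^3
    swipes = sphere_mm3 * 100                # 100 feather swipes per cubic millimeter
    years = swipes * 1000                    # one swipe every thousand years

    print(f"Randy's number: {years:.5e} years")                # ~1.43128e+39
    print(f"vs. age of universe: {years / 1.37e10:.0e} times")  # ~1e+29 times larger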
In this post I want to show you a quick trick that will let you render 3D objects painlessly. Usually this is done using transformation matrices. This sort of matrix manipulation was actually discovered back in the 18th century, but like many great mathematical discoveries, had no real practical application. Until the modern era, that is, when someone discovered that this trick was perfect for rendering 3D graphics. It lets you rotate, scale, shear, etc. any image, and the really nice thing is that once you decide all the transformations you want, you can reduce them to a single matrix and so compute the positions of everything you want to draw rather quickly. But unless you live and breathe the stuff, it's too complicated to remember how to set it up off the top of your head, and it's a lot of code even if you know how. So here's a tip that will get you rendering 3D objects in no time.

First, draw yourself a set of axes, like so:
It’s handy for quick visualizations, and you’ll probably see it again here before long. recursion in general, and the Towers of Hanoi in particular. This will be review for anyone who’s studied computer science, but that’s okay since I won’t have many readers at this point anyway. The Towers of Hanoi is a puzzle of simple construction but fascinating complexity. It consists of three pegs. To start with, on one of the pegs is a number of disks, stacked from largest to smallest. The object is to move the stack of disks to one of the other pegs, with the restrictions that you can only move one disk at a time, and a disk may never be placed on a disk that’s smaller than it. There’s a beautiful computer algorithm to solve it, and that I want to talk about. But the Towers of Hanoi was a toy long before there were computers. So first, have a play at solving it by yourself. Increase the number of disks if you find it too easy to start with. Most people, when they first start playing with it, end up moving disks pretty much randomly at the start. After a point you may find yourself making small goals, like “Okay, I need to get these smallest three disks over to this peg, so I can move the next biggest disk…” Eventually you realize that, if you’re starting with, say, six disks, you’re going to have to move five of them out of the way (onto the extra peg), then move the sixth and largest disk to the target peg, then move the first five on top of it. So let’s look at the actual program that solves it. The number of disks has been made a parameter so we can eventually solve it for any number of disks. I’ll add comments to make it exceedingly clear what’s going on. private static void hanoi( int numberOfPegs, String fromPeg, String targetPeg, String otherPeg ) // First, move all but the largest disk to the extra peg. if (numberOfPegs > 1) hanoi( /* move */ numberOfPegs-1, /* pegs */ /* from the */ fromPeg, /* to the */ otherPeg, /* via the */ targetPeg); // Next, move the largest disk to the target page. We'll just print // out the move we're making. System.out.println("Move a disk from " + fromPeg + " to " + targetPeg); // Lastly, move the disks that we moved out of the way on top of // the largest disk. if (numberOfPegs > 1) hanoi( /* move */ numberOfPegs-1, /* pegs */ /* from the */ otherPeg, /* to the */ targetPeg, /* via the */ fromPeg); We’ve had to add the extra peg as the last parameter, just to keep track of it. But notice how similar it is to the original plan, of moving all but the largest disk out of the way, then moving the largest disk, then moving all the remaining disks on top of it. But as an algorithm it looks like it only sketches the broad plan. It just doesn’t look complex enough to solve that first step — moving all but the largest disk out of the way — not to mention he third step, which is just as complex. But miraculously, it works! Try running the code yourself. How could it possibly do all those intermediate steps? It turns out that the problem of how to move n-1 disks from the “from” peg to the “other” peg is just a smaller, simpler version of the problem of moving n disks from the “from” peg to the “target” peg. It’s just that we’re now using the “extra” peg as our target peg, and the target peg as the extra peg. So this method will just be solving the problem for smaller and smaller versions of the problem, until eventually it has to solve it for only a single disk. 
And if you trace the code for what happens when there’s a single disk (when numberOfPegs==1), you’ll see that all it will do is the print statement. To me, the Towers of Hanoi problem is the most beautiful algorithm there is. And it fits the requirements for a recursive algorithm perfectly; it can be broken up into smaller problems, until it reaches an exceedingly simple case (just moving a single disk). I wonder how long it took after the first computers were made until the Towers of Hanoi was successfully programmed. Very probably it was done in machine code, using a stack of some sort. But what was the first language that would have allowed it to be done with a nice readable recursive function? I would love to have been a fly on the wall. There’s lots more to the Towers of Hanoi, which I will cover in a later post. Hello, World! And welcome to Computronium. In this forum I’ll be talking about things that thrill my brain. Mostly, as the name implies, it’ll be about computing, especially the recreational variety, but I’ll be taking brief interludes into math, physics, astronomy, or whatever else strikes my fancy. I’ll be focusing on the bits that are just fun, or interesting, or beautiful to me. Hopefully they will be to you too. I have met a number of people in my life who enjoy computer programming for its own sake. But there’s also people who think that anything created on a computer is inherently soulless, and that any art created on a computer is cold and robotic. I couldn’t disagree more. Art created on a computer is as human as the person who makes it. But the logic underneath too is beautiful in and of itself. Though I’ll be presenting things that give me “intellectual” thrills, my goal will always be to simplify, and to explain clearly, without dressing things up in fancy language. I’ll be writing about what I know but also about what I don’t — things I’m curious about. So I’m counting on input from you all. Not everything I’ll be writing about will be deeply profound — some of the topics I’m pondering will require just a few sentences. But even in dusting off these old ideas that I’ve pondered before, I’ve come up with new ways of looking at them, as well as some completely new ideas. I’m sure, with the participation in this forum that I’m hoping for, there’ll be lots and lots of other things that come up. So, the main topic: computing. Computers themselves are ghastly, I believe, BUT, they’re the only things that let you compute. We’ll not be arguing here the merits of Java versus C++, or talk much about the flavor du jour of software processes. I have preferences in these matters, of course, but the underlying algorithms are what really interest me. Some languages do indeed encourage you to look at problems in interesting ways, but for much of what I’ll be talking about, the choice of programming language is pretty much moot. And as far as process — while I’m totally convinced of the necessity of good software practices, I don’t find a lot of joy in talking about it.
{"url":"http://computronium.blogspot.com/","timestamp":"2014-04-16T10:09:43Z","content_type":null,"content_length":"56825","record_id":"<urn:uuid:dd2ad87a-a0f3-42d9-b2fd-abf3f81eaec9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
NIPS 2010 Retrospective Happy New Year and I know I've been silent but I've been busy. But no teaching this semester (YAY!) so maybe you'll see more posts. At any rate, I'm really late to the table, but here are my comments about this past year's NIPS. Before we get to that, I hope that everyone knows that this coming NIPS will be in Granada, and then for (at least) the next five years will be in Tahoe. Now that I'm not in ski-land, it's nice to have a yearly ski vacation ... erm I mean scientific conference. But since this was the last year of NIPS in Vancouver, I thought I'd share a conversation that occurred this year at NIPS, with participants anonymized. (I hope everyone knows to take this in good humor: I'm perfectly happy to poke fun at people from the States, too...). The context is that one person in a large group, which was going to find lunch, had a cell phone with a data plan that worked in Canada: A: Wow, that map is really taking a long time to load. B: I know. It's probably some socialized Canadian WiFi service. C: No, it's probably just slow because every third bit has to be a Canadian bit? D: No no, it's because every bit has to be sent in both English and French! Okay it's not that funny, but it was funny at the time. (And really "B" is as much a joke about the US as it was about Canada :P.) But I'm sure you are here to hear about papers, not stupid Canada jokes. So here's my take. tutorial on optimization by Stephen Wright was awesome. I hope this shows up on videolectures soon. (Update: it has!) I will make it required reading / watching for students. There's just too much great stuff in it to go in to, but how about this: momentum is the same as CG! Really?!?! There's tons of stuff that I want to look more deeply into, such as robust mirror descent, some work by Candes about SVD when we don't care about near-zero SVs, regularized stochastic gradient (Xiao) and sparse eigenvector work. Lots of awesome stuff. My favorite part of NIPS. Some papers I saw that I really liked: A Theory of Multiclass Boosting (Indraneel Mukherjee, Robert Schapire): Formalizes boosting in a multiclass setting. The crux is a clever generalization of the "weak learning" notion from binary. The idea is that a weak binary classifier is one that has a small over random guessing (which, in the binary case, gives 50/50). Generalize this and it works. Structured sparsity-inducing norms through submodular functions (Francis Bach): I need to read this. This was one of those talks where I understood the first half and then got lost. But the idea is that you can go back-and-forth between submodular functions and sparsity-inducing norms. Construction of Dependent Dirichlet Processes based on Poisson Processes (Dahua Lin, Eric Grimson, John Fisher): The title says it all! It's an alternative construction to the Polya urn scheme and also to the stick-breaking scheme. A Reduction from Apprenticeship Learning to Classification (Umar Syed, Robert Schapire): Right up my alley, some surprising results about apprenticeship learning (aka Hal's version of structured prediction) and classification. Similar to a recent paper by Stephane Ross and Drew Bagnell on Efficient Reductions for Imitation Learning Variational Inference over Combinatorial Spaces (Alexandre Bouchard-Cote, Michael Jordan): When you have complex combinatorial spaces (think traveling salesman), how can you construct generic variational inference algorithms? 
Implicit Differentiation by Perturbation (Justin Domke): This is a great example of a paper that I never would have read, looked at, seen, visited the poster of, known about etc., were it not for serendipity at conferences (basically Justin was the only person at his poster when I showed up early for the session, so I got to see this poster). The idea is if you have a graphical model, and some loss function L(.) which is defined over the marginals mu(theta), where theta are the parameters of the model, and you want to optimize L(mu(theta)) as a function of theta. Without making any serious assumptions about the form of L, you can actually do gradient descent, where each gradient computation costs two runs of belief propagation. I think this is amazing. Probabilistic Deterministic Infinite Automata (David Pfau, Nicholas Bartlett, Frank Wood): Another one where the title says it all. DP-style construction of infinite automata. Graph-Valued Regression (Han Liu, Xi Chen, John Lafferty, Larry Wasserman): The idea here is to define a regression function over a graph. It should be regularized in a sensible way. Very LASSO-esque model, as you might expect given the author list :). Other papers I saw that I liked but not enough to write mini summaries of: Word Features for Latent Dirichlet Allocation (James Petterson, Alexander Smola, Tiberio Caetano, Wray Buntine, Shravan Narayanamurthy) Tree-Structured Stick Breaking for Hierarchical Data (Ryan Adams, Zoubin Ghahramani, Michael Jordan) Categories and Functional Units: An Infinite Hierarchical Model for Brain Activations (Danial Lashkari, Ramesh Sridharan, Polina Golland) Trading off Mistakes and Don't-Know Predictions (Amin Sayedi, Morteza Zadimoghaddam, Avrim Blum) Joint Analysis of Time-Evolving Binary Matrices and Associated Documents (Eric Wang, Dehong Liu, Jorge Silva, David Dunson, Lawrence Carin) Learning Efficient Markov Networks (Vibhav Gogate, William Webb, Pedro Domingos) Tree-Structured Stick Breaking for Hierarchical Data (Ryan Adams, Zoubin Ghahramani, Michael Jordan) Construction of Dependent Dirichlet Processes based on Poisson Processes (Dahua Lin, Eric Grimson, John Fisher) Supervised Clustering (Pranjal Awasthi, Reza Bosagh Zadeh) Two students who work with me (though one isn't actually mine :P), who went to NIPS also shared their favorite papers. The first is a list from Avishek Saha: A Theory of Multiclass Boosting (Indraneel Mukherjee, Robert Schapire) Repeated Games against Budgeted Adversaries (Jacob Abernethy, Manfred Warmuth) Non-Stochastic Bandit Slate Problems (Satyen Kale, Lev Reyzin, Robert Schapire) Trading off Mistakes and Don't-Know Predictions (Amin Sayedi, Morteza Zadimoghaddam, Avrim Blum) Learning Bounds for Importance Weighting (Corinna Cortes, Yishay Mansour, Mehryar Mohri) Supervised Clustering (Pranjal Awasthi, Reza Bosagh Zadeh) The second list is from Piyush Rai, who apparently aimed for recall (though not with a lack of precision) :P: Online Learning: Random Averages, Combinatorial Parameters, and Learnability (Alexander Rakhlin, Karthik Sridharan, Ambuj Tewari): defines several complexity measures for online learning akin to what we have for the batch setting (e.g., radamacher averages, covering numbers Online Learning in The Manifold of Low-Rank Matrices (Uri Shalit, Daphna Weinshall, Gal Chechik): nice general framework applicable in a number of online learning settings. could also be used for online multitask learning. 
Fast global convergence rates of gradient methods for high-dimensional statistical recovery (Alekh Agarwal, Sahand Negahban, Martin Wainwright): shows that the properties of sparse estimation problems that lead to statistical efficiency also lead to computational efficiency, which explains the faster practical convergence of gradient methods than what the theory guarantees.

Copula Processes (Andrew Wilson, Zoubin Ghahramani): how do you determine the relationship between random variables which could have different marginal distributions (say one has a gamma and the other has a gaussian distribution)? copula processes give an answer to this.

Graph-Valued Regression (Han Liu, Xi Chen, John Lafferty, Larry Wasserman): usually undirected graph structure learning involves a set of random variables y drawn from a distribution p(y). but what if y depends on another variable x? this paper is about learning the graph structure of the distribution p(y|x=x).

Structured sparsity-inducing norms through submodular functions (Francis Bach): standard sparse recovery uses the l1 norm as a convex proxy for the l0 norm (which constrains the number of nonzero coefficients to be small). this paper proposes several more general set functions and their corresponding convex proxies, and links them to known norms.

Trading off Mistakes and Don't-Know Predictions (Amin Sayedi, Morteza Zadimoghaddam, Avrim Blum): an interesting paper -- what if in an online learning setting you could abstain from making a prediction on some of the training examples and just say "i don't know"? on others, you may or may not make the correct prediction. lies somewhere in the middle of always predicting right or wrong (i.e., standard mistake-driven online learning) versus the recent work on only predicting correctly or otherwise saying "i don't know".

Variational Inference over Combinatorial Spaces (Alexandre Bouchard-Cote, Michael Jordan): cool paper. applicable to lots of settings.

A Theory of Multiclass Boosting (Indraneel Mukherjee, Robert Schapire): we know that boosting in the binary case requires "slightly better than random" weak learners. this paper characterizes conditions on the weak learners for the multi-class case, and also gives a boosting algorithm.

Multitask Learning without Label Correspondences (Novi Quadrianto, Alexander Smola, Tiberio Caetano, S.V.N. Vishwanathan, James Petterson): usually mtl assumes that the output space is the same for all the tasks but in many cases this may not be true. for instance, we may have two related prediction problems on two datasets but the output spaces for both may be different and may have some complex (e.g., hierarchical, and potentially time varying) output spaces. the paper uses a mutual information criterion to learn the correspondence between the output spaces.

Learning Multiple Tasks with a Sparse Matrix-Normal Penalty (Yi Zhang, Jeff Schneider): presents a general multitask learning framework of which many recently proposed mtl models turn out to be special cases. models both feature covariance and task covariance.

Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions (Achintya Kundu, Vikram Tankasali, Chiranjib Bhattacharyya, Aharon Ben-Tal): the title says it all. :) multiple kernel learning is usually applied in a classification setting, but due to the applicability of the proposed method for a wide variety of loss functions, one can possibly also use it for unsupervised learning problems as well (e.g., spectral clustering, kernel pca, etc.).

Getting lost in space: Large sample analysis of the resistance distance (Ulrike von Luxburg, Agnes Radl, Matthias Hein): large sample analysis of the commute distance: shows a rather surprising result that the commute distance between two vertices is meaningless if the graph is "large" and nodes represent high dimensional variables. the paper proposes a correction and calls it the "amplified commute distance".

A Bayesian Approach to Concept Drift (Stephen Bach, Mark Maloof): gives a bayesian approach for segmenting a sequence of observations such that each "block" of observations has the same underlying concept.

MAP Estimation for Graphical Models by Likelihood Maximization (Akshat Kumar, Shlomo Zilberstein): they show that you can think of an mrf as a mixture of bayes nets and then the map problem on the mrf corresponds to solving a form of the maximum likelihood problem on the bayes net. em can be used to solve this in a pretty fast manner. they say that you can use this method with the max-product lp algorithms to yield even better solutions, with a quicker convergence.

Energy Disaggregation via Discriminative Sparse Coding (J. Zico Kolter, Siddharth Batra, Andrew Ng): about how sparse coding could be used to save energy. :)

Semi-Supervised Learning with Adversarially Missing Label Information (Umar Syed, Ben Taskar): standard ssl assumes that labels for the unlabeled data are missing at random but in many practical settings this isn't actually true. this paper gives an algorithm to deal with the case when the labels could be adversarially missing.

Multi-View Active Learning in the Non-Realizable Case (Wei Wang, Zhi-Hua Zhou): shows that (under certain assumptions) exponential improvements in the sample complexity of active learning are still possible if you have a multiview learning setting.

Self-Paced Learning for Latent Variable Models (M. Pawan Kumar, Benjamin Packer, Daphne Koller): an interesting paper, somewhat similar in spirit to curriculum learning. basically, the paper suggests that in learning a latent variable model, it helps if you provide the algorithm easy examples first.

More data means less inference: A pseudo-max approach to structured learning (David Sontag, Ofer Meshi, Tommi Jaakkola, Amir Globerson): a pseudo-max approach to structured learning: this is somewhat along the lines of the paper on svm's inverse dependence on training size from icml a couple of years back. :)

Hashing Hyperplane Queries to Near Points with Applications to Large-Scale Active Learning (Prateek Jain, Sudheendra Vijayanarasimhan, Kristen Grauman): selecting the most uncertain example in pool-based active learning can be expensive if the number of candidate examples is very large. this paper suggests some hashing tricks to expedite the search.

Active Instance Sampling via Matrix Partition (Yuhong Guo): frames batch mode active learning as a matrix partitioning problem and proposes a local optimization technique for the matrix partitioning problem.
A Discriminative Latent Model of Image Region and Object Tag Correspondence (Yang Wang, Greg Mori): it's kind of doing correspondence lda on image+captions but they additionally infer the correspondences between tags and objects in the images, and show that this gives improvements over corr-lda. Factorized Latent Spaces with Structured Sparsity (Yangqing Jia, Mathieu Salzmann, Trevor Darrell): a multiview learning algorithm that uses sparse coding to learn shared as well as private features of different views of the data. Word Features for Latent Dirichlet Allocation (James Petterson, Alexander Smola, Tiberio Caetano, Wray Buntine, Shravan Narayanamurthy): extends lda for the case when you have access to features for each word in the vocabulary 4 comments: Hi Hal, thanks for the blurb about our apprenticeship learning paper. The paper by Drew Bagnell that is closely related is: Ross, Stephane & Bagnell, Drew "Efficient Reductions for Imitation Learning", AISTATS 2010 Thanks for the notes! As usual, this is very helpful! I have found the videolecture, the link is: http://videolectures.net/nips2010_wright_oaml/ @honglangwang: thanks, i've updated the post!
{"url":"http://nlpers.blogspot.com/2011/01/nips-2010-retrospective.html","timestamp":"2014-04-20T18:23:13Z","content_type":null,"content_length":"112101","record_id":"<urn:uuid:7c7db774-547a-44a2-9edf-85aec36fd351>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate the volume of a sphere

In this calculation you can calculate the volume of a sphere from a number of given input values, such as the radius, diameter, or circumference. You also have a number of different input units and can choose the output unit to your liking.

Last updated: 2011/09/29
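Behind the scenes the conversions are simple: V = (4/3)·π·r³, with r = d/2 when given a diameter and r = C/(2π) when given a circumference. A sketch of how such a calculator might work in Python (this is illustrative, not the site's actual code):

    import math

    def sphere_volume(radius=None, diameter=None, circumference=None):
        """Volume of a sphere from whichever measurement is given."""
        if diameter is not None:
            radius = diameter / 2
        elif circumference is not None:
            radius = circumference / (2 * math.pi)
        return 4 / 3 * math.pi * radius ** 3

    print(sphere_volume(radius=1.0))                  # ~4.18879
    print(sphere_volume(diameter=2.0))                # same sphere
    print(sphere_volume(circumference=2 * math.pi))   # same sphere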
{"url":"http://www.countcalculate.com/geometry/volume-sphere","timestamp":"2014-04-17T14:08:38Z","content_type":null,"content_length":"33883","record_id":"<urn:uuid:c858340a-71a0-454a-8fb0-3f39289c3eb4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"}