| content (string, lengths 86 to 994k) | meta (string, lengths 288 to 619) |
|---|---|
Electron. J. Diff. Eqns., Vol. 1999(1999), No. 16, pp. 1-13.
Persistence of invariant manifolds for perturbations of semiflows with symmetry
Chongchun Zeng Abstract:
Consider a semiflow in a Banach space, which is invariant under the action of a compact Lie group. Any equilibrium generates a manifold of equilibria under the action of the group. We prove that, if
the manifold of equilibria is normally hyperbolic, an invariant manifold persists in the neighborhood under any small perturbation which may break the symmetry. The Liapunov-Perron approach of
integral equations is used.
Submitted in April 1995, revised April 6, 1999, Published May 18, 1999.
Math Subject Classification: 58F15, 58F35, 58G30, 58G35, 34C35.
Key Words: Semiflow, invariant manifold, symmetry.
Chongchun Zeng
Department of Mathematics
Brigham Young University
Provo, UT 84602, USA
e-mail: zengc@math.byu.edu
|
{"url":"http://ejde.math.txstate.edu/Volumes/1999/16/abstr.html","timestamp":"2014-04-17T07:09:04Z","content_type":null,"content_length":"1561","record_id":"<urn:uuid:56175bc2-4eaf-4c32-b0e5-7df940f20039>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Approximation of functional spatial regression models using bivariate splines
Seminar Room 1, Newton Institute
We consider the functional linear regression model where the explanatory variable is a random surface and the response is a real random variable, with bounded or normal noise. Bivariate splines over
triangulations represent the random surfaces. We use this representation to construct least squares estimators of the regression function with or without a penalization term. Under the assumptions
that the regressors in the sample are bounded and span a large enough space of functions, bivariate splines approximation properties yield the consistency of the estimators. Simulations demonstrate
the quality of the asymptotic properties on a realistic domain. We also carry out an application to ozone forecasting over the US that illustrates the predictive skills of the method.
This is joint work with Ming-Jun Lai.
|
{"url":"http://www.newton.ac.uk/programmes/SCH/seminars/2008060511001.html","timestamp":"2014-04-19T02:01:09Z","content_type":null,"content_length":"6935","record_id":"<urn:uuid:266d2374-9a14-457d-8fdc-15df3e265f68>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Work and Power
Mechanical systems, an engine for example, are not limited by the amount of work they can do, but rather by the rate at which they can perform the work. This quantity, the rate at which work is done,
is defined as power.
Equations for Power
From this very simple definition, we can come up with a simple equation for the average power of a system. If the system does an amount of work, W, over a period of time, T, then the average power is simply given by:

P_avg = W/T

It is important to remember that this equation gives the average power over a given time, not the instantaneous power. Because the work done by a force increases with displacement, even if the force is constant, the power need not be constant in time. To find the instantaneous power, we must use calculus:

P = dW/dt

In the sense of this second equation for power, power is the rate of change of the work done by the system.

From this equation, we can derive another equation for instantaneous power that does not rely on calculus. Given a force that acts at an angle θ to the displacement of the particle,

P = dW/dt = F cos θ (dx/dt)

Since v = dx/dt, this becomes

P = F v cos θ

Though the calculus is not necessarily important to remember, the final equation is quite valuable. We now have two simple, numerical equations for both the average and instantaneous power of a system. Note, in analyzing this equation, we can see that if the force is parallel to the velocity of the particle (so that θ = 0 and cos θ = 1), then the power delivered is simply

P = Fv
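To make the two formulas concrete, here is a small C++ sketch (the numeric values are invented for illustration and are not from the original text):

#include <cmath>
#include <iostream>

int main() {
    // Average power: P_avg = W / T
    double W = 500.0;  // work done, in joules (example value)
    double T = 10.0;   // elapsed time, in seconds (example value)
    std::cout << "Average power: " << W / T << " W\n";  // 50 W

    // Instantaneous power: P = F * v * cos(theta)
    const double pi = 3.141592653589793;
    double F = 50.0;           // force magnitude, in newtons (example value)
    double v = 2.0;            // speed, in m/s (example value)
    double theta = pi / 3.0;   // angle between force and displacement
    std::cout << "Instantaneous power: " << F * v * std::cos(theta) << " W\n";  // also 50 W
    return 0;
}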
Units of Power
The unit of power is the joule per second, which is more commonly called a watt. Another unit commonly used to measure power, especially in everyday situations, is the horsepower, which is equivalent to about 746 watts. The rate at which our automobiles do work is measured in horsepower.
Power, unlike work or energy, is not really a "building block" for further studies in physics. We do not derive other concepts from our understanding of power. It is far more applicable for practical
use with machinery that delivers force. That said, power remains an important and useful concept in classical mechanics, and often comes up in physics courses.
|
{"url":"http://www.sparknotes.com/physics/workenergypower/workpower/section3.rhtml","timestamp":"2014-04-17T04:28:14Z","content_type":null,"content_length":"56185","record_id":"<urn:uuid:14a58496-cc0b-42c7-8c36-bf0e066116da>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bell Gardens Calculus Tutor
Find a Bell Gardens Calculus Tutor
I'm a former tenured Community College Professor with an M.A. degree in Math from UCLA. I have also taught university level mathematics at UCLA, the University of Maryland, and the U.S. Air Force
9 Subjects: including calculus, geometry, algebra 1, algebra 2
...A few years later, I scored a 5 on the AP Calculus BC exam in 10th grade. I also took AP Chemistry, AP Physics B, and AP Physics C: Mechanics during my senior year, scoring a 5 on each of them.
In 9th grade, I took the old SAT and scored 1580, with an 800 on the Verbal section and a 780 on the Math section.
6 Subjects: including calculus, physics, SAT math, trigonometry
...I have completed student teaching and hope to teach in my own class soon. I am aware of Common Core standards and can assist students in a variety of ways. I have two kids of my own, so timely
cancellation is always appreciated.
7 Subjects: including calculus, chemistry, geometry, algebra 1
...I am currently finishing up my freshman year of college. I am pursuing a major in chemistry, as well as a minor in mathematics and possibly a minor in physics as well. Upon completion of this
school year, I will have taken and passed: Calculus 3, General Chemistry 1 and 2, Physics 1, Differential Equations, and Linear Algebra.
24 Subjects: including calculus, chemistry, reading, anatomy
...I earned a Master's in Applied Physics from Caltech. I work in a plasma physics laboratory. I teach laboratory physics.
11 Subjects: including calculus, physics, algebra 2, SAT math
|
{"url":"http://www.purplemath.com/Bell_Gardens_calculus_tutors.php","timestamp":"2014-04-19T17:44:56Z","content_type":null,"content_length":"23771","record_id":"<urn:uuid:12fcab5a-06a2-4f58-9ef3-e297c38e2627>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Piedmont, CA Geometry Tutor
Find a Piedmont, CA Geometry Tutor
...My teaching method is emphasized on conceptual learning rather than rote memorization. I would like to teach students how to think and how to be engaged in a fun mathematics world. My goal is
to build a firm foundation for students' future learning.
22 Subjects: including geometry, calculus, statistics, biology
...I manage the databases, including creating new Access files, editing the existing ones, altering formats and views, generating reports, printing mailing lists and labels, etc. I have been using
Microsoft Windows since getting my first PC in 1990 with Windows 3.0. Since then, I have owned, used,...
43 Subjects: including geometry, Spanish, chemistry, precalculus
...I'm excited to jump in, and I look forward to hearing from you. InDesign was covered in my landscape architecture classes in college, and we used it to compose large-format boards for
presentations, including text, photos, and graphics. My current portfolio was composed in InDesign.
34 Subjects: including geometry, Spanish, reading, English
...I've also taught people how to do good research so they could find solutions for themselves. I've mentored young people in the Salt Lake Peer Court system as they transformed various life
difficulties into academic accomplishments. I have a passion about learning and now I want to offer it to y...
32 Subjects: including geometry, reading, chemistry, English
...My doctoral degree is in psychology. I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math
and take great joy in communicating this to reluctant and struggling students, as well as to able student...
20 Subjects: including geometry, calculus, statistics, biology
|
{"url":"http://www.purplemath.com/piedmont_ca_geometry_tutors.php","timestamp":"2014-04-16T10:57:24Z","content_type":null,"content_length":"24039","record_id":"<urn:uuid:5eeadab7-16c3-4c25-83ef-48ee910b394d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"But a
powerful new type of computer
that is about to be commercially deployed by a major American military contractor is taking computing into the strange, subatomic realm of quantum mechanics. In that infinitesimal neighborhood,
common sense logic no longer seems to apply. A one can be a one, or it can be a one and a zero and everything in between - all at the same time. [...] Now, Lockheed Martin - which bought an early
version of such a computer from the Canadian company D-Wave Systems two years ago - is confident enough in the technology to upgrade it to commercial scale, becoming the first company to use quantum
computing as part of its business." I always get a bit skeptical whenever I hear the words 'quantum computing', but according to NewScientist, this is
pretty legit
I actually think you suffer from a misunderstanding of language.
I can define a set as all possible numbers.
And then you cannot tell me there is an additional number outside the set.
Perhaps (I'd actually dispute that by asking you precisely what type of numbers are in your set, but I digress).
Regardless, let me be kind and assume that you've got a set of numbers that is uncountable; perhaps you're thinking of, say, the set of all real numbers. You're now quite pleased with yourself
because you've got a set with an infinite number of elements.
However, I come along and claim that I can define a set with even more elements in it than yours. I can even be kind to you and say that I'll restrict myself to working with a set containing real
numbers. The question therefore is: do you believe that I can construct a set of real numbers that contains more elements than your - already infinitely large - set of real numbers?
Because I can do so quite simply by taking my set to consist of all possible subsets of the real numbers. Both of our sets have infinitely many elements, but mine has more than yours.
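The standard justification for this last claim is Cantor's theorem, sketched here for reference (this addition is not part of the original comment): for any set $S$ there is no surjection $f : S \to \mathcal{P}(S)$. Given any $f$, consider the diagonal set

$$D = \{\, x \in S : x \notin f(x) \,\}.$$

If $f(d) = D$ for some $d \in S$, then $d \in D \iff d \notin f(d) = D$, a contradiction. Hence $|S| < |\mathcal{P}(S)|$; in particular, the set of all subsets of the real numbers is strictly larger than the set of real numbers itself.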
|
{"url":"http://www.osnews.com/permalink?556406","timestamp":"2014-04-18T09:08:08Z","content_type":null,"content_length":"18325","record_id":"<urn:uuid:59717231-a0e0-425a-b646-22fc2c23395c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Q exponent-letter - The GNU Fortran Compiler
6.1.8 Q exponent-letter
GNU Fortran accepts real literal constants with an exponent-letter of Q, for example, 1.23Q45. The constant is interpreted as a REAL(16) entity on targets that support this type. If the target does
not support REAL(16) but has a REAL(10) type, then the real-literal-constant will be interpreted as a REAL(10) entity. In the absence of REAL(16) and REAL(10), an error will occur.
|
{"url":"http://gcc.gnu.org/onlinedocs/gcc-4.7.2/gfortran/_003ccode_003eQ_003c_002fcode_003e-exponent_002dletter.html","timestamp":"2014-04-21T06:03:03Z","content_type":null,"content_length":"3682","record_id":"<urn:uuid:86b93085-559d-43cb-a694-de06d7823572>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newest 'arithmetic terminology' Questions
Any class $X$ of order $j$ in HOA is in bijection with the order $j+1$ class built up from singletons $\{x\}$ of natural numbers $x$ just the way that $X$ is built up from the numbers $x$. And of ...
That is, the sum of squares of some numbers divided by the sum of the numbers. The term "anti-harmonic mean" has been coined for this quantity. I'm hoping there is a better name.
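In symbols (added for clarity; this is the standard definition), the quantity asked about is

$$\frac{x_1^2 + x_2^2 + \cdots + x_n^2}{x_1 + x_2 + \cdots + x_n},$$

which is most commonly called the contraharmonic mean.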
|
{"url":"http://mathoverflow.net/questions/tagged/arithmetic+terminology","timestamp":"2014-04-18T00:27:13Z","content_type":null,"content_length":"33269","record_id":"<urn:uuid:35f37655-e7b9-4d87-8e8a-ef8b34367fcf>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PDEs as a tool in other domains in mathematics
up vote 21 down vote favorite
According to the large number of papers cited in the MathSciNet database, Partial Differential Equations (PDEs) is an important topic of its own. Needless to say, it is an extremely useful tool for
natural sciences, such as Physics, Chemistry, Biology, Continuum Mechanics, and so on.
What I am interested in, here, is examples where PDEs were used to establish a result in another mathematical field. Let me provide a few.
1. Topology. The Atiyah-Singer index theorem.
2. Geometry. Perelman's proof of Poincaré conjecture, following Hamilton's program.
3. Real algebraic geometry. Lax's proof of Weyl-like inequalities for hyperbolic polynomials.
Only one example per answer. Please avoid examples in the reverse sense, where another mathematical field tells something about PDEs (examples: the Feynman-Kac formula from probability, multi-solitons from Riemann surfaces). This could be the matter of another MO question.
big-list ap.analysis-of-pdes
6 "According to MathSciNet, PDEs is an important topic of its own." I wish I can use that for the Intellectual Merit section of my next NSF proposal... – Willie Wong Oct 11 '10 at 11:41
3 @Dennis: big-list questions should always be community wiki (edit the question, tick the small check box to the bottom right of the text-field). – Willie Wong Oct 11 '10 at 11:42
4 +1. I wonder if any PDE has been used in Number Theory, even in the Analytic branch. – Unknown Oct 11 '10 at 11:42
2 @Willie. OK. I'll correct it if I can. But please, write Denis with one N (I'm French, nobody's perfect). – Denis Serre Oct 11 '10 at 11:57
12 "I'm French, nobody's perfect" are four of the funniest words I've read in some time. – Pete L. Clark Oct 11 '10 at 15:29
18 Answers
up vote 11 down vote
Maybe ... Cauchy-Riemann equations ... they may have been used a time or two over the years ...

I vaguely remember that these names were mentioned once or twice in a complex analysis course.. – timur Dec 21 '13 at 3:35
up vote 10 down vote
The diffeomorphism group of a closed surface of negative Euler characteristic has contractible components. This is a theorem by Earle and Eells (Journal of Differential Geometry 3, 1969). The crucial ingredient for their proof is the solvability of the Beltrami differential equation. Later, Gramain found a purely topological, rather elementary proof of that result, but - at least for me - the proof using PDEs is much easier to understand.
up vote 9 down vote
The Hodge theorem (each de Rham cohomology class on a compact Riemannian manifold has a unique harmonic representative) has a wide range of applications in complex algebraic geometry, much deeper than showing the finite-dimensionality of the cohomology. One of my favorite results that depend on the Hodge theorem is the Kodaira embedding theorem, which characterizes those compact complex manifolds that can be embedded holomorphically into projective space. See Griffiths-Harris. That a compact manifold has finite-dimensional cohomology groups can be shown in a more elementary way. I am sure this is somewhere in Bott-Tu's book.
up vote 9 down vote
The work of Uhlenbeck, Taubes, Donaldson, and others on Yang-Mills connections is a gorgeous application of nonlinear elliptic PDE theory.

Always a good thing to have an expert in the field a question is rooted in present to answer it, Deane. Me, I'm just trying to find the time to finish Evans' text on the subject... – Andrew L Oct 12 '10 at 7:35
up vote 8 down vote
PDEs are massively used in the theory of harmonic maps. My personal favourite is a nice theorem by Lemaire and Sacks-Uhlenbeck.

Theorem. Suppose $M$ is a compact Riemann surface, possibly with boundary, and $N \subset \mathbb{R}^n$ is compact. If $\pi_2(N) = 0$, then any map $u_0: M \to N$ is homotopic to a smooth harmonic map.

The key ingredient of the proof relies on existence and uniqueness of global weak "energy" solutions $u:\ M\times[0,\infty)\to N$ to a nonlinear Cauchy problem for the $L^2$-gradient flow
$$\begin{cases} u_t-\triangle_M u=A(u)(\nabla u,\nabla u)_M & \mbox{in }M\times[0,\infty),\\ u=u_0 & \mbox{at }t=0\mbox{ and on }\partial M\times[0,\infty),\end{cases}$$
which converge to a smooth harmonic map $u_{\infty}:\ M\to N$ as $t\to\infty$.

1 There's also a similar result for the Yang-Mills heat flow. – Willie Wong Oct 11 '10 at 17:39
up vote 7 down vote
Another not-quite-yet connection which I learned from Lax's Hyperbolic PDE book: one can, technically speaking, extract the Riemann hypothesis from the scattering rates of certain "automorphic waves". (This is where my knowledge ends; those interested can look at Chapter 9 of the book.)
up vote 7 down vote
The Nash isometric embedding theorem.
up vote 6 down vote
How about Hodge theory, i.e., that each de Rham cohomology class of a smooth compact manifold has a harmonic representative (one has to of course choose a Riemannian metric to make sense of harmonic). This for instance allows one to show that the Betti numbers of a compact manifold are all finite, and is the usual way to show this (the only way?).

A compact manifold is homeomorphic to a finite CW complex (take a Morse function, or a triangulation, etc.) and thus has finite Betti numbers. – Tom Church Mar 19 '12 at 18:07
up vote 5 down vote
Riemann's existence theorem, which states that every compact Riemann surface has a non-constant meromorphic function (and hence is an algebraic curve). Standard proofs use harmonic functions, i.e., solutions of the Laplace equation.
up vote 5 down vote
Some other results in Geometry that do not require reaching very far to see their connection to PDEs: the resolution of the Yamabe Problem, the proof of the Calabi Conjecture (now the Calabi-Yau theorem), and the proof of the Positive Energy Theorem. (I violate the 1 example per answer rule, since these three are all from geometry, and involve the same mathematician.)

Edit: As Deane pointed out below, I should be more precise about the attribution. A well known contributor to the solution of the three problems above is S.T. Yau. Others who have worked on those problems include Rick Schoen, who collaborated with Yau on the proof of the Positive Energy theorem and (hence) the Yamabe problem, and Thierry Aubin, who also contributed much to the understanding of the Yamabe Problem, as well as making significant progress toward the Calabi conjecture.

Edit 2: And of course, as Timur pointed out below, I inadvertently left out Neil Trudinger as one of the main contributors to the Yamabe problem. (One of the reasons I didn't want to be too precise on references in the beginning was to avoid mistakes like this.) Also please note that this is a Community Wiki article, so please feel free to just edit it to fix any insufficiencies you see.
2 Isn't that a little unfair? The Calabi conjecture is Aubin and Yau, but the other two are Schoen and Yau. – Deane Yang Oct 11 '10 at 16:14
1 That's why I said "involve" and not "due to". =) And large parts of Yamabe is also due to Aubin, no? But the "same mathematician" comment is more meant to illustrate how connected
geometry and PDE actually are (through geometric analysis, which the winds of fortune seems to prefer to classify nowadays as geometry, and not PDEs), that the division into "different
fields" is somewhat illusory. – Willie Wong Oct 11 '10 at 17:29
(In other words, that was suppose to be a subtle hint that the answer I gave above is not a very good answer at all, since I was able to cheat by thinking to myself: geometry and PDEs,
hum, what did Yau do?) Aldo, I deliberately left the name out because, if you knew who worked on all three, more likely you also knew who his co-authors/competitors were. If you didn't,
well, after looking it up it will be clear those were not one-man efforts. Maybe I should've written "the same mathematicians"? – Willie Wong Oct 11 '10 at 17:34
2 It's not a big deal, but I like to try to give explicit attribution as much as possible, because most people will not dig into these things carefully but they should be given some sense
of who were the main people involved. – Deane Yang Oct 11 '10 at 19:02
1 I would say Yau was not directly involved with the solution of the Yamabe problem. He is involved only in the sense that the Positive Mass Theorem is used by Schoen in the final solution. One must also mention Trudinger's contribution to the Yamabe problem; he pointed out the flaw in the original argument by Yamabe and repaired the proof for some cases. – timur Oct 12 '10 at 4:53
up vote 5 down vote
Some other probability-PDE techniques:
1) Percolation: The Aizenman-Barsky proof of exponential decay in subcritical percolation hinged on establishing a number of differential inequalities.
2) Conformal Invariance and SLE: Many conformal invariance proofs reduce to showing that the discrete stochastic process in question satisfies a Riemann-Hilbert boundary value problem, along with defining a flow on the state space which is divergence- and curl-free. This makes it clear how Cardy's formula arises as the hypergeometric function which solves the appropriate differential equation.
up vote 4 down vote
The work of Meeks and Yau using minimal surfaces is a beautiful application of nonlinear elliptic PDEs to low-dimensional topology.
up vote 2 down vote
The elliptic regularity theorem can be used to establish the classical result that holomorphic (and harmonic) functions are $C^\infty$.

2 Two comments (don't take me too seriously, this is all in good fun): (a) See Gerald Edgar's answer. (b) When we loop all the way back to Analysis as an application of PDEs to a different field, something should be said about how fragmented knowledge in mathematics has become. – Willie Wong Oct 12 '10 at 10:40
Great! Since this has somehow bubbled to the top, I have yet-another occasion for a not-entirely-selfish rant! :)
Although @WillieWong's answer was a few years ago, it does point (if a bit obscurely) in a significant direction, for example. Immediately, one should note that there is no known (to me,
and ... I care) proof of RH that immediately/simply uses PDE ideas on modular curves.
But those ideas, from Haas, Hejhal, Faddeev, Colin-de-Verdiere, and Lax-Phillips, do solidly establish a connection of spectral properties of (certain "customized" extensions of)
Laplacians on modular curves and related canonical objects.
A more obviously legit example of interaction of number-theoretic "physical objects" and "PDE" was Milnor's example of two tori with the same spectrum for the Laplacian. But, yes, that
wasn't really about PDE.
A problem with reporting modern relevance of PDE to number theory is that there are "complicating" additions, ... :) ... often under-the-radar amounting to things stronger than Lindelof
Hypothesis, if not actually the Riemann Hypothesis.
Rather than recap things better documented elsewhere (many peoples' arXiv preprints, my own vignettes and talks various places, ...) please forgive my returning to the homily that "PDE" are merely assertions of relations, as Newton intuited for the planets, and others have observed/inferred for many more things.
(Any hysterically provincial remarks about turf, or "specialties-as-ignorance-of-other" are obviously toxic... despite their dangerous prevalence and popularity...)
Thus, srsly, people, "PDE" means "a kind of condition on functions..." ... If people weren't so caught up in ... oop, sorry, the kind of people do get caught up in... :) ... it'd be
obvious that "infinitesimal" conditions would be natural...
Thus, an explanatory but not really useful answer is, that we seem to see that This is not related to That because the respective proprietores have no vested interest in letting on that
anyone else could ... perform their guild's function.
(Yes, it is informative to review Europe's late-renaissance guild-culture...)
And, as in many rants, I wish to reassure everyone by my disclaimer, "wait, what was the question, ... again...? " :)
(But, yes, this is a serious-and-important issue, in many ways, so, yeah, just some kidding-at-the-end.)
up vote 1 down vote
Graph theory, e.g. http://arxiv.org/abs/math/0009120
up vote 1 down vote
Reilly used PDEs to give a very elegant proof that spheres are the only embedded hypersurfaces of constant mean curvature in $\mathbb{R}^n$. Let $\Sigma$ be such a hypersurface, bounding a region $\Omega$. He showed that any solution of the PDE $\Delta u = -1$ in $\Omega$ with $u=0$ on $\Sigma$ must be a second-order polynomial with leading term proportional to $|x|^2$. One sees that level sets of this function are spheres by completing the square.
up vote 1 down vote
The $\bar{\partial}$ and $\bar{\partial}$-Neumann problems, which are of importance in integrable systems and complex analysis.
up vote 0 down vote
Dynamical systems. Roughly speaking, a dynamical system $\dot{x} = a(x)$ is stable if and only if the 1st-order linear partial differential equation $\mathcal{L}_a v + \ell = 0$ has a positive solution $v$. Here $v$ is called a Lyapunov function for the system, $\mathcal{L}_a$ is the Lie derivative along $a$, and $\ell > 0$ has to be chosen so that the equation has a solution but is otherwise arbitrary.
|
{"url":"http://mathoverflow.net/questions/41771/pdes-as-a-tool-in-other-domains-in-mathematics/41825","timestamp":"2014-04-16T13:32:55Z","content_type":null,"content_length":"124799","record_id":"<urn:uuid:6edc5542-e204-4907-911d-6cadb167c2b6>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Short Simple Continued Fractions
Play with the continued fraction:
Notice that the value increases with the even-indexed partial quotients and decreases with the odd-indexed ones.
The numbers π, ϕ, , are given by the sequences .
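For reference (a standard fact, not taken from the Demonstration's own text): writing a short simple continued fraction as

$$x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3}}},$$

increasing $a_0$ or $a_2$ increases $x$, while increasing $a_1$ or $a_3$ decreases $x$, because each level of the fraction reverses the direction of the dependence.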
"Short Simple Continued Fractions" from the Wolfram Demonstrations Project
Contributed by: George Beck
|
{"url":"http://demonstrations.wolfram.com/ShortSimpleContinuedFractions/","timestamp":"2014-04-18T03:02:58Z","content_type":null,"content_length":"42674","record_id":"<urn:uuid:21a55f63-3aad-4c73-90bd-f13ecf77e568>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Hempstead Math Tutor
...Specialized program for students with learning differences. Tutoring for home-schooled individuals. Enrichment instruction for parents who feel that average is just not enough.
30 Subjects: including prealgebra, geometry, reading, statistics
...I accomplish this by focusing on improving a student's problem solving ability, a skill that is not often taught well in school. I have worked with students with learning disabilities as well
as gifted students taking advanced classes or classes beyond their grade level. I enjoy helping students achieve their maximum potential!
34 Subjects: including algebra 1, algebra 2, calculus, ACT Math
...I have been playing guitar for over 14 years and I compose and perform regularly. I have taught private guitar lessons to students of all ages. When I'm not thinking about music or math I love
reading Philosophy and playing Halo 4.Algebra becomes second nature when you start doing upper-level mathematics.
22 Subjects: including calculus, composition (music), ear training, precalculus
I hold a bachelor's degree in microbiology, a master's degree in biochemistry and molecular biology, and have five years of laboratory experience in medical centers like UT Southwestern and Mount Sinai Medical Center. These experiences have helped me to understand the subject matter in depth. As a ...
16 Subjects: including algebra 1, chemistry, trigonometry, SAT math
...I'll make sense out of it for you or your child. You won't go wrong with me as a tutor, especially if you work with me long term. Content knowledge is key to passing tests, but there are also
techniques that can optimize test taking, and I can work with you on those if that is the priority.
12 Subjects: including calculus, geometry, guitar, probability
|
{"url":"http://www.purplemath.com/west_hempstead_ny_math_tutors.php","timestamp":"2014-04-21T11:00:32Z","content_type":null,"content_length":"23907","record_id":"<urn:uuid:0a508b17-303a-4011-a52b-f23da0bda029>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In August of last year, mathematician Shinichi Mochizuki reported that he had solved one of the great puzzles of number theory: the ABC conjecture (previously on Metafilter). Almost a year later, no one else knows whether he has succeeded. No one can understand his proof. posted by painquale on May 10, 2013 - 59 comments
Did you know that you can create a simple set of directions to your house that works no matter where the recipient starts from? After 38 years this remarkable conjecture has now been proved by a 63-year old former security guard. posted by unSane on Mar 21, 2008 - 46 comments
|
{"url":"http://www.metafilter.com/tags/math+proof","timestamp":"2014-04-17T13:56:42Z","content_type":null,"content_length":"27515","record_id":"<urn:uuid:ced40611-0b00-4d3a-af16-5381d021fda9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Geometry Textbook Suggestions for Gifted Math Program?
Replies: 9 Last Post: Mar 14, 2005 12:57 PM
Messages: [ Previous | Next ]
Geometry Textbook Suggestions for Gifted Math Program?
Posted: Dec 28, 2004 8:59 PM
Hi, everyone,
I wonder what you would suggest as a best textbook for a first
high-school-level geometry course for talented students? I'm trying to
gather information to make a suggestion to a local mathematics program.
I am aware of one program that formerly used Gene Murrow and Serge
Lang's textbook Geometry (Springer-Verlag) and in recent years has used
Michael Serra's textbook Discovering Geometry (Key Curriculum Press).
Another textbook I have never seen, but have heard of being used at
gifted magnet schools, is Geometry for Enjoyment and Challenge (McDougal Littell), which has VERY MIXED reviews on Amazon.com.
What do you think? If you were recommending a textbook for a class of
students selected for high math ability taking a first secondary-level
geometry course, what would you recommend? What do you like about your
preferred textbook?
Karl M. Bunday P.O. Box 1456, Minnetonka MN 55345
Learn in Freedom (TM) http://learninfreedom.org/
remove ".de" to email
submissions: post to k12.ed.math or e-mail to k12math@k12groups.org
private e-mail to the k12.ed.math moderator: kem-moderator@k12groups.org
newsgroup website: http://www.thinkspot.net/k12math/
newsgroup charter: http://www.thinkspot.net/k12math/charter.html
Date Subject Author
12/28/04 Geometry Textbook Suggestions for Gifted Math Program? Karl M. Bunday
12/29/04 Re: Geometry Textbook Suggestions for Gifted Math Program? Nat Silver
12/29/04 Re: Geometry Textbook Suggestions for Gifted Math Program? Karl M. Bunday
12/29/04 Re: Geometry Textbook Suggestions for Gifted Math Program? Nat Silver
12/29/04 Re: Geometry Textbook Suggestions for Gifted Math Program? Sky Rookie
12/29/04 Re: Geometry Textbook Suggestions for Gifted Math Program? Karl M. Bunday
12/30/04 Re: Geometry Textbook Suggestions for Gifted Math Program? Sky Rookie
1/7/05 Re: Geometry Textbook Suggestions for Gifted Math Program? Gideon
1/19/05 Re: Geometry Textbook Suggestions for Gifted Math Program? Bill Dubuque
3/14/05 Re: Geometry Textbook Suggestions for Gifted Math Program? Tim Perry
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1088439","timestamp":"2014-04-17T07:40:55Z","content_type":null,"content_length":"28346","record_id":"<urn:uuid:a47035c1-ddc0-4047-95e6-95d19a01c0c5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computer Science Archive | August 10, 2011 | Chegg.com
Computer Science Archive: Questions from August 10, 2011
• SeriousMailbox7383 asked
0 answers
• Anonymous asked
1 answer
• RustyElbow46 asked
0 answers
• GreenLunch7790 asked
0 answers
• MellowKnight1882 asked
0 answers
• ValuableBacon316 asked
1 answer
• ValuableBacon316 asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
You are to design a program that will allow some number of grades (up to a max of 100) to be input by the user. After the data has been collected, your program should calculate and output the
mean and median of the collected data, as well as the sorted grade information.
Design Constraints
1. Use an integer constant of 100 to specify the number of elements in the array you will use to collect the grade information.
2. Do not use any global variables in your program.
3. Declare any arrays you need in your main function and pass the arrays as needed into the functions described below.
4. The main function is the only function permitted to do any output to the console!!! Do not do cout operations inside of any other function.
5. Your data collection loop in your main function must allow the user to enter less than 100 grades. It must also make sure that the user does not try to enter more than 100 grades.
6. Each data value entered should be checked to make sure it is between 0 and 100. Any other value entered should be considered invalid and ignored (ie. not counted as a valid input and not
stored in an array).
7. Once the data is collected, the array and the number of grades collected must be passed to a function called mean.
8. The mean function must loop through the values in the array, summing them together. The result of the function is the sum divided by the number of grades collected. The result must be returned
from the mean function to the main function, where is it output in an appropriate manner (two digits after the decimal point).
9. The main function should then pass the array and the number of grades collected to the median function.
10. The median of a set of numbers is the number in the set where half the numbers are above it and half the numbers are below it. In order to find the median, this function will need to sort the
original data.
11. The simplest sorting procedure is called bubble sorting. The following pseudocode describes bubble sorting for X valid array elements.
for outer = 0; outer < X; outer++
    for inner = 0; inner < X-1; inner++
        if array[inner] > array[inner+1]
            swap(array[inner], array[inner+1]);
12. After the data has been sorted, the median value can be found. If the array has an odd number of elements the median is the value of the middle element (Hint: arraySize/2 is the middle
element). If the array has an even number of elements then the median is the average of the middle two elements (Hint: arraySize/2 and ( arraySize/2) - 1 are the two middle elements). The median
value should be returned by the median function.
13. The main routine should output the median value in an appropriate manner.
14. The main routine should also output the sorted array with 5 grades per line.
15. Carefully develop test cases for your program. Most of your test cases do not need to contain lots of values. Make sure to include incorrect inputs such as negative grade values. Calculate
what your mean and median values should be for your test cases. Document your test cases in a Word document.
16. Run your test cases with your program to see if your program generates the expected output. If not, troubleshoot your program and fix the problem. When your program executes a test case
correctly, take a screen shot of the program output and paste it into your Word document to prove that your test case executed correctly with your program.
17. Make sure that your code is properly formatted! You also need to make sure you include a comment block for each function which documents the purpose, inputs, and outputs of each function!
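A rough sketch of the two required helper functions, written to match the constraints above (this is an illustration, not a complete submission; input collection, validation, and all console output in main are omitted):

const int MAX_GRADES = 100;

// Returns the average of the first 'count' grades (spec item 8).
double mean(const int grades[], int count) {
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += grades[i];
    return sum / count;
}

// Bubble-sorts the grades in place, then returns the median (spec items 10-12).
double median(int grades[], int count) {
    for (int outer = 0; outer < count; outer++)
        for (int inner = 0; inner < count - 1; inner++)
            if (grades[inner] > grades[inner + 1]) {
                int tmp = grades[inner];           // swap adjacent out-of-order pair
                grades[inner] = grades[inner + 1];
                grades[inner + 1] = tmp;
            }
    if (count % 2 == 1)
        return grades[count / 2];                              // odd: middle element
    return (grades[count / 2] + grades[count / 2 - 1]) / 2.0;  // even: average of middle two
}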
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• SeriousMailbox7383 asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• InfamousBandana6908 asked
Using the PhoneData class below, overload the ostream operator (<<). You need to make this a friend function of the PhoneData class.
For example, in your PhoneData header file under the public section put this:
friend ostream& operator<<(ostream&, const PhoneData&);
Then define it in the implementation file.
To test your class create a main file that does the following:
Creates 2 PhoneData objects and assigns values to data members.
For example, when creating your objects do the following:
PhoneData pd1("Susan Meyers", "414 444-4444");
PhoneData pd2("Joy Rodgers", "888 888-8888");
Then use the overloaded ostream function to display the values. For example:
cout << pd1;
cout << pd2;
Attached is an example of the program's output.
Files to turn in:
PhoneData.h // phoneData header file
PhoneData.cpp // phoneData implementation file
PhoneDataMain.cpp // main file
Header File:
// PhoneData.h
#ifndef PHONE_DATA_H
#define PHONE_DATA_H
#include <iostream>
#include <string>
using namespace std;
/******* Class PhoneData Specification ********/
class PhoneData
{
private:
    string name;
    string phoneNumber;
public:
    // 2 parameter constructor
    PhoneData(string, string);
    // default constructor
    PhoneData();
    // mutator functions
    void setName(string);
    void setPhoneNumber(string);
    // accessor functions
    string getName() const;
    string getPhoneNumber() const;
};
#endif
Implementation File
// PhoneData.cpp
#include "PhoneData.h"

// no argument constructor
PhoneData::PhoneData()
{
    // initializes both data members to empty strings
    name = "";
    phoneNumber = "";
}

// two argument constructor
PhoneData::PhoneData(string name, string number)
{
    // initializes both variables with appropriate passed arguments
    this->name = name;
    phoneNumber = number;
}

// setter method for name
void PhoneData::setName(string name)
{
    this->name = name;
}

// setter method for phoneNumber
void PhoneData::setPhoneNumber(string number)
{
    this->phoneNumber = number;
}

// getter method for name
string PhoneData::getName() const
{
    return name;
}

// getter method for PhoneNumber
string PhoneData::getPhoneNumber() const
{
    return phoneNumber;
}
Main File:
// PhoneDataMain.cpp
#include "PhoneData.h"

// main function
int main()
{
    // two PhoneData objects
    PhoneData pd1("Susan Meyers", "414 444-4444");
    PhoneData pd2("Joy Rodgers", "888 888-8888");

    // displaying objects' details using getter functions
    cout << "\nPhone Data 1: " << endl;
    cout << "\nName: " << pd1.getName() << endl;
    cout << "Phone Number: " << pd1.getPhoneNumber() << endl;
    cout << "\n\nPhone Data 2: " << endl;
    cout << "\nName: " << pd2.getName() << endl;
    cout << "Phone Number: " << pd2.getPhoneNumber() << endl;
    return 0;
} //end of the main
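One possible sketch of the requested friend function (an illustrative answer, not code from the question itself):

// In PhoneData.h, inside the class body, add:
//     friend ostream& operator<<(ostream&, const PhoneData&);
// Then in PhoneData.cpp:
ostream& operator<<(ostream& out, const PhoneData& pd)
{
    // Friend status lets us read the private members directly.
    out << "Name: " << pd.name << endl;
    out << "Phone Number: " << pd.phoneNumber << endl;
    return out;
}

With this in place, main can simply write cout << pd1; and cout << pd2; as the assignment asks.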
1 answer
• Anonymous asked
2 answers
• Anonymous asked
1 answer
• Anonymous asked
Write a static method named count that accepts an array of integers and a target integer value as parameters and returns the number of occurrences of the target value in the array.
For example, if a variable named list refers to an array containing values {3, 5, 2, 1, 92, 38, 3, 14, 5, 73, 5} then the call of count(list, 3) should return 2 because there are 2 occurrences of
the value 3 in the array.
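A straightforward sketch (written in C++ for consistency with the other examples here, though the "static method" wording suggests the course expects Java):

// Returns the number of occurrences of 'target' among the first
// 'length' elements of 'list'.
int count(const int list[], int length, int target) {
    int occurrences = 0;
    for (int i = 0; i < length; i++)
        if (list[i] == target)
            occurrences++;
    return occurrences;
}

For the example above, count(list, 11, 3) returns 2.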
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• PlainCarrot9644 asked
1 answer
• PlainCarrot9644 asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
3 answers
• WhisperingFrog1202 asked
-fillArray-ask user for total names to be entered. Check to ensure total to be entered is not greater than MAX. If total is not greater than MAX, the function prompts the user to enter a name (no spaces, all lower case guaranteed) and a 4 digit phone number (guaranteed). The function returns the total number of entries.
-sortArray-sorts the array by name 1st. If two names are same, then sorts 1st by phone number.
-menu-prompts the user for menu selection. Menu selection is
1) add 2) find 3) print 4) delete 5) quit. This function ensures the number entered is between 1 and 5 inclusive. This function returns menu choice.
-add-checks to see if info can be added to array. If info can be added, user is prompted for a name(no spaces, all lower case, guaranteed), a 4 digit phone number(guaranteed), and it updates the
total. If info can't be added then appropriate error message is displayed.
-find-prompts user for a name(no spaces, all lower case, guaranteed)-the function then searches through the array and displays all names found. If the name is not in the array the appropriate
error message is displayed.
-print-prints all names in the array in the following way:
Name: the name
Phone: the number
Here is the main for the program and I will be working on it too so maybe we can compare notes too.
#include <stdio.h>

struct final
{
    char name[100];
    int phone;
};
typedef struct final Final;

#define MAX 15

int fillArray(Final array[]);
void sortArray(Final array[], int total);
int menu();
void add(Final array[], int * total);
void find(Final array[], int total);
void print(Final array[], int total);

int main()
{
    Final array[MAX];
    int choice;
    int total;

    total = fillArray(array);
    sortArray(array, total);
    do
    {
        choice = menu();
        switch(choice)
        {
            case 1: add(array, &total);
                break;
            case 2: find(array, total);
                break;
            case 3: print(array, total);
                break;
            case 4: delete(array, &total);
                break;
            default: printf("Thanks\n");
        }// end switch
    }while(choice != 5);
    return 0;
}// end main
Here is what I have done so far if someone can help me fill in the gaps
#include <stdio.h>
#include <string.h>

#define MAX 15

struct final
{
    char name[100];
    int phone;
};
typedef struct final Final;

int fillArray(Final array[]);
void sortArray(Final array[], int total);
int menu();
void add(Final array[], int * total);
void find(Final array[], int total);
void print(Final array[], int total);
void delete(Final array[], int* total);

int main()
{
    Final array[MAX];
    int choice;
    int total;

    total = fillArray(array);
    sortArray(array, total);
    do
    {
        choice = menu();
        switch(choice)
        {
            //case 1: add(array, &total);
            //    break;
            //case 2: find(array, total);
            //    break;
            case 3: print(array, total);
                break;
            // Extra Credit
            //case 4: delete(array, &total);
            //    break;
            default: printf("Thanks\n");
        }// end switch
    }while(choice !=5);
    return 0;
}// end main
//fills array
int fillArray(Final array[])
{
    int numberNames;
    int phoneNumber;
    char temp[100];  // large enough to hold a name
    int i;

    printf("How many names and numbers would you like to enter? ");
    scanf("%i", &numberNames);
    if(numberNames <= MAX)  // condition implied by the original "end if" comment
    {
        for(i=0; i < numberNames; i++)
        {
            printf("Please enter the name (no spaces,all lower case) ");
            scanf("%s", temp);
            strcpy(array[i].name, temp);
            printf("Please enter a 4 digit phone number ");
            scanf("%i", &phoneNumber);
            array[i].phone = phoneNumber;  // store the number (missing in the original post)
        }//end for loop
    }//end if
    return numberNames;
}//end fill array
//sorts Array
void sortArray(Final array[], int total)
{
    int i,j;
    char temp[MAX];
    for(i=0; i<total; i++)
        for(j=0; j<total; j++)
            if(array[i].name>array[j].name)  // note: C strings need strcmp, not >
            {
                // swap body missing in the original post
            }//End For
}//End Function
int menu()
{
    int choice;
    do
    {
        printf("What would you like to do?\n");
        printf("1. Add a new person and phone number to the book.\n");
        printf("2. Enter the name of the person with no spaces in all lower case letters.\n");
        printf("3. Print all of the names and numbers in the directory.\n");
        printf("4. Delete a name from the book.\n");
        printf("5. Quit the program.\n");
        printf("your choice--> ");
        scanf("%i", &choice);
    }while(choice < 1 || choice > 5);//end while
    return choice;
}//end menu
//adds to array
/*void add(Final array[], int * total)
{
    int len = strlen(array[x].name);
    char temp[100];
    int x, value;
    printf("\nPlease enter the name you wish to add with no spaces and all lower case.");
    fgets(temp, 100, stdin);
    scanf("%s ", temp);
    for(x=0; x<len; x++)
        scanf("%c ", &temp[x]);
    printf("\nPlease enter %i characters.", 15-len);
    fgets(temp, (15-len), stdin);
    printf("\nArray is full.");
}//end add*/
//finds something in the array
/*void find(Final array[], int total)
{
    char x;
    char temp;
    printf("\nPlease enter the name you wish to find(no spaces, all lower case.");
    scanf("%c", &temp);
    for(x=0; x < strlen(array); x++)
    {
        if(array[x]==temp)
            printf("\nFound %c\n", array[x]);
        else if(array[x]!=temp)
        {
            printf("\nThat name does not appear in the phone book sorry.");
            printf("\nPlease enter a new name(no spaces, all lower case.");
            scanf("%c", &temp);
        }
    }
}//end find*/
//prints the array to the screen
void print(Final array[], int total)
{
    int x;
    for(x=0; x<total; x++)
    {
        printf("Name: %s\n", array[x].name);
        printf("Phone: %i\n", array[x].phone);
    }//end for loop
}//end print
//deletes something in the array
/*void delete(array, &total)
int x,y,total;
for(x=0; x<total; x++)
for(y=0; y<total; y++)
• Show less
1 answer
• WhisperingFrog1202 asked
-fillArray-ask user for total names to be entered. Check to ensure total to be entered is not greater than MAX. If total is not greater than MAX, the function prompts the user to enter a name (no spaces, all lower case guaranteed) and a 4 digit phone number (guaranteed). The function returns the total number of entries.
-sortArray-sorts the array by name 1st. If two names are same, then sorts 1st by phone number.
-menu-prompts the user for menu selection. Menu selection is
1) add 2) find 3) print 4) delete 5) quit. This function ensures the number entered is between 1 and 5 inclusive. This function returns menu choice.
-add-checks to see if info can be added to array. If info can be added, user is prompted for a name(no spaces, all lower case, guaranteed), a 4 digit phone number(guaranteed), and it updates the
total. If info can't be added then appropriate error message is displayed.
-find-prompts user for a name(no spaces, all lower case, guaranteed)-the function then searches through the array and displays all names found. If the name is not in the array the appropriate
error message is displayed.
-print-prints all names in the array in the following way:
Name: the name
Phone: the number
Here is the main for the program and I will be working on it too so maybe we can compare notes too.
#include <stdio.h>

struct final
{
    char name[100];
    int phone;
};
typedef struct final Final;

#define MAX 15

int fillArray(Final array[]);
void sortArray(Final array[], int total);
int menu();
void add(Final array[], int * total);
void find(Final array[], int total);
void print(Final array[], int total);

int main()
{
    Final array[MAX];
    int choice;
    int total;

    total = fillArray(array);
    sortArray(array, total);
    do
    {
        choice = menu();
        switch(choice)
        {
            case 1: add(array, &total);
                break;
            case 2: find(array, total);
                break;
            case 3: print(array, total);
                break;
            case 4: delete(array, &total);
                break;
            default: printf("Thanks\n");
        }// end switch
    }while(choice != 5);
    return 0;
}// end main
#include <stdio.h>
#include <string.h>

#define MAX 15

struct final
{
    char name[100];
    int phone;
};
typedef struct final Final;

int fillArray(Final array[]);
void sortArray(Final array[], int total);
int menu();
void add(Final array[], int * total);
void find(Final array[], int total);
void print(Final array[], int total);
void delete(Final array[], int* total);

int main()
{
    Final array[MAX];
    int choice;
    int total;

    total = fillArray(array);
    sortArray(array, total);
    do
    {
        choice = menu();
        switch(choice)
        {
            case 1: add(array, &total);
                break;
            case 2: find(array, total);
                break;
            case 3: print(array, total);
                break;
            case 4: delete(array, &total);
                break;
            default: printf("Thanks\n");
        }// end switch
    }while(choice !=5);
    return 0;
}// end main
//fills array
int fillArray(Final array[])
{
    int numberNames;
    int phoneNumber;
    char temp[100];  // large enough to hold a name
    int i;

    printf("How many names and numbers would you like to enter? ");
    scanf("%i", &numberNames);
    if(numberNames <= MAX)  // condition implied by the original "end if" comment
    {
        for(i=0; i < numberNames; i++)
        {
            printf("Please enter the name (no spaces,all lower case) ");
            scanf("%s", temp);
            strcpy(array[i].name, temp);
            printf("Please enter a 4 digit phone number ");
            scanf("%i", &phoneNumber);
            array[i].phone = phoneNumber;  // store the number (missing in the original post)
        }//end for loop
    }//end if
    return numberNames;
}//end fill array
//sorts Array
/*void sortArray(Final array[], int total)
{
    int i,j;
    char temp[MAX];
    for(i=0; i<total; i++)
        for(j=0; j<total; j++)
            if(array[i].name>array[j].name)  // note: C strings need strcmp, not >
            {
                // swap body missing in the original post
            }//End For
}//End Function*/
int menu()
{
    int choice;
    do
    {
        printf("What would you like to do?\n");
        printf("1. Add a new person and phone number to the book.\n");
        printf("2. Enter the name of the person with no spaces in all lower case letters.\n");
        printf("3. Print all of the names and numbers in the directory.\n");
        printf("4. Delete a name from the book.\n");
        printf("5. Quit the program.\n");
        printf("your choice--> ");
        scanf("%i", &choice);
    }while(choice < 1 || choice > 5);//end while
    return choice;
}//end menu
//definition to add
void add(Final phBook[], int * total)
{
    printf("Enter a name( no spaces, all lower case guaranteed): ");
    printf("Enter 4 digit phone number(guaranteed): ");
    // input and storage not yet written in the original post
}//end add
///definition to find
void find(Final phBook[], int total)
{
    char name[100];
    int found=0;
    int i;
    printf("Enter a name( no spaces, all lower case): ");
    scanf("%s", name);
    for(i=0; i<total; i++)  // loop reconstructed; these lines were lost in the original post
    {
        if(strcmp(phBook[i].name, name)==0)
        {
            printf("Name: %s",phBook[i].name);
            printf("\nPhone Number: %d\n",phBook[i].phone);
            found=1;
        }
    }
    if(!found)
        printf("Name doesn't exist in the array");
}//end find
//prints the array to the screen
void print(Final array[], int total)
{
    int x;
    for(x=0; x<total; x++)
    {
        printf("Name: %s\n", array[x].name);
        printf("Phone: %i\n", array[x].phone);
    }//end for loop
}//end print
//deletes something in the array
void delete(Final array[], int* total)
{
    int x, y, catdog;
    for(x=0; x< catdog; x++)
        for(y=0; y<catdog; y++)
            ;  // body not yet written in the original post
}
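One possible completion of the sortArray the poster left commented out (a sketch; it uses strcmp because C strings cannot be ordered with >, and the field names follow the poster's struct):

#include <string.h>

// Bubble-sort by name first; break ties by phone number, per the spec.
void sortArray(Final array[], int total)
{
    int i, j;
    for (i = 0; i < total - 1; i++)
        for (j = 0; j < total - 1 - i; j++)
        {
            int cmp = strcmp(array[j].name, array[j + 1].name);
            if (cmp > 0 || (cmp == 0 && array[j].phone > array[j + 1].phone))
            {
                Final temp = array[j];       // swap whole records
                array[j] = array[j + 1];
                array[j + 1] = temp;
            }
        }
}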
1 answer
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• Anonymous asked
1 answer
• home44 asked
1 answer
• Anonymous asked
2 answers
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2011-august-10","timestamp":"2014-04-18T19:42:04Z","content_type":null,"content_length":"90712","record_id":"<urn:uuid:c9966405-e9cc-4183-b3a6-cae2ed4557df>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding the intersecting node from two intersecting linked lists
up vote 17 down vote favorite
Suppose there are two singly linked lists both of which intersect at some point and become a single linked list.
The head or start pointers of both the lists are known, but the intersecting node is not known. Also, the number of nodes in each of the lists before they intersect is unknown, and the two lists may differ: List1 may have n nodes before it reaches the intersection point and List2 might have m nodes before it reaches the intersection point, where m and n may be different.
One known, easy solution is to compare every node pointer in the first list with every node pointer in the second list; the matching node pointers will lead us to the intersecting node. But the time complexity in this case will be O(n^2), which is high.
What is the most efficient way of finding the intersecting node?
c algorithm linked-list
4 Answers
up vote 29 down vote
This takes O(M+N) time and O(1) space, where M and N are the total lengths of the linked lists. Maybe inefficient if the common part is very long (i.e. M,N >> m,n).
1. Traverse the two linked lists to find M and N.
2. Get back to the heads, then traverse |M − N| nodes on the longer list.
3. Now walk in lock step and compare the nodes until you find the common one. (A sketch of this approach appears after the comments below.)
Edit: See http://richardhartersworld.com/cri/2008/linkedlist.html
I like this! +1. – j_random_hacker Feb 7 '10 at 18:16
Accepting this answer as this doesn't need modification of the list and also does not eat extra space. But, I am still wondering if there aren't any better solutions than
this. Anyways, Thanks a lot for your reply and others also. – Jay Feb 12 '10 at 14:18
Doesn't the answer by @Jakob Borg incur one less round of iteration that calculating the difference in lengths of the two linked lists? – user1071840 Mar 17 '13 at 0:01
@user1071840: Yes, the difference is that Jakob's answer needs O(N) extra space to store the colors. – KennyTM Mar 17 '13 at 6:46
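A sketch of this length-difference approach in C++ (the node type and function names are invented for illustration):

struct Node { int data; Node* next; };

// Counts the nodes in a list.
static int length(const Node* head) {
    int n = 0;
    for (; head != nullptr; head = head->next)
        n++;
    return n;
}

// Returns the first node common to both lists, or nullptr if they never
// intersect. O(M+N) time, O(1) extra space.
Node* findIntersection(Node* a, Node* b) {
    int lenA = length(a);
    int lenB = length(b);
    while (lenA > lenB) { a = a->next; lenA--; }  // step 2: skip the extra
    while (lenB > lenA) { b = b->next; lenB--; }  //         nodes of the longer list
    while (a != b) {                              // step 3: lock-step walk
        a = a->next;
        b = b->next;
    }
    return a;  // nullptr when the lists are disjoint
}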
up vote 9 down vote
If possible, you could add a 'color' field or similar to the nodes. Iterate over one of the lists, coloring the nodes as you go. Then iterate over the second list. As soon as you reach a node that is already colored, you have found the intersection.
up vote 3 down vote
Dump the contents (or addresses) of both lists into one hash table. The first collision is your intersection.
If there are 2 linked lists like 1-2-3-4-3-5 and 9-8-3-5, and the 2nd linked list intersects the 1st one at the second 3, then a hash table of values will clash at the first position of 3. – Pritam Karmakar Aug 13 '10 at 5:36
one linked list can't contain 3 twice unless it's already twisted into a "6" instead of a linerar list. – ddyer Aug 17 '10 at 17:15
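Storing node addresses rather than values sidesteps the duplicate-value objection above; a C++ sketch reusing the Node type from the earlier example:

#include <unordered_set>

Node* findIntersectionByHash(Node* a, Node* b) {
    std::unordered_set<const Node*> seen;
    for (; a != nullptr; a = a->next)
        seen.insert(a);                 // record every node of the first list
    for (; b != nullptr; b = b->next)
        if (seen.count(b) != 0)
            return b;                   // first node that also appears in list A
    return nullptr;
}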
up vote 2 down vote
Check the last nodes of each list. If there is an intersection, their last nodes will be the same.
The question is intended to find the intersecting node. – JavaDeveloper Aug 25 '13 at 22:03
|
{"url":"http://stackoverflow.com/questions/2216666/finding-the-intersecting-node-from-two-intersecting-linked-lists","timestamp":"2014-04-23T09:46:32Z","content_type":null,"content_length":"84501","record_id":"<urn:uuid:2ff35a60-fe78-4254-96a8-fb2cad525bcb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The extreme eigenvalues and stability of real symmetric interval matrices
, 2000
"... The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control
theory, and (b) to survey the relatively recent research activity lying at the interface between these fi ..."
Cited by 116 (21 self)
Add to MetaCart
The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control
theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of
undecidability, polynomial time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them
classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems
related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
, 1997
In this paper, the deterministic global optimization algorithm, αBB, (α-based Branch and Bound) is presented. This algorithm offers mathematical guarantees for convergence to a point arbitrarily
close to the global minimum for the large class of twice-differentiable NLPs. The key idea is the construction of a converging sequence of upper and lower bounds on the global minimum through the
convex relaxation of the original problem. This relaxation is obtained by (i) replacing all nonconvex terms of special structure (i.e., bilinear, trilinear, fractional, fractional trilinear,
univariate concave) with customized tight convex lower bounding functions and (ii) by utilizing some α parameters as defined by Maranas and Floudas (1994b) to generate valid convex underestimators
for nonconvex terms of generic structure. In most cases, the calculation of appropriate values for the α parameters is a challenging task. A number of approaches are proposed, which rigorously
generate a set of α par...
- Journal of Global Optimization, 1996
In order to generate valid convex lower bounding problems for nonconvex twice-differentiable optimization problems, a method that is based on second-order information of general twice-differentiable functions is presented. Using interval Hessian matrices, valid lower bounds on the eigenvalues of such functions are obtained and used in constructing convex underestimators. By solving several nonlinear example problems, it is shown that the lower bounds are sufficiently tight to ensure satisfactory convergence of the αBB, a branch and bound algorithm which relies on this underestimation procedure [3]. Key words: convex underestimators; twice-differentiable; interval analysis; eigenvalues.
1. Introduction: The mathematical description of many physical phenomena, such as phase equilibrium, or of chemical processes generally requires the introduction of nonconvex functions. As the number of local solutions to a nonconvex optimization problem cannot be predicted a priori, the identifi...
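As a rough, hedged illustration of this kind of bound (not the exact algorithm of the paper), one classical result for a symmetric interval matrix [Ac - D, Ac + D] is that every eigenvalue lies in [lambda_min(Ac) - rho(D), lambda_max(Ac) + rho(D)], where rho is the spectral radius of the nonnegative radius matrix D. A minimal Python sketch:

import numpy as np

def symmetric_interval_eig_bounds(A_center, A_radius):
    """Enclose all eigenvalues of symmetric matrices lying (entrywise) in
    [A_center - A_radius, A_center + A_radius]. Uses the classical bound
    lambda_min(Ac) - rho(D), lambda_max(Ac) + rho(D); assumes A_center and
    A_radius are symmetric and A_radius is entrywise nonnegative."""
    lam = np.linalg.eigvalsh(A_center)                    # eigenvalues of the center
    rho = np.max(np.abs(np.linalg.eigvalsh(A_radius)))    # spectral radius of D
    return lam[0] - rho, lam[-1] + rho

Ac = np.array([[4.0, 1.0], [1.0, 2.0]])
D = np.array([[0.1, 0.2], [0.2, 0.1]])    # hypothetical entrywise radii
print(symmetric_interval_eig_bounds(Ac, D))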
, 1998
Following a polynomial approach to control design, the co-stabilization by a fixed controller of a family of SISO linear systems is interpreted as an LMI feasibility problem with a rank-one constraint. An LMI relaxation algorithm and a potential reduction heuristic are then proposed for addressing this key optimization problem. This work was supported by the Barrande Project No. 97/005-97/026, by the Grant Agency of the Czech Republic under contract No. 102/97/0861, by the Ministry of Education of the Czech Republic under contract No. VS97/034 and by the French Ministry of Education and Research under contract No. 10-INSA-96. † Corresponding author. E-mail henrion@laas.fr. FAX 33 5 61 33 69 69.
1 Introduction: We consider the problem of simultaneously stabilizing, or co-stabilizing, a family of single-input single-output (SISO) linear systems by one fixed controller of given order. This fundamental problem, recognized as one of the difficult open issues in linear system theory, ar...
- SIAM J. Matrix Anal. Appl.
Abstract. We study bounds on real eigenvalues of interval matrices, and our aim is to develop fast computable formulae that produce as-sharp-as-possible bounds. We consider two cases: general and
symmetric interval matrices. We focus on the latter case, since on the one hand such interval matrices have many applications in mechanics and engineering, and on the other hand many results from
classical matrix analysis could be applied to them. We also provide bounds for the singular values of (generally nonsquare) interval matrices. Finally, we illustrate and compare the various
approaches by a series of examples.
We study the problem of estimating the epipolar geometry from apparent contours of smooth curved surfaces with affine camera models. Since apparent contours are viewpoint dependent, the only true image correspondences are projections of the frontier points, i.e., surface points whose tangent planes are also their epipolar planes. However, frontier points are unknown a priori and must be estimated simultaneously with epipolar geometry. Previous approaches to this problem adopt local greedy search methods which are sensitive to initialization, and may get trapped in local minima. We propose the first algorithm that guarantees global optimality for this problem. We first reformulate the problem using a separable form that allows us to search effectively in a 2D space, instead of on a 5D hypersphere in the classical formulation. Next, in a branch-and-bound algorithm we introduce a novel lower bounding function through interval matrix analysis. Experimental results on both synthetic and real scenes demonstrate that the proposed method is able to quickly obtain the optimal solution.
"... rapport de recherche ..." (French for "research report")
, 2009
"... Bounds on eigenvalues and singular values of ..."
how can the reactive power affect the fuel consumption
i'm confused about this plz help me
>how can the reactive power affect the
>fuel consumption
>i'm confused about this plz help me
Reactive power and fuel consumption
The straight and simple answer, under all practical considerations, is NO: reactive power will not have any effect on fuel consumption. To check this out (if you are at a site), go to the GCP, decrease the MVAR from the AVR control to a min value, then raise the MVAR to a max value; there will be no change in the fuel input.
Theoretically the exciter, which is connected to the turbine shaft, must produce more power for a greater excitation, and this should increase the fuel consumption, but it is so small that it will not reflect much in the fuel flow.
first, THANKS for your response
So, you say that the exciter rated 21 KW (from one of our plant generators' data) will produce a rated reactive power of 2250 KVAR; is that right or logical?
And if it is right, how can you explain Q = 1.73 V I sin(phi), as the reactive power will vary with the imaginary part of the current and the torque will respond to this current?
"so, you say that the exciter of 21 KW rated (one of our plant generators data) will produce a rated reactive power of 2250 KVAR is that right or logical. and if it is right how can you explain that
Q=1.73VI Sin (phi) as the reactive power will vary with the imaginary part of the current and the torque will response for this current"
Well, I cannot really understand what you are trying to say. Is 21 KW the power rating of the exciter? Are you trying to equate the "exciter" power with the "generator" reactive power? It is really hard to relate the two without knowing:
1. Generator excitation data, excitation curves
2. Excitation type used in the generator
3. AVR details
"if it is right how can you explain that Q = 1.73 V I sin(phi)": well, this is a very generic equation; the generator active and reactive power equations are different.
E - the generator excitation voltage, which is governed by the generator excitation system
V - the terminal voltage of the generator
I - the stator current
Xd - the generator reactance
The active power output from the generator is given by
P = (E V / Xd) sin(delta), where delta is the load angle, the angle between the voltage vectors E and V.
The reactive power output from the generator is given by
Q = (E V / Xd) cos(delta) - V^2 / Xd, where delta is again the load angle.
In every textbook the derivation of the above formulas will be there; basically the generator is modeled as a power transfer through an inductive circuit.
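To make the two formulas concrete, here is a small Python sketch with made-up per-unit values (purely illustrative, not data from any actual machine):

import math

def generator_pq(E, V, Xd, delta_deg):
    """Round-rotor generator with resistance neglected: returns (P, Q)
    from the excitation EMF E, terminal voltage V, reactance Xd and
    load angle delta (degrees)."""
    d = math.radians(delta_deg)
    P = (E * V / Xd) * math.sin(d)
    Q = (E * V / Xd) * math.cos(d) - V**2 / Xd
    return P, Q

for delta in (10, 20, 30):    # hypothetical load angles
    P, Q = generator_pq(E=1.2, V=1.0, Xd=1.5, delta_deg=delta)
    print("delta=%2d deg: P=%.3f pu, Q=%.3f pu" % (delta, P, Q))

The values are hypothetical; the point is only to show the shape of the two formulas.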
"as the reactive power will vary with the imaginary part of the current and the torque will response for this current"
No, as the reactive power varies the torque will not vary. Have a go through the generator real and reactive power equations and the generator operating diagram to get a clear view. There is just too much to explain, and with no diagrams it is more or less a lost cause.
Don't get me wrong here, but in my opinion a read through of a book on electrical machines, especially chapters like AC machine fundamentals and synchronous machines, will be helpful. You need to get the basics right :) and a textbook will be ideal for that :). Of course if you have doubts you can always ask here.
I agree with you partially but not fully on your comment "this should increase the fuel consumption, but it is so small that it will not reflect in the fuel flow much." Actually this amount depends on the alternator efficiency change. The alternator efficiency changes with the change of power factor. Suppose for pf 0.95 the efficiency is 97%, but for pf 0.80 it is 96% (you can check this from the alternator test results). This change in efficiency needs more power from the prime mover (engine or turbine), and hence it requires more fuel consumption accordingly. So you cannot say this is not much; it eventually adds up to a lot for large-scale generation.
Oh, you want to be really, really careful when using the words 'reactive' and 'power' together in a sentence on control.com.
As for how "it" affects fuel consumption, it doesn't--for all intents and purposes. Fuel produces torque, and torque is used to produce real power (watts). That reactive thing you are referring to is
apparent power, which is necessary for the magnetization of induction electric motors but which doesn't produce any real work (power; watts; torque).
I suggest you use your preferred Internet search engine and look up the definition of (and I'm only repeating the original poster's usage here!) reactive power. While I didn't find one relevant
search result that had a really good definition, I did find several that were, together, very good.
But, as for any effect on fuel consumption, again, for all intents and purposes there is no effect on fuel consumption. Efficiency, yes, but fuel consumption, virtually none.
If you have more questions after doing your Internet research, please be more specific about the nature of your confusion. Tell us how you think fuel consumption would be impacted by (and, again, I'm
just using the original poster's term!) reactive power.
The simple answer is that the generator efficiency drops, so you require more fuel for the same energy delivered.
Mohamed Ragab... paraphrasing your specific question, "How does kVAr affect fuel consumption?", then 'd-' provided the right answer, "... more fuel... required... !"
Now for the "How" part! Any increase in kVAr, either lagging or leading, will increase stator-current (ac) and rotor current (dc), thus increasing generator stator-winding and rotor-winding losses,
and in-turn, decrease generator efficiency. The magnitude of the loss-increase will negatively impact the generator's rated efficiency, being less of an influence on a generator with a rated
efficiency of 80 percent, then one rated at 95%.
CSA is right, of course, when he stated, "... for all intents and purposes there is no effect on fuel consumption." That is true because the GT has a much, much lower efficiency than the alternator!
Mohamed... if you want a quantitative value of loss increase, contact me. Or, in the interim, you can refer to my paper, "The Physics of... Armature Reaction" which illustrates the impact of lagging
kVAr or leading kVAr on stator-current and excitation-current magnitudes.
Phil Corso (cepsicon[at]AOL[dot]com)
thanks for your short answer
but if it is true, why are you paying only for active power while you are using both of them?
thanks again
mohamed ragab
thank you very much, and this is the answer i was searching for
i have already finished reading a book on electrical power systems that explains how reactive power is not a power.
I'm very grateful to all of you and happy to find people like you who care about others, even for easy questions.
again thank you and all who spent time on my question
Mohamed Ragab
Because for the average consumer, the power utility factors the price of the reactive power into the price per KWH.
For very large industrial plants with lots of inductive loads (induction motors, primarily) power utilities do install VAr-hour meters in addition to the Watt-hour meters, and make those users pay for the reactive power in addition to the real power they consume. So, many of these large users will install various forms of power factor correction to reduce the amount of VAr-hours they consume.
(We're really treading on thin ice here talking about reactive power and its production or consumption. There are those who usually chide us for using those terms on control.com, even though the rest
of the world uses them frequently and they are found in texts and references everywhere. So, please be very careful!)
Ragab... for the record I would appreciate knowing the title of the text that explains, "... reactive power is not a power."
CSA... Congratulations for your slow, albeit sure, continuing use of the term, VAr, instead of the term, reactive power! Hopefully, you have also noted it has spread in use in the Control.Com forum!
Keep up the good work!
Special regards, Phil Corso (cepsicon[at]AOL[dot]com)
Power Value... please explain your derivation of the P and Q formulas which involve the product of E and V?
Phil Corso
Phil, I am uploading a small pic here. It is taken from an electrical engineering textbook I have; this will give you the derivation of the formula.
This is one of the simplest derivations I have seen. The actual rigorous derivation, without the assumption R = 0, is in another textbook, but I do not have a soft copy. If I get it I will upload it.
Process V... my error! You identified E as Excitation Voltage when you really meant internal Air-gap voltage, Egp!
Regards, Phil
My experience concurs with my theoretical understanding of the above matter. The generator is an energy device and it generates electrical energy, which is in KVA. The KW and KVAR parts depend upon the load. If the demand for either component increases, so will the KVA generated.
A short circuit fault, which is basically inductive in nature, will draw current, and we calculate fault KVAr at different points, not KW. So if there is a short circuit fault near the generator, KW will be almost zero and KVA = KVAr. The generator will generate high KVA and the turbine will need more torque (i.e. fuel) to feed this fault till it is cleared by the protection system.
Coming to our normal operation, the change in power factor is generally between 0.02 and 0.12 max. Simply put, we aim to improve our power factor from 0.83 (on average) to 0.92 at the most. The difference between these 2 power factors is 0.09. So the difference in KVA for these 2 cases at the same KW will be KW/0.83 - KW/0.92 = KW[(0.92 - 0.83)/(0.92*0.83)] = KW * 0.118.
Thus, for the above case, the KVA generated will be 11.8% more when operating at 0.83 power factor instead of 0.92. This will be reflected in the generator current and subsequent heating in the stator winding.
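A quick numerical check of the arithmetic above (any constant KW will do; values are illustrative):

kw = 1.0                           # real power held constant
extra_kva = kw / 0.83 - kw / 0.92  # additional apparent power at the worse pf
print(extra_kva / kw)              # -> 0.1179..., i.e. about 11.8% of KW
print(extra_kva / (kw / 0.92))     # -> about 10.8% relative to the KVA at 0.92

So the 11.8% figure is the extra KVA expressed as a fraction of KW; relative to the KVA at 0.92 power factor it is about 10.8%.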
Now coming to your question: for this additional 11.8% KVA, additional fuel will be required. However the increase in fuel is only marginal. This is because fuel consumption vs KVA is a non-linear characteristic. At no load the turbine already draws more than 50% of the fuel it draws at full load. As load increases from zero to full load, the increase in fuel consumption slows down. Thus, an 11.8% increase in KVA would not be reflected as an 11.8% increase in fuel. The increase in fuel may not be significant, but it is there.
You can check this in your plant before taking a planned outage of any generator. Bring down the load of the generator to nearly 20% of its capacity at some higher power factor (0.92 to 0.97). Note down KW, amps, winding temperatures and fuel consumption of all running machines. Now raise the excitation of the above generator and check that its power factor is dropping. Ensure that the generator current is less than the overload setting. Try to bring the power factor as low as possible while KW remains constant. The load should however be less than the generator capability curve provided by the OEM at this reduced power factor. Now note the readings again. You will notice a significant change in amps and winding temperature in all generators. There will be a noticeable rise in fuel consumption in the machine under test. The change in fuel consumption in the other generators will not be as noticeable as in the machine under test, but you can still see there will be a change in fuel consumption.
Thus, in case we are forced to run a generator at lower load, care should be taken that it is running at a better power factor to save additional fuel cost.
Please can you tell me how the generator exciter current affects Power Factor? I have increased the exciter current and the Power Factor has dropped, but this does not seem right according to theory. The generator is rated at 0.85 PF.
This topic has been covered many times before on control.com. There is a 'Search' field at the far right of the Menu bar of every control.com page (I recommend using the 'Search' Help before
searching for the best results!).
Basically, Power Factor is a measure of the "efficiency" of the generator at converting the total power being input to the generator into "real" power: watts (or KW or MW). When ALL of the torque
being applied to the generator rotor by the prime mover (turbine or engine) is converted to real power, watts (which implies a resistive load with no reactive current), the Power Factor is 1.0--which
is effectively the same as saying the generator is 100% efficient at converting the torque being applied to the generator rotor into useful power, watts (or KW or MW).
When the excitation is increased or decreased without changing anything else (the fuel or steam being admitted to the generator's prime mover) while the Power Factor is 1.0, then the efficiency of
the generator decreases, and the Power Factor DECREASES below 1.0. So, when the Power Factor is 0.93, the generator is approximately 93% efficient at converting the torque being applied to the
generator rotor into useful power, watts (or KW or MW). When the Power Factor is 0.85, it's 85% efficient.
And that's true if the Power Factor (and VAr) reading is Leading or Lagging when it's less than 1.0. The Power Factor is never greater than 1.0, and can only be 1.0 or less, regardless of whether or
not the Power Factor is Leading or Lagging.
(Sometimes the Power Factor is displayed as being a positive or negative number, and when that's done usually positive means Lagging and negative means Leading. So, if your looking at a display that
is digital, remember: the polarity indicates which "direction" the Reactive Current is flowing but still: the number is never more than 1.0 and can only be 1.0 or something less than 1.0.)
Lastly, decreasing excitation without changing anything else (the fuel or steam being admitted to the generator's prime mover) when the Power Factor is 1.0 will cause the Power Factor to DECREASE
from 1.0--again, because efficiency of the generator is decreasing. The more reactive current (VArs) that are flowing in the generator stator the less watts that can be produced.
To see the effects of reactive current, and a lower Power Factor, on real power output of a generator, operate the generator at a low power output, say 10-15% of rated, adjust the excitation until
the Power Factor is 1.0 (Unity) and record the watts (or KW or MW). Then increase the excitation until the Power Factor reaches, say, 0.85 or less (lower won't hurt the turbine for a few minutes)
--without changing anything else--and record the watts. The watts will decrease slightly as the Power Factor decreases (from 1.0) and the reactive current (VArs) increases.
Do the same by decreasing the excitation when the power factor is 1.0--without changing anything else--and record the watts when the power factor reaches, say, 0.85 (don't go too low when decreasing
the excitation for this test!). The watts will decrease slightly as the Power Factor decreases (from 1.0) and the reactive current (VArs) increases.
The Power Factor rating of a generator just refers to the total amount of power (real and reactive) that can be produced by the generator. It doesn't mean the generator has to be operated at that power factor, only that rated power can be produced for extended periods of time at that Power Factor.
I would recommend researching 'Power Factor' on sites like www.wikipedia.org (if available in your part of the world).
Hope this helps!
This is correct as per theory. The power factor of a generator rises with the ratio MW/MVAr. So any increase in the numerator with the denominator remaining constant will result in an improved power factor; similarly, any increase in the denominator alone (i.e. MVAr) will result in a dropping power factor.
In a grid, the MW is directly proportional to the speed reference. Any change in it will not affect MVAr. Similarly, MVAr is directly proportional to the magnitude of the terminal voltage, which is directly proportional to excitation.
So by increasing the field excitation, you are increasing the terminal voltage of the generator. This added terminal voltage will result in the generator supplying more MVAr, and thus a dropped power factor.
Prasad... your explanation is in error because even though kVA, hence armature-current, increases, the amp-to-torque analogy doesn't apply!
In fact, for a 3-ph near-terminal fault the generator will speed up because the load kW is suddenly lost!
Hopefully you will conduct a study to determine if the generator and prime-mover are in danger of overspeeding.
Regards, Phil Corso
Yes Phil, a fault near the generator terminal will certainly reduce load MW to almost zero, depending upon the fault impedance. But at the same time the fault will draw a large current from the generator. This fault current will be inductive in nature, and thus this component will contain only MVAr and no MW. In this case the generator MVA will be equal to the fault MVAr. This fault MVA will be far higher than the rated MVA capacity of the generator. So obviously the turbine has to provide far too much mechanical/kinetic energy to the generator, beyond its capability. So in such a case obviously the turbine/prime mover speed is going to drop.
Also, now that MW is almost zero during the fault, the fuel required by the turbine before tripping will not drop down to the no-load value. In fact it will increase to cater to the fault MVA. And this is what I wanted to prove (i.e. reactive power will affect the fuel consumption of the turbine).
Keep in mind that whatever heat is generated during a fault is real power and real MW's. I have seen an off line fault of a bad lightning arrestor that caused the turbine speed control system to call
for more steam to maintain speed. I agree that most of the current flow is reactive, but not all.
Prasad: remember, current does not "contain" MW.
Watts are produced when current flows thru a resistance and all generators, bus work, transformers etc have some unless they are superconducting. So when current flows thru a resistance, watts are
produced in the form of heat in an amount equal to I-squared-R. Those watts dissipate energy which must have come from somewhere. In this case, "somewhere" is your generator which converted
mechanical energy into electrical energy. Your generator received the mechanical energy from the conversion of thermal energy in your turbine, therefore the turbine must have burned more fuel. So
increased reactive power generation increases fuel consumption.
The law of conservation of energy always applies.
> how can the reactive power affect the fuel consumption
> i'm confused about this plz help me
Reactive power can't affect fuel consumption. Because if reactive power (say lagging) increases, then the frequency of the excitation voltage decreases, but that doesn't mean that the angular velocity of the rotor changes.
So to get back the actual frequency we use power factor correction, and after that the reactive power automatically decreases and the frequency gets back to its actual value.
Simply speaking:
Reactive power is the power which is given to the system in the positive half of an AC cycle and received back in the negative half; the overall average power through one complete cycle is zero. Thus, theoretically speaking, there is no increase in fuel when you decrease your PF or simply increase MVAr.
But then why do we try to maintain a high PF? It is because no system is ideal. All systems have resistive losses, which are in turn directly proportional to the square of the current through the system. Even though, as mentioned, reactive power over a complete cycle is zero, reactive current will still cause a resistive loss in cables, transformers, etc., thus decreasing efficiency. But, as mentioned in earlier posts, the effect is seen over a period of time; when you calculate, you will see a substantial saving overall in an annual period.
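Both points can be seen numerically. In the sketch below (Python with numpy; all values are hypothetical per-unit), a purely reactive current averages to zero real power over a cycle, yet still produces a nonzero I^2 R loss in a series resistance:

import numpy as np

t = np.linspace(0.0, 2 * np.pi, 10001)  # one full AC cycle
v = np.sin(t)                           # voltage, 1 pu
i = np.sin(t - np.pi / 2)               # current lagging 90 deg: purely reactive
R = 0.05                                # hypothetical series resistance, pu

print(np.mean(v * i))                   # average real power over the cycle ~ 0
print(R * np.mean(i**2))                # I^2 R loss from that same current ~ 0.025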
Find a SAT Math Tutor
...I know how to present the material to students in an understandable fashion. I have taught most of my college bound students SAT math. I have had the opportunity to encourage many young adults
to pursue the GED and pass it, so that they can position themselves for a promising future.
23 Subjects: including SAT math, chemistry, calculus, geometry
...I have great math background from working toward master's degree in computer science. The course works included calculus I/II/III, probability, mathematical statistics, graduate statistics,
Multi-variate statistics, discrete math, logics, linear algebra, numerical analysis, and graduate linear algebra. Most of courses were finished with grade of A's and I think three B's.
15 Subjects: including SAT math, chemistry, calculus, physics
I have Bachelor of Science degrees in Physics and Electrical Engineering and PhD in Physics. I have more than 10 years of experience in teaching math, physics, and engineering courses to science
and non-science students at UMCP, Virginia Tech, and in Switzerland. I am a dedicated teacher and I alw...
16 Subjects: including SAT math, calculus, physics, statistics
...I generally have the student work through (or at least make an attempt at) a problem first and then work with them through step-by-step, discussing strategies for approaching similar problems
and concepts. I encourage showing work and developing methods to improve understanding and test-taking a...
13 Subjects: including SAT math, calculus, geometry, GRE
...I have also lead a chemistry co-op that required at least an algebra level of math skills. The math section of the SAT is often a great stress factor to students. Sometimes it is merely the
wording of the question or a misunderstanding of a small concept that can cost the student points.
11 Subjects: including SAT math, English, reading, writing
Mathematical English Usage - a Dictionary
[see also: original]
He was the first to propose a complete theory of triple intersections.
Because N. Wiener is recognized as the first to have constructed such a measure, the measure is often called the Wiener measure.
Let $S_i$ be the first of the remaining $S_j$.
The first two are simpler than the third. [Or: the third one; not: “The first two ones”]
As a first step we shall bound $A$ below.
We do this in the first section, which the reader may skip on a first reading.
At first glance, this appears to be a strange definition.
The first and third terms in (5) combine to give ......
the first author = the first-named author
[see also: initially, originally, beginning, firstly]
First, we prove (2). [Not: “At first”]
We first prove a reduced form of the theorem.
Suppose first that ......
His method of proof was to first exhibit a map ......
In Lemma 6.1, the independence of $F$ from $V$ is surprising at first.
It might seem at first that the only obstacle is the fact that the group is not compact.
[Note the difference between first and at first: first refers to something that precedes everything else in a series, while at first [= initially] implies a contrast with what happens later.]
The Importance of Terminology's Quality
I'd like to introduce a blog post by Stephen Wolfram, on the design
process of Mathematica. In particular, he touches on the importance of
naming of functions.
• Ten Thousand Hours of Design Reviews (2008 Jan 10) by Stephen
The issue is fitting here today, in our discussion of "closure" terminology recently, as well as the jargons "lisp1 vs lisp2" (multi-meaning space vs single-meaning space), "tail recursion", "currying", and "lambda", that perennially crop up here and elsewhere in computer language forums in wild misunderstanding and brouhaha.
The functions in Mathematica are usually very well-named, in contrast to most other computing languages. In particular, the naming in Mathematica, as Stephen Wolfram implied in his blog above, takes the perspective of naming by capturing the essence, or mathematical essence, of the keyword in question (as opposed to naming it according to convention, which often came from historical happenings).
When a thing is well-named from the perspective of what it actually
“mathematically” is, as opposed to historical developments, it avoids
vast amount of potential confusion.
Let me give a few example.
• "lambda", widely used as a keyword in functional languages, is named just "Function" in Mathematica. That "lambda" happened to be called so in the field of symbolic logic is due to the use of the Greek letter lambda "λ" by happenstance. The word does not convey what it means, while the name "Function" stands for the mathematical concept of "function" as is.
• Module and Block in Mathematica correspond to lisp's various "let*" forms. Lisp's keyword "let" is based on the English word "let". That word is one of the English words with a multitude of meanings. If you look up its definition in a dictionary, you'll see that it means many disparate things. One of them, as in "let's go", has the meaning of "permit; to cause to; allow". This meaning is rather vague from a mathematical sense. Mathematica's choice of Module and Block is based on the idea that they build a self-contained segment of code. (However, the choice of Block as a keyword here isn't perfect, since the word also has meanings like "obstruct; jam".)
• Functions that take elements out of a list are variously named First, Rest, Last, Extract, Part, Take, Select, Cases, DeleteCases... as opposed to "car", "cdr", "filter", "pop", "shift", "unshift" in lisps, perl, and other langs.
The above are some examples. The thing to note is that Mathematica's choices are often such that the words stand for the meanings themselves, in some logical and independent way, as much as possible, without depending on a particular computer science field's context or history. One easy way to confirm this is to take a keyword and ask a wide audience, who don't know the language or may even be unfamiliar with computer programming, to guess what it means. The wide audience can be made up of mathematicians, scientists, engineers, programmers, laymen. This general audience is more likely to guess correctly what Mathematica's keyword means in the language than the name used in other computer languages, whose naming choices go by convention or history.
(For example, Perl's naming heavily relies on unix culture (grep, pipe, hash...), while functional langs' namings are typically heavily based on the field of mathematical logic (e.g. lambda, currying, closure, monad, ...). Lisp's cons, car, cdr are based on computer hardware (this particular naming caused major damage to the lisp language, lasting to this day). (Other examples: pop and shift are based on the computer science jargon of "stack". Grep is from Global Regular Expression Print, while Regular Expression is from the theoretical computer science of automata... The name regex has done major hidden damage to the computing industry, in the sense that if it had just been called "string patterns", then a lot of explanations, literature, and confusion would have been avoided.))
(Note: Keywords or functions in Mathematica are not necessarily always best named. Nor is there always one absolute best choice, as there are many other considerations, such as the force of wide existing convention, the context where the functions are used, brevity, limitations of the English language, different scientific contexts (e.g. math, physics, engineering), or even human preferences.)
I've written about many of the issues regarding the importance and effects of terminology's quality since about 2000. Here are the relevant essays:
• Jargons of Info Tech Industry
• The Jargon “Lisp1” vs “Lisp2”
• The Term Curring In Computer Science
• What Is Closure In A Programing Language
• What are OOP's Jargons and Complexities
• Sun Microsystem's abuse of term “API” and “Interface”
• Math Terminology and Naming of Things
Hurwitz's automorphisms theorem for infinite genus Riemann surfaces
Hurwitz's automorphisms theorem states that for a compact Riemann surface $X$ the cardinality of $Aut(X)$, the group of holomorphic automorphisms, is bounded above by $84(g(X)-1)$ and is therefore
finite. From an earlier post on MO (Riemann surfaces that are not of finite type) one cannot expect $Aut(X)$ to be finite when $X$ is of infinite genus. When can one expect $Aut(X)$ to be discrete
(we give $Aut(X)$ the compact-open topology)? My conjecture is that when $X$ is hyperbolic and non-prolongable, $Aut(X)$ is discrete. Here non-prolongable means that we cannot imbed $X$ conformally
into another Riemann surface $\tilde{X}$, such that $\tilde{X} - X$ has a non-empty interior.
The action is discrete if $X$ is hyperbolic and is not a disk or annulus. By uniformization, $X=\mathbb{H}^2/\Gamma$ for some discrete subgroup $\Gamma< PSL_2(\mathbb{R})$. Let $\Lambda <
PSL_2(\mathbb{R})$ be the normalizer of $\Gamma$, then $Aut(X)\cong \Lambda/\Gamma$ since any conformal automorphism of $X$ must be an isometry of the hyperbolic metric, and therefore has
some lift to $PSL_2(\mathbb{R})$. Since $\Gamma$ is discrete, $\Lambda$ is a closed subgroup of $PSL_2(\mathbb{R})$. If $\Lambda$ is discrete, then $Aut(X)=\Lambda/\Gamma$ is discrete.
Otherwise, if $\Lambda$ is not discrete, then $\Lambda$ is a Lie subgroup of $PSL_2(\mathbb{R})$. The only (non-discrete) Lie subgroups of $PSL_2(\mathbb{R})$ are elementary or $PSL_2(\mathbb{R})$. In the first case, $\Gamma$ is elementary, so $\Gamma$ is abelian, and $X$ is a disk or annulus. In the second case, $\Gamma$ must be trivial since $PSL_2(\mathbb{R})$ is simple.
It seems that this question is addressed in:
On the action of the mapping class group for Riemann surfaces of infinite type
Ege FUJIKAWA, Hiroshige SHIGA, and Masahiko TANIGUCHI. Source: J. Math. Soc. Japan, Volume 56, Number 4 (2004), 1069-1086.
(open access)
to a first approximation
1. When one is doing certain numerical computations, an approximate solution may be computed by any of several heuristic methods, then refined to a final value. By using the starting point of a first
approximation of the answer, one can write an algorithm that converges more quickly to the correct result.
2. In jargon, a preface to any comment that indicates that the comment is only approximately true. The remark "To a first approximation, I feel good" might indicate that deeper questioning would
reveal that not all is perfect (e.g. a nagging cough still remains after an illness).
MathGroup Archive: May 2006 [00292]
Re: Re: )
• To: mathgroup at smc.vnet.net
• Subject: [mg66442] Re: Re: )
• From: Maxim <m.r at inbox.ru>
• Date: Fri, 12 May 2006 02:03:53 -0400 (EDT)
• References: <200605050902.FAA28575@smc.vnet.net> <e3hf84$m1h$1@smc.vnet.net> <200605090635.CAA18518@smc.vnet.net> <e3sh4a$luu$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
On Wed, 10 May 2006 11:00:26 +0000 (UTC), Andrzej Kozlowski
<akoz at mimuw.edu.pl> wrote:
> On 9 May 2006, at 15:35, Maxim wrote:
>> On Sat, 6 May 2006 06:20:52 +0000 (UTC), Andrzej Kozlowski
>> <akoz at mimuw.edu.pl> wrote:
>>> laws of arithmetic do not hold). Some of them are hard to explain: I
>>> can't see any good reason at all why Infinity^Infinity is
>>> ComplexInfinity, and it seems to contradict the most basic rule
>>> that x^y is always real when x and y are positive reals. Besides, as
>>> I mentioned earlier, Infinity and ComplexInfinity do not belong
>>> together in any topological model known to me (you need a
>>> "topological model" to be able to consider the issue of continuity)
>>> and should never appear in the same formula. I can only consider this
>>> as a bug, and a rather silly one.
>> I think this is as it should be: we need to consider all the sequences
>> converging to Infinity, and, for example, Limit[(x + I*Pi)^x, x ->
>> Infinity] == -Infinity. So in that sense Infinity^Infinity ==
>> ComplexInfinity: when z = z1^z2 and z1, z2 go to Infinity the absolute
>> value of z always tends to Infinity but the argument can be arbitrary.
>> Maxim Rytin
>> m.r at inbox.ru
> One can perhaps make some sense of this even topologically if one
> assumes that map {z,w}->z^w has as its target space the Riemann
> sphere and as its domain the Cartesian product of 2-discs. In other
> words it is not a map of the form X x X -> X, as one usually
> supposes. But then it is quite a different map from the one that gives
> 2^Infinity == Infinity
> and so on. I certainly cannot conceive of any sensible topology on
> the set that is the union of the complex plane, the set of
> DirectedInfinities and the point ComplexInfinity, can you? Any such
> topology would have pretty shocking properties.... And if there is
> no topology involved then what is Limit supposed to mean? Still, I
> admit, that even if things that don't make any mathematical sense may
> sometimes be acceptable in a symbolic algebra program, for purely
> practical reasons, but then they are pretty likely to lead to
> confusion and contradictions. This is in fact the current state of
> affairs in this regard and it makes me feel that the whole
> Mathematica approach to this business of Infinities maybe misguided.
> Of course, as we are not talking about mathematics or any kind of
> empirical science this is all ultimately a matter of taste and as we
> well know there is no accounting for tastes... or sense of humour.
> Andrzej Kozlowski
I think what Limit could do here in principle is taking the limits of
Abs[f[z]] and Arg[f[z]] separately. If the limit of the absolute value is
infinite and the limit of the argument exists, return DirectedInfinity. If
the limit of the argument doesn't exist, return ComplexInfinity.
Currently Mathematica just isn't being very consistent here:
In[1]:= Limit[2^(x + I*Pi/(2*Log[2])), x -> Infinity]
Out[1]= DirectedInfinity[I]
In[2]:= Limit[2^(x + I*Pi/(4*Log[2])), x -> Infinity]
Out[2]= Infinity
The explanation for this can be as follows: if we simplify the limit
expressions first, the first one yields I*2^x and the second remains
unchanged. Then substituting x = Infinity yields I*2^Infinity ==
DirectedInfinity[I] and 2^(Infinity + I*Pi/(4*Log[2])) == 2^Infinity ==
Infinity respectively. But in that case it just means that Mathematica
used some unallowed operations (cannot just substitute x = Infinity as we
don't have continuity), not that there really is some contradiction.
If we accept the convention of taking the limits of Abs and Arg, then
Out[1] is correct and Out[2] isn't (should be DirectedInfinity[1 + I]).
Also then you're quite right that the results for Infinity^Infinity and
2^Infinity are inconsistent with each other -- they should be the same
(either both ComplexInfinity or both Infinity).
Another thing that is needed for completeness is specifying that
Mathematica always assumes that the limit variable goes along a straight
line from the origin:
In[3]:= Limit[Re[z], z -> I*Infinity]
Out[3]= 0
This result wouldn't be correct for an arbitrary path: for instance, if z
= 1 + I*t and t is real, then z also goes to I*Infinity and Re[z] == 1.
Maxim Rytin
m.r at inbox.ru
DOCUMENTA MATHEMATICA, Vol. Extra Volume: Andrei A. Suslin's Sixtieth Birthday (2010), 515-523
Ivan Panin and Konstantin Pimenov
Rationally Isotropic Quadratic Spaces Are Locally Isotropic: II
The results of the present article extend the results of [Pa]. The main result of the article is Theorem 1.1 below. The proof is based on a moving lemma from [LM], a recent improvement due to O.
Gabber of de Jong's alteration theorem, and the main theorem of [PR]. A purity theorem for quadratic spaces is proved as well in the same generality as Theorem 1.1, provided that $R$ is local. It
generalizes the main purity result from [OP] and it is used to prove the main result in [ChP].
Quantitative Business Methods Using Excel
Volume 12, Issue 2, 1998
David Whigham
Oxford University Press, June 1998.
Paperback, ISBN 0-19-877545-8. UK Price:£22.99
476 pages, numerous figures and Excel screen-dumps, with accompanying Workbook diskette.
David Whigham has produced the book that I kept meaning to write, that is an up to date textbook for economics and business students introducing them to spreadsheets and their application to relevant
quantitative problems. Just as back in 1990 when I published my book on Quantitative Analysis for Economics and Business I had to decide whether to gear the book to one particular spreadsheet package
(I chose Lotus 1-2-3) and hope that users of other packages would be able to see how to adapt the instructions to work with other programs, David has taken the understandable step of assuming that
students either have access to Excel (version 5.0), or can see how to adapt the instructions given to other similar packages.
There are twelve chapters covering the following topics:
• 1 Introduction to Excel,
• 2 Principles of elementary modelling,
• 3 Business modelling using more advanced functions,
• 4 Equation solving and optimisation using the Excel Solver,
• 5 Financial mathematics,
• 6 Discounting techniques,
• 7 Matrix algebra,
• 8 Introductory Statistical Analysis,
• 9 Further descriptive statistical methods,
• 10 Simple Linear Regression,
• 11 Inferential statistics,
• 12 Multiple Regression
The book covers pretty well all the topics I would want to see. I would perhaps have included a chapter on difference equation modelling but otherwise there is a very nice balance between
mathematical and statistical methods. All the chapters provide plenty of examples of the use of the quantitative methods in their application to economic and business problems. For example the
chapter on matrix algebra has an application to input-output models and the chapter on equation solving and optimisation covers both linear programming and more complicated non-linear optimization
problems subject to constraints.
There is a nice chapter early on in the book which sets out some of the principles of spreadsheet modelling (although in view of points raised by Jocelyn Paine in his paper at the CALECO 98
conference - see the report elsewhere in CHEER - perhaps more should have been said about testing and checking spreadsheet models for errors).
Many of the files relating to the problems described in the book are printed in full - some are just too big - but all are available on the accompanying disk. Each chapter ends with a set of
exercises (solutions provided) and there is plenty of evidence that the material has been thoroughly class-tested. In reviewing this book I only had a chance to work through a small sample of the
problems given, but I found no errors either in the expression of the questions posed or the answers supplied.
This text has been written so as to provide both an explanation of the overall nature of what is to be achieved (that is a full discussion of the economics and business problems and the methods that
can be used to solve them) and also clear instructions on how it is to be done (that is step by step hands-on directions as to how to set up the spreadsheet, shown clearly in bold). Ideally I should
have liked to have seen a little more real (as opposed to fictitious) data being used in the problems, and also something about linking spreadsheet files to other packages. However this is a minor
criticism and I am sure that this book will prove both useful to lecturers teaching quantitative courses and popular with students taking such courses.
Guy Judge
University of Portsmouth
Sausalito Precalculus Tutor
...I have taught economics, operations research and finance related courses. I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong
background in statistics and econometrics.
49 Subjects: including precalculus, calculus, physics, geometry
...As part of this job, I was trained in and provided materials for each of these topics. I often find, when working with my students, that an important component of the tutoring is attention to
these skills in addition to the specific subject areas for which tutoring had been requested. I have a ...
20 Subjects: including precalculus, calculus, Fortran, Pascal
...MY BACKGROUND I have a B.S. degree in Mathematics from UC Davis specializing in probability theory and did graduate work in Computer Science at California State University Chico. I retired
after working 30 years as a computer programmer, three years in Silicon Valley and the last 27 years workin...
12 Subjects: including precalculus, calculus, statistics, geometry
...I was originally going to double major in these two subjects. My original interest and intent in majoring in these two subjects was to teach freshman and sophomore level college courses, and I
was enrolled in education to teach public school only as a back-up plan and because I thought I might a...
48 Subjects: including precalculus, reading, Spanish, English
...So I turned to something I had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING! I have been doing it professionally now for over ten years. I love
it when my student's understand a new concept.
10 Subjects: including precalculus, calculus, geometry, algebra 1
Find Equation of a Sphere Given the Center
Find an equation of the largest sphere with center (10, 1 , 3) that is contained completely in the first octant.
Note that you must move everything to the left hand side of the equation, since we desire the coefficients of the quadratic terms to be 1.
The equation of a sphere is found given the center. The solution is well explained.
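For reference, a short worked sketch of the solution (my own, following the standard-form note above): the largest sphere centered at (10, 1, 3) that stays inside the first octant has its radius limited by the nearest coordinate plane, here y = 0 at distance 1, so

\[
r=\min(10,\,1,\,3)=1,
\qquad
(x-10)^2+(y-1)^2+(z-3)^2=1,
\]

which, expanded and moved to the left-hand side with unit quadratic coefficients, reads

\[
x^2+y^2+z^2-20x-2y-6z+109=0 .
\]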
Math Forum: Teacher2Teacher - Q&A #17637
From: Carla Flaherty
To: Teacher2Teacher Service
Date: Sep 22, 2006 at 19:10:59
Subject: Typing software
I am very interested in finding software that will allow math work (from
grades 4 through 12) to be typed on the computer. The software should NOT
assist with solving, only with being able to type expressions, equations,
tables, simple graphs (hand-drawn) and so on. We need this for students
with fine motor difficulties!
I envision a graphics/notepad-sort of thing, with an optional grid,
layered point and click symbol menu, graphics capability (erase, move,
and so on), ability to draw lines (with snap, to make it more friendly)
and zoom to make the current work larger and therefore easier to navigate
physically (larger mouse movements.) Software should NOT do any
calculation, but should be friendly enough to learn comfortably at an
early age and progress through high school.
I know this is not a simple programming task, but it MUST already exist
somewhere. We really need this to help students whose math aptitude is
good, but who are simply choked by their inability to write neatly, or in
columns, and who focus SO HARD on these simple tasks that they don't have
the thinking capacity left to learn the math concepts.
Is anyone aware of such a product which doesn't attempt to SOLVE the
equations for the student? OR a product which does solve, but could be
modified to use the typing capability only? Thank you.
Local Methods for Localizing Faults in Electronic Circuits
- ARTIFICIAL INTELLIGENCE , 1987
"... Suppose one is given a description of a system, together with an observation of the system's behaviour which conflicts with the way the system is meant to behave. The diagnostic problem is to
determine those components of the system which, when assumed to be functioning abnormally, will explain the ..."
Cited by 871 (5 self)
Suppose one is given a description of a system, together with an observation of the system's behaviour which conflicts with the way the system is meant to behave. The diagnostic problem is to
determine those components of the system which, when assumed to be functioning abnormally, will explain the discrepancy between the observed and correct system behaviour. We propose a general theory
for this problem. The theory requires only that the system be described in a suitable logic. Moreover, there are many such suitable logics, e.g. first-order, temporal, dynamic, etc. As a result, the
theory accommodates diagnostic reasoning in a wide variety of practical settings, including digital and analogue circuits, medicine, and database updates. The theory leads to an algorithm for
computing all diagnoses, and to various results concerning principles of measurement for discriminating among competing diagnoses. Finally, the theory reveals close connections between diagnostic
reasoning and nonmonotonic reasoning.
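A hedged sketch of Reiter's central characterization -- diagnoses are exactly the minimal hitting sets of the conflict sets -- as a brute-force Python illustration; the component names and conflict sets below are invented purely for the example:

from itertools import combinations

def minimal_hitting_sets(conflicts, components):
    # Per Reiter's theory, the diagnoses are the minimal sets of components
    # that intersect every conflict set: assuming those components abnormal
    # explains every observed discrepancy.
    conflicts = [frozenset(c) for c in conflicts]
    hits = []
    for size in range(1, len(components) + 1):
        for cand in combinations(sorted(components), size):
            cand = frozenset(cand)
            if all(cand & c for c in conflicts):      # hits every conflict
                if not any(h <= cand for h in hits):  # and is minimal
                    hits.append(cand)
    return hits

# Invented toy circuit: two conflict sets over three components.
print(minimal_hitting_sets([{"A1", "A2"}, {"A2", "M1"}], {"A1", "A2", "M1"}))
# minimal diagnoses: {'A2'} and {'A1', 'M1'}

This brute force is exponential in the number of components; Reiter's paper develops a hitting-set-tree algorithm to compute the same sets more economically.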
- International Journal of Man-Machine Studies , 1991
"... Model-based reasoning about a system requires an explicit representation of the system's components and their connections. Diagnosing such a system consists of locating those components whose
abnormal behavior accounts for the faulty system behavior. In order to increase the efficiency of model-base ..."
Cited by 31 (2 self)
Model-based reasoning about a system requires an explicit representation of the system's components and their connections. Diagnosing such a system consists of locating those components whose
abnormal behavior accounts for the faulty system behavior. In order to increase the efficiency of model-based diagnosis, we propose a model representation at several levels of detail, and define
three refinement (abstraction) operators. We specify formal conditions that have to be satisfied by the hierarchical representation, and emphasize that the multi-level scheme is independent of any
particular single-level model representation. The hierarchical diagnostic algorithm which we define turns out to be very general. We show that it emulates the bisection method, and can be used for
hierarchical constraint satisfaction. We apply the hierarchical modeling principle and diagnostic algorithm to a medium-scale medical problem. The performance of a four-level qualitative model of the
heart is compared t...
, 1998
"... Various formal theories have been proposed in the literature to capture the notions of diagnosis underlying diagnostic programs. Examples of such notions are: heuristic classification, which is
used in systems incorporating empirical knowledge, and model-based diagnosis, which is used in diagnostic ..."
Cited by 23 (2 self)
Various formal theories have been proposed in the literature to capture the notions of diagnosis underlying diagnostic programs. Examples of such notions are: heuristic classification, which is used
in systems incorporating empirical knowledge, and model-based diagnosis, which is used in diagnostic systems based on detailed domain models. Typically, such domain models include knowledge of
causal, structural, and functional interactions among modelled objects. In this paper, a new set-theoretical framework for the analysis of diagnosis is presented. Basically, the framework
distinguishes between `evidence functions', which characterize the net impact of knowledge bases for purposes of diagnosis, and `notions of diagnosis', which define how evidence functions are to be
used to map findings observed for a problem case to diagnostic solutions. This set-theoretical framework offers a simple, yet powerful tool for comparing existing notions of diagnosis, as well as for
proposing new notions ...
- The Knowledge Engineering Review , 1997
"... Diagnosis was among the first subjects investigated when digital computers became available. It still remains an important research area, in which several new developments have taken place in
the last decade. One of these new developments is the use of detailed domain models in knowledge-based syste ..."
Cited by 23 (6 self)
Diagnosis was among the first subjects investigated when digital computers became available. It still remains an important research area, in which several new developments have taken place in the
last decade. One of these new developments is the use of detailed domain models in knowledge-based systems for the purpose of diagnosis, often referred to as model-based diagnosis. Typically, such
models embody knowledge of the normal or abnormal structure and behaviour of the modelled objects in a domain. Models of the structure and workings of technical devices, and causal models of disease
processes in medicine are two examples. In this article, the most important notions of diagnosis and their formalisation are reviewed and brought in perspective. In addition, attention is focused on
a number of general frameworks of diagnosis, which offer sufficient flexibility for expressing several types of diagnosis.
- Artificial Intelligence , 1998
"... The mathematical foundations of model-based diagnostics or diagnosis from first principles have been laid by Reiter [31]. In this paper we extend Reiter’s ideas of model-based diagnostics by
introducing probabilities into Reiter’s framework. This is done in a mathematically sound and precise way whi ..."
Cited by 22 (16 self)
The mathematical foundations of model-based diagnostics or diagnosis from first principles have been laid by Reiter [31]. In this paper we extend Reiter’s ideas of model-based diagnostics by
introducing probabilities into Reiter’s framework. This is done in a mathematically sound and precise way which allows one to compute the posterior probability that a certain component is not working
correctly given some observations of the system. A straightforward computation of these probabilities is not efficient and in this paper we propose a new method to solve this problem. Our method is
logic-based and borrows ideas from assumption-based reasoning and ATMS. We show how it is possible to determine arguments in favor of the hypothesis that a certain group of components is not working
correctly. These arguments represent the symbolic or qualitative aspect of the diagnosis process. Then they are used to derive a quantitative or numerical aspect represented by the posterior
probabilities. Using two new theorems about the relation between Reiter’s notion of conflict and our notion of argument, we prove that our so-called degree of support is nothing but the posterior
probability that we are looking for. Furthermore, a model where each component may have more than two different operating modes is discussed and a new algorithm to compute posterior probabilities in
this case is presented. Key words: Model-based diagnostics; Assumption-based reasoning; ATMS;
- International Journal of Approximate Reasoning , 2001
"... Model-based diagnosis concerns using a model of the structure and behaviour of a system or device in order to establish why the system or device is malfunctioning. Traditionally, little
attention has been given to the problem of dealing with uncertainty in model-based diagnosis. Given the fact th ..."
Cited by 11 (2 self)
Model-based diagnosis concerns using a model of the structure and behaviour of a system or device in order to establish why the system or device is malfunctioning. Traditionally, little attention has
been given to the problem of dealing with uncertainty in model-based diagnosis. Given the fact that determining a diagnosis for a problem almost always involves uncertainty, this situation is not
entirely satisfactory. This paper builds upon and extends previous work in model-based diagnosis by supplementing the well-known model-based framework with mathematically sound ways for dealing with
, 2008
"... We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine
continuously examines the cells it is interested in, and adds information to some based on deductions it can make ..."
Cited by 8 (2 self)
We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine continuously
examines the cells it is interested in, and adds information to some based on deductions it can make from information from the others. This model makes it easy to smoothly combine expression-oriented
and constraint-based programming; it also easily accommodates implicit incremental distributed search in ordinary programs.
, 2003
"... We propose to regard a diagnostic system as an ordered logic theory, i.e. a partially ordered set of clauses where smaller rules carry more preference. ..."
Cited by 7 (6 self)
We propose to regard a diagnostic system as an ordered logic theory, i.e. a partially ordered set of clauses where smaller rules carry more preference.
- In Advanced Topics in Artificial Intelligence , 1992
"... Diagnosis is an important application area of Artificial Intelligence. First generation expert diagnostic systems had exhibited difficulties which motivated the development of model-based
reasoning techniques. Model-based diagnosis is the activity of locating malfunctioning components of a system so ..."
Cited by 6 (0 self)
Diagnosis is an important application area of Artificial Intelligence. First generation expert diagnostic systems had exhibited difficulties which motivated the development of model-based reasoning
techniques. Model-based diagnosis is the activity of locating malfunctioning components of a system solely on the basis of its structure and behavior. The paper gives a brief overview of the main
concepts, problems, and research results in this area. 1 Introduction Diagnosis is one of the earliest areas in which application of Artificial Intelligence techniques was attempted. The diagnosis of
a system which behaves abnormally consists of locating those subsystems whose abnormal behavior accounts for the observed behavior. For example, a system being diagnosed might be a mechanical device
exhibiting malfunction, or a human patient. There are two fundamentally different approaches to diagnostic reasoning. In the first, heuristic approach, one attempts to codify diagnostic rules of
thumb and p...
- DX’02 THIRTEENTH INTERNATIONAL WORKSHOP ON PRINCIPLES OF DIAGNOSIS , 2002
"... Technical systems are in general not guaranteed to work correctly. They are more or less reliable. One main problem for technical systems is the computation of the reliability of a system. A
second main problem is the problem of diagnostic. In fact, these problems are in some sense dual to each othe ..."
Cited by 6 (2 self)
Technical systems are in general not guaranteed to work correctly. They are more or less reliable. One main problem for technical systems is the computation of the reliability of a system. A second
main problem is the problem of diagnostic. In fact, these problems are in some sense dual to each other. In this
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=427658","timestamp":"2014-04-19T10:07:17Z","content_type":null,"content_length":"37934","record_id":"<urn:uuid:9100da70-dbd7-4472-af88-0ad499a5a063>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stoughton, MA Prealgebra Tutor
Find a Stoughton, MA Prealgebra Tutor
...Most of my students improve their total SAT scores by 80-250 points! I can also help with the college search and admissions process if necessary. I usually tutor students at their homes, but I
am willing to work at another location (public library, coffee shop, etc.), if preferred.
41 Subjects: including prealgebra, reading, Spanish, English
...My teaching style is clear and focused. I am patient and creative. As a Math teacher for 13 years, I have developed organizational and study skills methods that have been very effective.
9 Subjects: including prealgebra, GRE, algebra 1, GED
...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in
their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including prealgebra, French, calculus, algebra 1
...I proofread for grammar, spelling, punctuation, and flow. In addition, I have over 10 years of experience in proofreading numerous high school and college essays and research papers. I
received my TEFL Certification from TEFL Worldwide Prague in 2005.
28 Subjects: including prealgebra, English, reading, writing
...I am very responsible and am always on time for things. I am very positive and friendly and will be a great tutor. I have been teaching middle school math for the past 7 years. I
believe I am qualified to teach study skills because I am very good at helping students find the way they study and learn best.
6 Subjects: including prealgebra, geometry, algebra 1, elementary math
Related Stoughton, MA Tutors
Stoughton, MA Accounting Tutors
Stoughton, MA ACT Tutors
Stoughton, MA Algebra Tutors
Stoughton, MA Algebra 2 Tutors
Stoughton, MA Calculus Tutors
Stoughton, MA Geometry Tutors
Stoughton, MA Math Tutors
Stoughton, MA Prealgebra Tutors
Stoughton, MA Precalculus Tutors
Stoughton, MA SAT Tutors
Stoughton, MA SAT Math Tutors
Stoughton, MA Science Tutors
Stoughton, MA Statistics Tutors
Stoughton, MA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Avon, MA prealgebra Tutors
Braintree prealgebra Tutors
Bridgewater, MA prealgebra Tutors
Brockton, MA prealgebra Tutors
Canton, MA prealgebra Tutors
Dedham, MA prealgebra Tutors
Easton, MA prealgebra Tutors
Holbrook, MA prealgebra Tutors
Mansfield, MA prealgebra Tutors
Mattapan prealgebra Tutors
Milton, MA prealgebra Tutors
Norwood, MA prealgebra Tutors
Randolph, MA prealgebra Tutors
Sharon, MA prealgebra Tutors
Walpole, MA prealgebra Tutors
|
{"url":"http://www.purplemath.com/Stoughton_MA_Prealgebra_tutors.php","timestamp":"2014-04-18T11:22:48Z","content_type":null,"content_length":"24094","record_id":"<urn:uuid:5b836231-f888-454f-bcb4-55ee68381aca>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: st: RE: Noconst in cointegration relationship in VECM
RE: st: RE: Noconst in cointegration relationship in VECM
From DE SOUZA Eric <eric.de_souza@coleurope.eu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject RE: st: RE: Noconst in cointegration relationship in VECM
Date Thu, 28 Apr 2011 19:06:36 +0200
I realise what you mean now. What Stata does, following the work of Soeren Johansen, is to decompose the constant term into two parts. One part makes the equilibrium error equal to zero on average (this is the constant in the cointegration relation); the other is what is left over and appears as the constant in the VECM.
For this reason you cannot set the constant in the cointegration relation equal to zero. What you have there is the result of a decomposition, which is why no standard error, etc., is produced for it.
Eric de Souza
College of Europe
Brugge (Bruges), Belgium
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Svein Olav Krakstad
Sent: 28 April 2011 16:21
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: Noconst in cointegration relationship in VECM
It seems that I was not able to write what I mean clearly enough.
My question is almost the same as:
constraint 1 [_ce1]ln_ne = -1
constraint 2 [_ce1]ln_se = 1
vec ln_ne ln_se, bconstraints(1/2)
But what I want (which is different) is for the constant to be zero (i.e. dropped from the cointegrating vector).
Below I have copied in my table, where I am saying that lnprice is zero in the cointegrating vector, but I want the constant to be zero.
beta | Coef. Std. Err. z P>|z| [95% Conf. Interval]
_ce1 |
lnprice | (omitted)
lnrent | 2.505605 .8862366 2.83 0.005 .7686126 4.242596
lnAM | -2.238693 .9118165 -2.46 0.014 -4.025821 -.4515656
_cons | -2.000826 . . . . .
Is that possible?
Best regards,
Svein Olav
2011/4/28 DE SOUZA Eric <eric.de_souza@coleurope.eu>:
> It is not clear to me which case you want:
> trend(constant) include an unrestricted constant in model; the
> default
> trend(rconstant) include a restricted constant in model
> trend(trend) include a linear trend in the cointegrating equations
> and a quadratic trend in the undifferenced data
> trend(rtrend) include a restricted trend in model
> trend(none) do not include a trend or a constant See page 477 of the
> time series manual for a detailed explanation
> Eric de Souza
> College of Europe
> Brugge (Bruges), Belgium
> http://www.coleurope.eu
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu
> [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Svein Olav
> Krakstad
> Sent: 28 April 2011 15:27
> To: statalist@hsphsun2.harvard.edu
> Subject: st: Noconst in cointegration relationship in VECM
> Hi,
> I am trying to have no constant in the cointegration vector in the VEC model.
I know how to put constraints on the variables, but the constant does not work in the same way. For example, if I restrict the variable lnrent:
> constraint define 1 [_ce1]lnrent=1
> vec lnprice lnrent lnAM, rank(1) lags(2) trend(constant)
> bconstraints(1)
> It works fine.
> Then I am thinking that it should only be to restrict the constant in the same way:
> constraint define 1 [_ce1]_cons=0
> vec lnprice lnrent lnAM, rank(1) lags(2) trend(constant)
> bconstraints(1)
But this does not work.
> Could you please help me to figure this out?
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
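An aside for readers experimenting outside Stata (not part of the thread): the same modelling choice -- a constant in the short-run VECM equations but none inside the cointegrating relation -- can be sketched with Python's statsmodels. The parameter names below are from memory of the statsmodels VECM API and should be checked against its documentation:

import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

# Invented random-walk data standing in for lnprice, lnrent, lnAM
# (purely illustrative; not actually cointegrated).
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 3)).cumsum(axis=0)

# deterministic="co": constant outside the cointegrating relation only,
# i.e. no constant in the cointegrating vector -- roughly what the
# original poster asked for.  "ci" would instead restrict the constant
# to lie inside the relation.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()
print(res.beta)  # cointegrating vector, with no restricted constant term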
|
{"url":"http://www.stata.com/statalist/archive/2011-04/msg01331.html","timestamp":"2014-04-20T06:24:02Z","content_type":null,"content_length":"13366","record_id":"<urn:uuid:85e8672e-df8a-44f8-ac30-7111a5d2b14a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools
Estimating and Rounding Decimals
Reviewer: MissMath, Oct 11 2006 12:28:24:377PM
Review based on:
As an assignment in an education class, in order to familiarize myself with MathTools, I was asked to select and rate a tool. I chose this tool because many of the students I work with in my job as a
math tutor often have difficulty with the math skill of rounding.
Appropriate for:
introduction to a concept, practice of skills and understandings
Other Comments:
The concept is presented in a clear and concise way. I also like that students are given a chance to practice the skill at the end of the instruction. This would be a valuable tool to use in the
classroom or home for instruction and/or reinforcement.
What math does one need to know to use the resource?
place value to the millions place
What hardware expertise does one need to learn to use the resource?
Keyboard use
What extra things must be done to make it work?
It requires a lot of reading, which some users may not like. The fact that you can skip to the practice activity, which is very clearly explained, minimizes the need to read.
How hard was it for you to learn?
Very Easy
It was clear, quick, and to the point!
Ability to meet my goals:
Very Effective
Recommended for:
Math 3: Number sense
Math 4: Number sense
Math 5: Number sense
Math 6: Number sense
|
{"url":"http://mathforum.org/mathtools/all_reviews/12657/","timestamp":"2014-04-16T19:31:45Z","content_type":null,"content_length":"13359","record_id":"<urn:uuid:876f9446-7a86-4b4b-a470-c12b92760a3d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics Education Resources on the World Wide Web
"The best aspect of the Internet has nothing to do with technology. It's us." -Steven Levy
If 1995 was The "Year of the Internet" (Newsweek, Jan. 1, 1996), for educators it is quickly becoming the era of information-on-demand and collaboration-at-a-distance. Among the Internet's many
resources is the World Wide Web, a global network of information "servers" provided by individuals, organizations, businesses, and federal agencies who are offering documents, data, images, and
interactive sessions. For teachers, students, and parents, this means access to information not in textbooks or the local library, fast-breaking news, ideas for lessons and activities, and, best of
all, collaboration with others on projects of mutual interest.
Comprehensive Sites
-The Math Forum
Offers a large searchable database and Ask Dr. Math, a question-answering service. For teachers, students, parents, researchers.
-Mathematics Archives
Good place to start a search; includes educational software, laboratory notebooks, problem sets, lecture notes, and an extensive hotlist of K-12 teaching materials. Useful to mathematicians,
educators, students, and researchers.
-The Geometry Center
Offers multimedia documents; a downloadable software graphics archive; course materials; workshops, seminars, and courses; and news about events. Useful to mathematicians, students, and educators.
Provides over 100,000 pages of Mathematica programs, documents, examples, etc. Browse or search the archive by author, title, keyword, or item number. Useful to mathematicians, software users,
students, educators, and engineers.
-Schoolnet: Math Department
Offers a searchable database of teaching materials and a hotlist of math-related Websites, from "Math Art" to "Word Problems for Kids."
-The Schools of California Online Resource for Educators: Mathematics
Extensive, searchable database of lesson plans, activities, projects, etc. by grade level. Other databases of assessment tools and organizations. Submissions are invited.
Organizations & Centers
-American Mathematical Society
Offers an extensive array of services and guides to the literature in mathematics. A resource for mathematicians and mathematics educators.
-MAA Online
Site of the Mathematical Association of America. See the online version of "Quantitative Reasoning for College Graduates: A Complement to the Standards." It addresses the goal of quantitative
literacy for college students. Useful to college mathematics educators and others.
-Eisenhower National Clearinghouse for Mathematics and Science Education
Supported by the U.S. Department of Education, this site offers curriculum resources in support of reform efforts in math and science. Provides an online copy of the NCTM standards for K-12
mathematics. For K-12 educators and students.
-ERIC Clearinghouse for Science, Mathematics, and Environmental Education
Supported by the U.S. Department of Education, this site offers documents, links to other sites in the ERIC system, and access to the world's largest database of education-related materials. Useful
to anyone interested in education.
-NCTM Home Page
The National Council of Teachers of Mathematics offers an array of services and a catalog of materials for teachers and specialists in mathematics education.
-Mathematical Sciences Education Board
This National Research Council site offers general information, publications, and news releases. Useful to teachers and math specialists.
-Institute for Mathematics and Science Education
Devoted to the Teaching Integrated Math and Science (TIMS) project for elementary school teaching and related programs. For mathematics teachers and specialists at all levels.
For Parents and Children
-A Math Website! For Middle School Students
Provides an annotated listing of fun and interesting web sites. A great place to search for resources for children.
-Kids Web--Mathematics
A meta-index of quality resources and links. This site provides a broad range of articles, games, puzzles, and demonstrations to explore math, from the origins of algebra and geometry to new
innovations in chaos theory and fractals.
-Mathematics to Prepare Our Children for the 21st Century
http://dimacs.rutgers.edu/nj^math^coalition/pguide/pguide.html
An electronic booklet that includes activities for parents to do with their children, ways of helping children with mathematics, and more.
-Treasure Trove of Mathematics
One person's virtual "encyclopedia" of mathematics information and tidbits arranged alphabetically. Fascinating to browse.
Lessons, Activities, and Resources
-Appetizers and Lessons for Math and Reason
A massive collection of short items, sometimes funny and very clear. The site also offers reflections on teaching and using the site in lessons. Useful to high school math students and teachers.
-The Schoolhouse Classroom: Mathematics
Searchable by menu, topic, and keyword, this site provides lesson plans, materials, papers, projects, and more. Large hotlist of links to state, national, and international standards.
-AIMS Education Foundation
The Activities Integrating Mathematics and Science Education Foundation publishes books and a magazine for educators. This site includes sample activities and lesson plans, puzzles for classroom use,
and links to other education sites.
-Busy Teacher's Website K-12: Mathematics
Fascinating collection of links to sites on fractals, games and toys, history, lesson plans, and classroom activities. Great resource for teachers.
-Secondary Mathematics Assessment and Resource Database
Offers a database of authentic assessment tools and activities submitted by teachers, searchable by keywords and menus. Also, tips for teachers, puzzles, software, and links to related Websites.
-Math Teacher Link
Web page for a professional development forum for mathematics teachers.
-Mathematics, Science, and Technology Education
Offers a directory of centers, graduate programs, and other internet resources. Of particular interest to teachers are the K-12 statistics page and the K-12 mathematics lessons.
Interactive Sites & Collaborative Projects
-Science And Math Initiatives (SAMI)
Part of a project to improve math and science education and resource access in rural settings. This database serves as a clearinghouse of resources, funding, and curriculum for rural math and science
-Cornell Math and Science Gateway for Grades 9-12
Useful page for teachers with computers in the classroom. Most links are for teens, but some are on curriculum and teaching strategies. Useful to high school teachers, students, and parents.
-Midlands Improving Math and Science
Offers a searchable database of plans, puzzles, and other resources useful to teachers, K-12 administrators, and other specialists.
-The Mathematics On-Line Bookshelf
Offers a searchable database of math books.
Interesting and Unique Sites
-Mega Mathematics
Engages elementary school teachers and students with unusual and important ideas on the frontier of mathematics.
-Favorite Mathematical Constants
-About Today's Date
Facts and events associated with the numbers of each date. Of interest to curious people of all ages.
-Fibonacci Numbers and Nature
References to technical papers, activities with vegetables and fruit, and links to other Websites.
-MathMol Home Page
Software, activities, hypermedia and more relating to the emerging field of molecular modeling. For teachers, developers of instructional materials, and others interested in molecular modeling.
-MacTutor History of Mathematics
Provides the biographies of over 550 mathematicians. Of particular interest to mathematicians, math educators, historians, and biographers.
-History of Mathematics
Over 1,000 important mathematicians, searchable by name, chronology, or geographic region. Hotlist of sites containing bibliographies and historical references. Useful to mathematicians, math
educators, historians, and enthusiasts.
-A Catalog of Mathematics Resources on WWW and the Internet
A massive list of links to mathematics and mathematics education sites worldwide. One section of the "catalog" is devoted to math teaching, math education, and math student servers.
-Yahoo! Mathematics Resources
Offers a searchable database of links of value to anyone searching for mathematics resources.
-Directory of Mathematics Resources
Part of the Galaxy guide to WWW information and services. Provides an extensive directory of web links to collections of materials, periodicals, organizations, and other directories.
-Blue Web'n Applications: Mathematics WebResource
http://www.kn.pacbell.com/cgi-bin/listApps.pl?Mathematics&WebResource
This site by Pacific Bell Knowledge Network provides an annotated list of sites, with ratings.
-Directory of Mathematics Links
Part of the WWW Virtual Library, this site offers an extensive collection of links to online materials, including high school servers, gopher servers, newsgroups, electronic journals, a software
index, bibliographies, and more.
-Mathematics Sources
Provides sources grouped into Preprint Archives, Database Gateways, Organizations, Mathematics Departments, and other categories. Useful for educators and specialists.
-Mathematics Information Servers
Perhaps the most comprehensive listing of electronic resources worldwide for mathematics. Particularly useful for mathematicians, educators, and specialists in mathematics education.
-Mathematics Education International Directory
http://acorn.educ.nottingham.ac.uk//SchEd/pages/gates/names.html
Contact information for many mathematics educators worldwide, with e-mail addresses. Sponsored by the University of Nottingham, UK; searchable by country and organization.
About the Authors
David Haury is Director of the ERIC Clearinghouse for Science, Mathematics and Environmental Education, and Associate Professor of Mathematics, Science, and Technology Education at The Ohio State
University. Linda Milbourne is Associate Director of the ERIC Clearinghouse for Science, Mathematics, and Environmental Education.
This digest was funded by the Office of Educational Research and Improvement, U.S. Department of Education under contract no. RR93002013. Opinions expressed in this digest do not necessarily reflect
the positions or policies of OERI or the Department of Education.
Back to the Table of Contents
Here we provide an annotated listing of Web resources relating to mathematics education. Though not an exhaustive list of what is available, these sites represent the range of resources, and they are
excellent places to begin your own journey through the web of interconnected sites.
ED402157 Sep 96 Mathematics Education Resources on the World Wide Web. ERIC Digest.
Authors: Haury, David L.; Milbourne, Linda A.
ERIC Clearinghouse for Science, Mathematics, and Environmental Education, Columbus, Ohio.
|
{"url":"http://www.kidsource.com/kidsource/content4/math.resources.html","timestamp":"2014-04-17T21:59:39Z","content_type":null,"content_length":"23525","record_id":"<urn:uuid:4e99f83e-647c-4b4c-b3ac-3b504d48e335>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
interpretive validity definition
Best Results From Wikipedia Yahoo Answers Youtube
From Wikipedia
Probability interpretations
The word probability has been used in a variety of ways since it was first coined in relation to games of chance. Does probability measure the real, physical tendency of something to occur, or is it
just a measure of how strongly one believes it will occur? In answering such questions, we interpret the probability values of probability theory.
There are two broad categories of probability interpretations which can be called 'physical' and 'evidential' probabilities. Physical probabilities, which are also called objective or frequency
probabilities, are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event (such as the dice yielding a six) tends
to occur at a persistent rate, or 'relative frequency', in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. Thus talk about physical
probability makes sense only when dealing with well defined random experiments. The two main kinds of theory of physical probability are frequentist accounts (such as those of Venn, Reichenbach and
von Mises) and propensity accounts (such as those of Popper, Miller, Giere and Fetzer).
Evidential probability, also called Bayesian probability (or subjectivist probability), can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its
subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in
terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's) interpretation, the subjective interpretation (de Finetti and Savage), the
epistemic or inductive interpretation (Ramsey, Cox) and the logical interpretation (Keynes and Carnap).
Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is
taken by followers of "frequentist" statistical methods, such as R. A. Fisher, Jerzy Neyman and Egon Pearson. Statisticians of the opposing Bayesian school typically accept the existence and
importance of physical probabilities, but also consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations
of probability rather than theories of statistical inference.
The terminology of this topic is rather confusing, in part because probabilities are studied within so many different academic fields. The word "frequentist" is especially tricky. To philosophers it
refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "frequentist probability" is just what philosophers call physical
(or objective) probability. Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that recognises only physical probabilities. Also the word
"objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and
epistemic probabilities.
Classical definition
The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as
rolling dice) it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely .
This can be represented mathematically as follows: if a random experiment can result in $N$ mutually exclusive and equally likely outcomes, and if $N_A$ of these outcomes result in the occurrence of the event $A$, the probability of $A$ is defined by $P(A) = N_A / N$.
There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. But some important random
experiments, such as tossing a coin until it lands heads, give rise to an infinite set of outcomes. And secondly, you need to determine in advance that all the possible outcomes are equally likely
without relying on the notion of probability to avoid circularity—for instance, by symmetry considerations.
Frequentists posit that the probability of an event is its relative frequency over time, i.e., its relative frequency of occurrence after repeating a process a large number of times under similar
conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with
sufficient information (see Determinism); or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind
is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting a heads is 1/2, not because there are two equally likely outcomes but because repeated
series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.
If we denote by $n_a$ the number of occurrences of an event $\mathcal{A}$ in $n$ trials, then if $\lim_{n \to \infty} n_a / n = p$, we say that $P(\mathcal{A}) = p$.
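A tiny simulation, added here for illustration, shows the relative frequency settling toward $p = 1/2$ for a fair coin as the number of trials grows:

import random

random.seed(42)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    # the relative frequency n_a / n should approach p = 1/2 as n grows
    print(n, heads / n)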
The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determ
In mathematical logic, satisfiability and validity are elementary concepts concerning interpretation. A formula is satisfiable with respect to a class of interpretations if it is possible to find an
interpretation that makes the formula true. A formula is valid if all such interpretations make the formula true. These notions can be relativised to satisfiability and validity within an axiomatic
theory, where we count only interpretations that make all axioms of that theory true.
The opposites of these concepts are unsatisfiability and invalidity, that is, a formula is unsatisfiable if none of the interpretations make the formula true, and invalid if some such interpretation
makes the formula false.
These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.
The four concepts can be raised to apply to whole theories: a theory is satisfiable (valid) if one (all) of the interpretations make(s) each of the axioms of the theory true, and a theory is
unsatisfiable (invalid) if all (one) of the interpretations make(s) each of the axioms of the theory false.
Reduction of validity to satisfiability
For classical logics, it is generally possible to reexpress the question of the validity of a formula to one involving satisfiability, because of the relationships between the concepts expressed in
the above square of opposition. In particular φ is valid if and only if ¬φ is unsatisfiable, which is to say it is not true that ¬φ is satisfiable.
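A brute-force propositional sketch of that reduction, added for illustration (formulas are modelled as Python functions over booleans; the examples are invented):

from itertools import product

def satisfiable(formula, n_vars):
    # satisfiable: some truth assignment makes the formula true
    return any(formula(*vals) for vals in product((False, True), repeat=n_vars))

def valid(formula, n_vars):
    # phi is valid iff (not phi) is unsatisfiable
    return not satisfiable(lambda *vals: not formula(*vals), n_vars)

print(valid(lambda p: p or not p, 1))         # True: a tautology
print(satisfiable(lambda p: p and not p, 1))  # False: unsatisfiable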
Propositional satisfiability
In the case of classical propositional logic, satisfiability is decidable for propositional formulae. In particular, satisfiability is an NP-complete problem, and is one of the most intensively
studied problems in computational complexity theory.
Satisfiability in first-order logic
Satisfiability is undecidable and indeed it isn't even a semidecidable property of formulae in first-order logic (FOL). This fact has to do with the undecidability of the validity problem for FOL.
The universal validity of a formula is a semi-decidable problem. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has
counter-models iff its negation is satisfiable). So the problem of logical validity would be decidable, which contradicts the Church-Turing theorem.
From Yahoo Answers
Question:i have seen people use this argument to prove that there is no god. they say that god cannot be omnipotent, as he cannot create a stone that he cannot lift. if he could, there would be
something he couldn't do, ie. lift the stone. thus, he cannot be omnipotent, and the Judeo-christian god cannot be. i think this is a fallacy. it has a premise which makes no sense. it is like the
similar question, what happens when an unstoppable force hits an unbreakable shield? does it break (thus not unbreakable) or does the force dissipated (thus not unstoppable). this is a ridiculous
question, because the premise is self contradictory. either there is an unstoppable force, so there is no shield that cannot be broken, or there is an unbreakable shield, hence no unstoppable force.
the two are mutually exclusive. all the question is asking is "what happens when something impossible occurs?" this is an illogical question. the definition of impossible is that it will never occur,
so asking what happens when it does is ridiculous. this obviously only applies when natural laws are followed. when they are broken, then there is no reason to worry about what happens: we cannot
know without knowing which laws were broken, how, and what the effect is. thus, the question cannot be answered, as it is a fallacious question. the same goes for the original question. you are
presupposing that there is such a theoretical concept as a stone God cannot lift. this is a fallacy. there is no such thing as a problem an omnipotent God cannot do: trying to think of such a concept
is a waste of time. the words "a stone God cannot lift" mean nothing real. it is like asking "what happens when something both occurs and doesn't occur?" as it defies logic, the answer is not going
to be logical. thus, the answer to the question "can God create a stone he cannot lift?" is that there is no question, and that the premise is fatally flawed. this is all assuming that we mean
"cannot lift with no miracles." however, with a miracle (ie. a breach of natural laws) anything is possible: there are no rules, not even logical ones.
Answers:"There are no rules, not even logical ones." is itself a rule. So your proposed solution defeats itself. A retreat into irrationality is not the proper response to this sort of fallacy since
rationality and logic can deal with it perfectly well. The problem is two things. Firstly, that you think there's actually a stone. And secondly, that you think that it's a limitation on power to be
unable to do things with nothing. But neither of these are true. Elaborating on this we can say: 1) The phrase "a stone created by an omnipotent being that is so heavy that that being can't lift it"
has no referent in logical space. That is, it picks out no possible thing. When we make the sound "the stone" followed by a certain description we are inclined to think, because we can form clear
mental images of various sorts of stones, that the description must actually be of some sort of possible stone or another ... but it's not. It's not a stone and neither is it anything else. It's
utterly and absolutely nothing whatsoever ... not anywhere or anytime or anyhow. It is no more real than round squares or four-cornered triangles. and 2) A being's potency is not diminished by being
unable to do things with impossibilia (necessary nonexistents), only with possibilia. That is, the space of possible actions is the only space relevant to determining how powerful a being is; not the
null space of impossible actions. Power is about efficacious actions, and a being is powerful exactly insofar as possible actions are available to them, not insofar as nonactions are available.
Action on nothing is a nonaction. So, in summary, no such stone exists anywhere or anytime to be or have been either created or lifted or for any other positively specified thing whatsoever to be
done to it or be true of it. If there are any created things, then an omnipotent being can create them. If there are any lifted things, then an omnipotent being can lift them. But for things that are
neither stones nor any other thing at all then neither creating nor lifting nor anything else is done with or to them by anything at all, omnipotent or otherwise. None of the above constitutes a
proof of an omnipotent being of any sort. It only shows that the stone-paradox and similar arguments can't show the concept of such a being to be inconsistent.
Question:Democrats get called socialist, instead of defending themselves on why they are not socialist they simply state: "learn the definition of socialism" Are they only capable of repeating what
obama whispers in their ears? well here it is for everyone The definition of socialism as Stated by the Mariam Webster dictionary "Main Entry: so cial ism Pronunciation: \ s -sh - li-z m\ Function:
noun Date: 1837 1 : any of various economic and political theories advocating collective or governmental ownership and administration of the means of production and distribution of goods 2 a : a
system of society or group living in which there is no private property b : a system or condition of society in which the means of production are owned and controlled by the state 3 : a stage of
society in Marxist theory transitional between capitalism and communism and distinguished by unequal distribution of goods and pay according to work done" I love how a majority of the answers to this
question were instead about whether or not democrats are socialist instead of if the argument they use is valid.
Answers:Yes, isn't it funny that they spend all day saying "You don't know what socialism is," then they NEVER demonstrate that they know, either. Most internet discussion forums seem to be heavily
populated with individuals who go to great lengths to extol the virtures of socialism. The problem is, none of them seem to agree about what socialism actually is. See the answers to my question:
What IS socialism? An escape from poverty, or a luxury for prosperous exporter nations? I have asked this question several times. Each time, I get answers from around a dozen persons. And no two of
them give the same answer. But each person is supremely confident that his or her definition is the right one, and that this is the authoritative definition that we should all view as the true
meaning of socialism. Even more amusing is, even though the other answers are plainly visible to them, each person seems blissfully aware that nobody else agrees with their chosen definition. Less
amusingly, even though they consistently demonstrate a total lack of consensus about what socialism is, they frequently mock and berate opponents of socialism for "not understanding" what socialism
is, then they invariably fail to demonstrate that they understand, either. Their answers tend to fall into four general categories: The purist view: Socialism has never existed anywhere. No nation
has ever established a pure socialist state according to the original definition of socialism. Subgroups of this view include those who declare that socialism is a stepping stone to communism, which
is correct according to the original definition; others dutifully recite the definition from the dictionary. The indiscriminate vew: Socialism already exists almost everywhere. All forms of public
works and charity are socialist, because they are done to benefit all of society. Roads, bridges, schools, police/fire departments, national defense, and charities to help the poor are all forms of
socialism. And almost all nations have at least some of these, therefore all nations are socialist. The progressive view: Socialism exists in certain "progressive" nations, such as Canada and many
countries in Europe. Some individuals count Japan as a "progressive" socialist nation. The individuals who hold to this view seem unaware that most of these so called "progressive" nations are highly
capitalistic exporter nations, which seems to contradict the idea of having some form of Marxism. And the "progressive" nations that are not strong exporters are debtor nations. See this website to
get the economic statstics for these nations: http://www.tradingeconomics.com/Economics/Balance-Of-Trade.aspx?Symbol=NOK The remnant of communism view: Socialism exists in formerly communist nations
that have not yet fully abandoned Marxism, and they are making a gradual transition to capitalism. This view is rarely cited, but I include it here for completeness. Of course, there are also various
answers from those who strenuously oppose socialism. I am focused here on the answers from those who advocte socialism, or who give a neutral but objective answer. Tne only thing that seems to be the
common thread of socialist advocacy is: taking money from those who have it, and giving money to those who don't. And we're all supposed to believe that the advocates have no motive other than pure
unaduterated altruism in their hearts. So the fundamental question is: how can we regard socialism as a viable political cause when even its most ardent advocates can't agree on what it is? And when
there is scant actual evidence that it exists or ever has existed, other than as a remnant of a failed communist state?
Question:I'm talking about the technical definition of validity, which says: An argument is valid if and only if the truth of its premises entails the truth of its conclusion. Heeltap: I meant
exactly what I said. An inductive argument with true premises cannot entail the truth of its conclusion. It can only make the conclusion more probable. That is why the notion of validity doesn't
apply to inductive arguments. Inductive arguments can be considered "inductively strong" or "inductively weak."
Answers:This is a constructive comment to your Q and is intended to add to it's educational value. I'm sure you meant to say: "A deductive argument is valid if and only if the truth of its premises
entails the truth of its conclusion. Not so, if we refer to *inductive* arguments where relevant premises do not entail, but only provide a degree of support for the conclusion. For more
understanding see: http://www.jimpryor.net/teaching/vocab/validity.html "Most of the arguments philosophers concern themselves with are--or purport to be--deductive arguments. Mathematical proofs are
a good example of deductive argument. Most of the arguments we employ in everyday life are not deductive arguments but rather inductive arguments. Inductive arguments are arguments which do not
attempt to establish a thesis conclusively. Rather, they cite evidence which makes the conclusion somewhat reasonable to believe. The methods Sherlock Holmes employed to catch criminals (and which
Holmes misleadingly called "deduction") were examples of inductive argument. Other examples of inductive argument include: concluding that it won't snow on June 1st this year, because it hasn't
snowed on June 1st for any of the last 100 years; concluding that your friend is jealous because that's the best explanation you can come up with of his behavior, and so on. It's a controversial and
difficult question what qualities make an argument a good inductive argument. Fortunately, we don't need to concern ourselves with that question here. In this class, we're concerned only with
deductive arguments. Philosophers use the following words to describe the qualities that make an argument a good deductive argument: Valid Arguments We call an argument deductively valid (or, for
short, just "valid") when the conclusion is entailed by, or logically follows from, the premises. Validity is a property of the argument's form. It doesn't matter what the premises and the conclusion
actually say. It just matters whether the argument has the right form. So, in particular, a valid argument need not have true premises, nor need it have a true conclusion. The following is a valid
argument: All cats are reptiles. Bugs Bunny is a cat. So Bugs Bunny is a reptile. Neither of the premises of this argument is true. Nor is the conclusion. But the premises are of such a form that if
they were both true, then the conclusion would also have to be true. Hence the argument is valid. To tell whether an argument is valid, figure out what the form of the argument is, and then try to
think of some other argument of that same form and having true premises but a false conclusion. If you succeed, then every argument of that form must be invalid. A valid form of argument can never
lead you from true premises to a false conclusion. " nb: I too would like to ask: "Why is the thumbs-down dwarf here? You would think that an *ignorant*dwarf would have better things to do..."
Question:A rational number is an element belonging to Q , the set of all numbers which can be represented in the form p/q where q!=0. Why do people stress on this condition so much ? p/q where q!=0 ,
cant they just tell fractions ? This gets me into this question , p/0 is not a fraction ? What about p/1 ? And all decimals are fractions , right ? If at all p/1 is a fraction , then it is a decimal
p.000 , so does it belong to the natural/whole/integer(Z) set ? Thanks!
Answers:We use the concepts of ratios and rational numbers in cases where we're not exactly talking about fractions, although there is a mathematical equivalence. We could refer to any portion less
than the whole of something as a "fraction" of it, but some fractions (by that definition) are not rational numbers. Between 0 and 1, there are actually an infinite number of rational numbers, but
there are also an infinite number of irrational ones. The rational and irrational numbers together make up the real numbers. Back to this in a moment, but first let me deal with specific cases you
brought up. x/0 is not a fraction for any real number x, nor is it a rational or irrational number. Its value is undefined. p/1 is a rational number for any integer p. The integers are a subset of
the rational numbers. All decimals are fractions. Decimals are not a separate type of number, but just a particular way of writing numbers. But decimals only represent rational numbers if they meet
one of two conditions: (1) they have a finite length, or (2) at some point, they repeat a sequence infinitely. For example, 2.1234 represents the ratio 21,234 / 100,000 and is therefore rational.
3.63636363.... (infinitely repeating "63"s forever) represents the ratio 40/11 and is therefore rational. But there are other numbers which, if converted to decimal form, would continue infinitely
and never repeat. These include the square root of 3 or PI, the ratio of a circle's circumference to its diameter. These are irrational. They are real numbers, but they cannot be represented as a
ratio of integers. So the concept of rationality turns out to be a bit more far-reaching than the notion of "fractions." That's why the emphasis. Just as the set of natural numbers was basic
knowledge for a lot of math, the set of rationals becomes the basis for a lot more.
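A small check of the repeating-decimal claim in the answer above (3.636363... = 40/11), added here for illustration using Python's exact rational arithmetic:

from fractions import Fraction

# x = 3.636363...  =>  100x - x = 363.6363... - 3.6363... = 360,
# so 99x = 360 and x = 360/99, which Fraction reduces automatically.
x = Fraction(360, 99)
print(x)         # 40/11
print(float(x))  # 3.6363636363636362 (the repeating 63s, up to float rounding)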
From Youtube
Time-Dependent Scattering from a Rectangular Barrier Potential in the Causal Interpretation :The quantum wave (the squared wavefunction or probability amplitude in the Born interpretation) propagates
freely through two-dimensional configuration space until it hits the potential barrier. The potential barrier acts as beam splitter. In the case of the barrier one finds that in addition to
reflection and transmission there is a finite probability of a particle being trapped inside the barrier for a finite time. The initial quantum wave function is a Gaussian packet: In the causal
interpretation of quantum theory (often called de Broglie Bohm interpretation), every quantum particle has a definite position and momentum at all times, but the trajectories are not measurable
directly. In the Bohm theory, the velocities of the particles are given by the wavefunction. The video shows the squared wavefunction, the particles and the trajectories in the configuration space.
Programmed by: Klaus von Bloh
Dr Michael Wolf - Case Validation by Philippe de la Messuziere COMETA associate 2011 - Part 1 of 3 :Neil Gould, founder of Exopolitics Hong Kong, Director of the Exopolitics Institute interviews
French aristocrat Philippe de la Messuziere; friend and confidant of the late Dr Michael Wolf [Kruvant]; a scientist associated with the secret government. First hand witness testimony from Philippe
brings forth amazing disclosures about ETs working with Humans in underground bases, Dr Wolf cloning a human from his own biological material and a picture of an ET called Kolta photographed with his
ET boss in reflection; at Dr Wolf's apartment. Philippe met Kolta and describes his ET biological makeup including his neurological advantages. This film is the definitive back breaker of the
debunking campaign [Cointelpro] against the revelations of Dr Michael Wolf [Kruvant]. Debunking of this story continues today - Filmed with an Iphone
Is the property of not containing $\mathbb{F}_2$ invariant under quasi-isometry?
Is the property of not containing the free group on two generators invariant under quasi-isometry? Amenability is, so if there is a counterexample it is also a solution to the von Neumann-Day problem
(which of course already has a solution).
gr.group-theory geometric-group-theory
1 Answer
It is a famous open problem. Akhmedov in MR2424177 claimed he could prove that the answer is "no". No proof exists, so I guess he discovered a gap in his argument.
Mark, is the supposed proof contained in that Thompson F preprint, or is it something separate? – Yemon Choi Feb 1 '12 at 2:35
Thanks Mark.-- Justin – Justin Moore Feb 1 '12 at 2:43
@Yemon: That is separate. The paper MR2424177 (see MathSci) actually contains the claim, but proves a much weaker (still nice, though!) result where "free subgroups" are replaced
by "free subsemigroups" or "no non-trivial law". He says that the "big example" will be in the sequel of that paper but the sequel never happened. – Mark Sapir Feb 1 '12 at 2:55
@Mark: thank you for the information. – Yemon Choi Feb 1 '12 at 3:00
Re: Recursive descent and left recursion
From: fjh@murlibobo.cs.mu.OZ.AU (Fergus Henderson)
Newsgroups: comp.compilers
Date: 16 Jan 1997 20:09:01 -0500
Organization: Comp Sci, University of Melbourne
References: 97-01-099 97-01-126
Keywords: parse, LL(1), LR(1)
mfinney@lynchburg.net wrote:
>> I have noticed the occassional post here, as well as assertions in
>> various texts, that left recursion is not usable with recursive
>> descent (and LR parsers in general).
cfc@world.std.com (Chris F Clark) writes:
>I thought certainly someone else would catch this mis-impression, but
>since no one has mentioned it. LR parsers allow left-recursion as
>well as right recursion and any other recursion. It is LL parsers
>which don't like left recursion. In fact, that is the main reason for
>using LR (or LALR) parsing over LL.
All true... but now you go on to add a mis-impression of your own,
because you confuse LR parsing with operator precedence parsing:
>You can write your expression
>grammars "naturally" with out inventing extra non-terminals to handle
>precedence levels.
Yes, but only if your LR parsing tool/technique allows operator
precedence grammars -- and you can do that with LL parsing too, if you
have an LL parsing tool/technique that supports operator precedence
grammars. Certainly it's not that hard to write an operator
precedence recursive descent parser.
Perhaps you meant "... without having to left-factor your grammar"?
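For illustration (my addition, not part of the original post): a minimal
precedence-climbing expression parser, written here in Python for brevity. The
single-character tokens and the two-level operator table are invented for the
example; the point is only that handling operator precedence in a
recursive-descent style takes very little code.

# Tokens are single characters; numbers are single digits, for brevity.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}   # binding power of each operator

def parse(text):
    expr, rest = parse_expr(list(text), 0)
    assert not rest, "trailing input"
    return expr

def parse_expr(tokens, min_prec):
    lhs = tokens.pop(0)                    # a primary: here, just a digit
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # parse the right-hand side one level tighter (left-associative)
        rhs, tokens = parse_expr(tokens, PREC[op] + 1)
        lhs = (op, lhs, rhs)               # build an AST node
    return lhs, tokens

print(parse("1+2*3-4"))   # ('-', ('+', '1', ('*', '2', '3')), '4')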
Fergus Henderson <fjh@cs.mu.oz.au>
WWW: <http://www.cs.mu.oz.au/~fjh>
PGP: finger fjh@128.250.37.3
I met up with some friends yesterday for lunch. On the table was a good big block of cheese. It looked rather like a cube. As the meal went on we started cutting off slices, but these got smaller and
smaller! It got me thinking ...
What if the cheese cube was $5$ by $5$ by $5$ and each slice was always $1$ thick?
It wouldn't be fair on everyone else's lunch if I cut up the real cheese so I made a model out of multilink cubes:
You could of course, just make $5$ slices but I wanted to have a go at something else - keeping what is left as close to being a cube as possible.
You can see that it's a $5$ by $5$ by $5$ because of the individual cubes, so the slices will have to be $1$ cube thick.
So let's take a slice off the right hand side, I've coloured it in so you can see which bit I'm talking about:
The next slice will be from the left hand side (shown in a different colour again):
Remember I'm setting myself the task of cutting so that I am left with a shape as close to a cube shape as possible each time.
So the next cut is from the top. Hard to cut this so I would have put it on its side!
I do three more cuts to get to the $3$ by $3$ by $3$ and these leave the block like this:
I'm sure you've got the idea now so I don't need to talk as much about what I did:
That leaves you with two of the smallest size cube $1$ by $1$ by $1$.
If we keep all the slices and the last little cube, we will have thirteen pieces (pictured from above in the original).
C H A L L E N G E
Now we have thirteen objects to explore.
• What about the areas of these as seen from above?
• What about the total surface areas of these?
• What about the volumes of the pieces? (A checking sketch for the volumes follows below.)
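A quick way to check the volumes (my own sketch, not part of the original problem). It repeats the article's rule of always cutting a $1$-thick slice from the longest side of what remains:

# Slice a 5x5x5 block down to a 1x1x1 cube, recording each slice's volume.
dims = [5, 5, 5]
slices = []
while dims != [1, 1, 1]:
    dims.sort(reverse=True)            # the longest side comes first
    slices.append(dims[1] * dims[2])   # slice volume: 1 x (the other two sides)
    dims[0] -= 1                       # the block loses 1 from its longest side
pieces = slices + [1]                  # the twelve slices plus the final unit cube
print(pieces)                          # [25, 20, 16, 16, 12, 9, 9, 6, 4, 4, 2, 1, 1]
print(len(pieces), sum(pieces))        # 13 pieces, total volume 125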
A L S O
Investigate sharing these thirteen pieces out so that everyone gets an equal share.
What about ...?
I guess that once you've explored the pattern of numbers you'll be able to extend it as if you had started with a $10$ by $10$ by $10$ cube of cheese.
Michael K.
I tutored mathematics and physics during my graduate years while earning my PhD in experimental particle physics. I have a passion for mathematics and physics and love continuing to learn myself. I find it rewarding to see others learn in these areas as well. I am qualified to assist in a wide range of subjects. My motto as a tutor is "to be able to put myself out of business": my style is to empower the student to become self-sufficient and confident in their own skills to master the subject.
Michael's subjects
Directory tex-archive/macros/latex/contrib/mh
The mh bundle
Morten Hoegholm (c) 2002-2011
Lars Madsen (c) 2012
email: mh.ctan@gmail.com
License: LaTeX Project Public License
The files in the mh bundle are:
and derived files. The derived files of each .dtx-file are listed
at the top of the respective .dtx-file.
Running TeX on each dtx file extracts the runtime files. See the dtx
files for details.
The breqn package facilitates automatic line-breaking of displayed
math expressions. The package was originally developed by Michael
J. Downes.
This package turns math symbols into macros.
It is required by breqn so that breqn can make intelligent decisions
with respect to line-breaking and other details.
Ensures uniform syntax for math subscript (_) and superscript (^)
operations so that they always take exactly one argument.
Grants access to the current mathstyle which eases several tasks such
as avoiding the many pitfalls of \mathchoice and \mathpalette.
This package is used by flexisym.
The mathtools package provides many useful tools for mathematical
typesetting. It is based on amsmath and fixes various deficiencies
of amsmath and standard LaTeX. It provides:
-- Extensible symbols, such as brackets, arrows, harpoons, etc.
-- Various symbols such as \coloneqq (:=).
-- Easy creation of new tag forms.
-- Showing only the referenced equations.
-- Extensible arrows, harpoons and hookarrows.
-- Starred versions of the amsmath matrix environments for
specifying the column alignment.
-- More building blocks: multlined, cases-like environments, new
gathered environments.
-- Math versions of \makebox, \llap, \rlap etc.
-- Cramped math styles.
-- and more...
mathtools requires mhsetup.
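A minimal usage sketch (my addition, not from the bundle's README), showing two of
the features listed above: the \coloneqq symbol and a starred matrix environment
taking a column-alignment argument:

\documentclass{article}
\usepackage{mathtools}% loads amsmath; mathtools itself requires mhsetup
\begin{document}
% \coloneqq typesets the "defined as" symbol :=
\[ f(x) \coloneqq x^2 + 1 \]
% pmatrix* takes an optional alignment argument; [r] right-aligns the columns
\[ \begin{pmatrix*}[r] -1 & 30 \\ 200 & -4 \end{pmatrix*} \]
\end{document}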
The empheq package is a visual markup extension designed to
function on top of amsmath. It features:
-- Boxing multi line math displays while leaving equation
numbers untouched at the margin. Any kind of box will do.
-- Making the ntheorem package place end-of-theorem markers
-- Placing arbitrary material on either side of math displays.
This includes delimiters that automatically scale to the
correct size.
empheq requires mathtools.
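Again as an illustrative sketch (not from the README): boxing a whole aligned
display while the equation numbers stay untouched at the margin:

\documentclass{article}
\usepackage{empheq}% loads mathtools, and hence amsmath
\begin{document}
\begin{empheq}[box=\fbox]{align}
  a &= b + c \\
  E &= mc^2
\end{empheq}
\end{document}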
The mhsetup package defines various programming tools needed by
both empheq and mathtools. In the future, most of these tools will
probably be an integral part of LaTeX3.
A package for producing split level fractions in both text and
math. This package started as a part of the MH bundle, but is not
integrated into the l3packages bundle.
The bundle is maintained by:
Lars Madsen <daleif@imf.au.dk>
Will Robertson <wspr81@gmail.com>
Joseph Wright <joseph.wright@morningstar2.co.uk>
Please report bugs mh.ctan@gmail.com (or the entire team).
This README file was last revised 2013/02/12.
Name Size Date Notes
README 3117 2013-03-16 07:12:33
breqn-technotes.pdf 173858 2013-03-16 07:12:24
breqn-technotes.tex 5029 2013-03-16 07:12:24
breqn.dtx 235036 2013-03-16 07:12:24
breqn.pdf 543444 2013-03-16 07:12:24
empheq.dtx 150793 2013-03-16 07:12:33
empheq.pdf 311681 2013-03-16 07:12:33
flexisym.dtx 64417 2013-03-16 07:12:24
flexisym.pdf 243515 2013-03-16 07:12:24
mathstyle.dtx 15654 2013-03-16 07:12:24
mathstyle.pdf 173352 2013-03-16 07:12:24
mathtools.dtx 188679 2013-03-16 07:12:33
mathtools.pdf 415097 2013-03-16 07:12:33
mhsetup.dtx 20718 2013-03-16 07:12:33
mhsetup.pdf 233706 2013-03-16 07:12:33
mh – The MH bundle
The mh bundle is a series of packages designed to enhance the appearance of documents containing a lot of math. The main backbone is amsmath, so those unfamiliar with this required
part of the LaTeX system will probably not find the packages very useful. Component parts of the bundle are:
breqn, empheq, flexisym, mathstyle, mathtools and mhsetup,
The empheq package is a visual markup extension of amsmath. Empheq allows sophisticated boxing and other marking of multi-line maths displays, and fixes problems with the way that the ntheorem package places end-of-theorem markers. The mathtools package provides many useful tools for mathematical typesetting. It fixes various deficiencies of amsmath and standard LaTeX. The mhsetup package defines various programming tools needed by both empheq and mathtools. The breqn package makes easier the business of preparing displayed equations in LaTeX, including permitting automatic line-breaking within displayed equations. (Breqn uses the mathstyle package to keep track of the current maths typesetting style, something that raw TeX hides from the programmer.)
Version 2013-02-12
License The LaTeX Project Public License 1.3
Copyright 2002-2010 Morten Høgholm
Will Robertson
Maintainer Morten Høgholm
Lars Madsen
Joseph Wright
TDS archive mh.tds.zip
Contained in TeXLive as mh
MiKTeX as mh
Topics a collection of packages
support for typesetting mathematics
|
{"url":"http://ctan.org/tex-archive/macros/latex/contrib/mh","timestamp":"2014-04-18T18:43:35Z","content_type":null,"content_length":"18842","record_id":"<urn:uuid:29a7d9a8-90d8-4d76-9914-32c313f5724b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Deformed products and maximal shadows of polytopes
Results 1 - 10 of 23
, 2003
"... We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum
over inputs of the expected performance of an algorithm under small random perturbations of that input. We me ..."
Cited by 146 (14 self)
Add to MetaCart
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over
inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations.
We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of
- SIAM J. Comput , 1996
"... We show that in the worst case, Ω(n dd=2e\Gamma1 +n log n) sidedness queries are required to determine whether the convex hull of n points in R^d is simplicial, or to determine the number of
convex hull facets. This lower bound matches known upper bounds in any odd dimension. Our result follow ..."
Cited by 26 (7 self)
Add to MetaCart
We show that in the worst case, Ω(n^(⌈d/2⌉−1) + n log n) sidedness queries are required to determine whether the convex hull of n points in R^d is simplicial, or to determine the number of convex hull facets. This lower bound matches known upper bounds in any odd dimension. Our result follows from a straightforward adversary argument. A key step in the proof is the construction of a quasi-simplicial n-vertex polytope with Ω(n^(⌈d/2⌉−1)) degenerate facets. While it has been known for several years that d-dimensional convex hulls can have Ω(n^⌊d/2⌋) facets, the previously best lower bound for these problems is only Ω(n log n). Using similar techniques, we also obtain simple and correct proofs of Erickson and Seidel's lower bounds for detecting affine degeneracies in arbitrary dimensions and circular degeneracies in the plane. As a related result, we show that detecting simplicial convex hulls in R^d is ⌈d/2⌉-hard, in the sense of Gajentaan and Overmars.
"... We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman
and Teng ..."
Cited by 23 (4 self)
Add to MetaCart
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and
, 2007
"... We show that for polytopes P1, P2,..., Pr ⊂ Rd, each having ni ≥ d + 1 vertices, the Minkowski sum P1 + P2 + · · · + Pr cannot achieve the maximum of ∏ i ni vertices if r ≥ d. This complements a
recent result of Fukuda & Weibel (2006), who show that this is possible for up to d − 1 summands. The r ..."
Cited by 11 (1 self)
Add to MetaCart
We show that for polytopes P1, P2,..., Pr ⊂ R^d, each having n_i ≥ d + 1 vertices, the Minkowski sum P1 + P2 + · · · + Pr cannot achieve the maximum of ∏_i n_i vertices if r ≥ d. This complements a
recent result of Fukuda & Weibel (2006), who show that this is possible for up to d − 1 summands. The result is obtained by combining methods from discrete geometry (Gale transforms) and topological
combinatorics (van Kampen–type obstructions) as developed in Rörig, Sanyal, and Ziegler (2007).
, 2007
"... We introduce a deformed product construction for simple polytopes in terms of lowertriangular block matrix representations. We further show how Gale duality can be employed for the construction
and for the analysis of deformed products such that specified faces (e.g. all the k-faces) are “strictly p ..."
Cited by 10 (1 self)
Add to MetaCart
We introduce a deformed product construction for simple polytopes in terms of lowertriangular block matrix representations. We further show how Gale duality can be employed for the construction and
for the analysis of deformed products such that specified faces (e.g. all the k-faces) are “strictly preserved ” under projection. Thus, starting from an arbitrary neighborly simplicial (d−2)
-polytope Q on n−1 vertices we construct a deformed n-cube, whose projection to the last d coordinates yields a neighborly cubical d-polytope. As an extension of the cubical case, we construct matrix
representations of deformed products of (even) polygons (DPPs), which have a projection to d-space that retains the complete ( ⌊ d 2 ⌋ − 1)-skeleton. In both cases the combinatorial structure of the
images under projection is completely determined by the neighborly polytope Q: Our analysis provides explicit combinatorial descriptions. This yields a multitude of combinatorially different
neighborly cubical polytopes and DPPs. As a special case, we obtain simplified descriptions of the neighborly cubical polytopes of Joswig & Ziegler (2000) as well as of the projected deformed
products of polygons that were announced by Ziegler (2004), a family of 4-polytopes whose “fatness ” gets arbitrarily close to 9. 1
- IN 5TH ANNUAL EUROPEAN SYMPOSIUM ON ALGORITHMS (ESA'97 , 1996
"... We develop lower bounds on the number of primitive operations required to solve several fundamental problems in computational geometry. For example, given a set of points in the plane, are any
three colinear? Given a set of points and lines, does any point lie on a line? These and similar question ..."
Cited by 8 (0 self)
Add to MetaCart
We develop lower bounds on the number of primitive operations required to solve several fundamental problems in computational geometry. For example, given a set of points in the plane, are any three
colinear? Given a set of points and lines, does any point lie on a line? These and similar questions arise as subproblems or special cases of a large number of more complicated geometric problems,
including point location, range searching, motion planning, collision detection, ray shooting, and hidden surface removal. Previously these problems were studied only in general models of
computation, but known techniques for these models are too weak to prove useful results. Our approach is to consider, for each problem, a more specialized model of computation that is still rich
enough to describe all known algorit...
, 2000
"... In this paper we address the problem of computing a minimal H-representation of the convex hull of the union of k H-polytopes in R^d. Our method applies the reverse search algorithm to a
shelling ordering of the facets of the convex hull. Efficient wrapping is done by projecting the polytopes onto t ..."
Cited by 6 (1 self)
Add to MetaCart
In this paper we address the problem of computing a minimal H-representation of the convex hull of the union of k H-polytopes in R^d. Our method applies the reverse search algorithm to a shelling
ordering of the facets of the convex hull. Efficient wrapping is done by projecting the polytopes onto the two-dimensional space and solving a linear program. The resulting algorithm is polynomial in
the sizes of input and output under the general position assumption.
"... In this paper, we resolve the smoothed and approximative complexity of low-rank quasi-concave minimization, providing both upper and lower bounds. As an upper bound, we provide the first
smoothed analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extr ..."
Cited by 4 (2 self)
Add to MetaCart
In this paper, we resolve the smoothed and approximative complexity of low-rank quasi-concave minimization, providing both upper and lower bounds. As an upper bound, we provide the first smoothed
analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extreme points of the projection of the feasible polytope onto a k-dimensional subspace, where k is
the rank (informally, the dimension of nonconvexity) of the quasi-concave function. Our smoothed bound is polynomial in the original dimension of the problem n and the perturbation size ρ, and it is
exponential in the rank of the function k. From this, we obtain the first randomized fully polynomialtime approximation scheme for low-rank quasi-concave minimization under broad conditions. In
contrast with this, we prove log n-hardness of approximation for general quasi-concave minimization. This shows that our smoothed bound is essentially tight, in that no polynomial smoothed bound is
possible for quasi-concave functions of general rank k. The tools that we introduce for the smoothed analysis may be of independent interest. All previous smoothed analyses of polytopes analyzed
projections onto two-dimensional subspaces and studied them using trigonometry to examine the angles between vectors and 2-planes in R n. In this paper, we provide what is, to our knowledge, the
first smoothed analysis of the projection of polytopes onto higher-dimensional subspaces. To do this, we replace the trigonometry with tools from random matrix theory and differential geometry on the
Grassmannian. Our hardness reduction is based on entirely different proofs that may also be of independent interest: we show that the stochastic 2-stage minimum spanning tree problem has a
supermodular objective and that su-
- Siam J. Discrete Math , 1999
"... A dissection of a convex d-polytope is a partition of the polytope into d-simplices whose vertices are among the vertices of the polytope. Triangulations are dissections that have the additional
property that the set of all its simplices forms a simplicial complex. The size of a dissection is the nu ..."
Cited by 3 (2 self)
Add to MetaCart
A dissection of a convex d-polytope is a partition of the polytope into d-simplices whose vertices are among the vertices of the polytope. Triangulations are dissections that have the additional
property that the set of all its simplices forms a simplicial complex. The size of a dissection is the number of d-simplices it contains. This paper compares triangulations of maximal size with
dissections of maximal size. We also exhibit lower and upper bounds for the size of dissections of a 3-polytope and analyze extremal size triangulations for specific non-simplicial polytopes: prisms,
antiprisms, Archimedean solids, and combinatorial d-cubes.
, 2005
"... Abstract. We prove that the Random-Edge simplex algorithm requires an expected number of at most 13n / √ d pivot steps on any simple d-polytope with n vertices. This is the first nontrivial
upper bound for general polytopes. We also describe a refined analysis that potentially yields much better bo ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. We prove that the Random-Edge simplex algorithm requires an expected number of at most 13n / √ d pivot steps on any simple d-polytope with n vertices. This is the first nontrivial upper
bound for general polytopes. We also describe a refined analysis that potentially yields much better bounds for specific classes of polytopes. As one application, we show that for combinatorial
d-cubes, the trivial upper bound of 2 d on the performance of Random-Edge can asymptotically be improved by any desired polynomial factor in d. 1.
logarithm question - WyzAnt Answers
Write log_8 (x^2/(y*z^2)) as a sum, difference, or product of logarithms. Simplify the result if possible.
Let me make sure I understand the question correctly: You need to write the following logarithm as a sum, difference, or product of logs...
log[8] (x^2)/(y*z^2)
Before I get to answering your question, let me review the log rules quickly:
1. Product rule: log (a*b) = log (a) + log (b)
2. Division rule: log (a/b) = log (a) - log (b) (ORDER MATTERS!)
3. Exponent rule: log (a^2) = 2*log (a) (This is NOT the same as (log a)^2!!!)
Now that we have those rules, let's apply them to this question. Start with the math operation that is connecting all parts of (x^2)/(y*z^2) together: the division sign.
Apply Rule #2:
log[8] (x^2)/(y*z^2) --> log[8 ](x^2) - log[8] (y*z^2)
Now we have two separate logs to work with. Let's focus on log[8 ](x^2) first.
Apply Rule #3:
log[8] (x^2) --> 2*log[8] (x)
Now for the second part: log[8] (y*z^2).
Apply Rule #1:
log[8] (y*z^2) --> log[8] (y) + log[8] (z^2)
If you notice, this part can be simplified further. Apply Rule #3 to the log containing z^2.
log[8] (y*z^2) --> log[8] (y) + log[8] (z^2) --> log[8] (y) + 2*log[8] (z)
Now that everything has been simplified down as far as we can go, let's update the whole expression:
log[8] (x^2)/(y*z^2) --> log[8] (x^2) - log[8] (y*z^2) --> 2*log[8] (x) - [log[8] (y) + 2*log[8] (z)]
If you're comparing this answer to the answers in the back of a textbook, they may take this one step further and distribute the subtraction sign through to the log[8](y) and the 2log[8](z), in which
case, your answer would look like this:
2*log[8] (x) - log[8] (y) - 2*log[8] (z)
Hope this is helpful!!
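As a quick numerical sanity check (my addition, not part of the tutor's answer), the identity can be confirmed in Python for arbitrary positive test values:

import math

def log8(v):
    return math.log(v, 8)            # logarithm base 8

x, y, z = 3.0, 5.0, 7.0              # any positive values will do
lhs = log8(x**2 / (y * z**2))
rhs = 2*log8(x) - log8(y) - 2*log8(z)
print(lhs, rhs)                      # the two values agree, up to rounding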
Re: 2 Assumptions for one parameter?
Posted: Sep 12, 2012 3:29 AM
On 9/11/12 at 2:32 AM, Andreas Talmon l'Armée wrote:
>What is the right typing to make two assumptions for one parameter?
>Something like this:
either what you have or:
$Assumptions = {a > 0, Element[a, Reals]}
but note, by default Mathematica makes a variable as general as
possible. So, since a > 0 isn't meaningful for complex a, it follows:
$Assumptions = {a > 0}
$Assumptions = {a > 0, Element[a, Reals]}
all achieve exactly the same thing.
>And is there a way to control my assumptions made for the
>parameters I use something like
It is unclear to me what you are trying to do here. Setting the
value of $Assumptions impacts those functions that look at the
value of $Assumptions when you use them but has no effect on the
value of other symbols such as a. That is you can do:
In[6]:= Clear[a]; $Assumptions = {a > 0};
Simplify@Element[Sqrt[a], Reals]
Out[7]= True
then assign a value to a that contradicts your assumptions and
work with it
In[8]:= a = -2;
Element[Sqrt[a], Reals]
Out[9]= False
but this definitely causes problems for functions that look at
the value of $Assumptions since now
In[10]:= Simplify@Element[Sqrt[a], Reals]
Out[10]= True
and generates an warning stating one or more assumptions
evaluated to False
Limit help, radical in the numerator!!! (Answer provided but idk how it comes about)
the problem is
the lim as x approaches inifinity of
√(9x^6 - 1)/(x^3 - 1)
Correct Answer: 3
the 1's are constants being subtracted, not being subtracted from the exponents themselves.
I tried multiplying both by 9x^6 - 1 but it just wasn't a good time and I couldn't see it ever working out.
I tried multiplying by 1/x but that wasn't really a tea party either. I'm really at a loss for what to do, but I figure I'll end up with 3 in the numerator and 1 in the denominator since the final answer is 3.
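One standard route to the answer (my addition, not from the original thread): for large positive x we have x^3 = sqrt(x^6), so dividing the numerator and denominator by x^3 gives

\lim_{x\to\infty}\frac{\sqrt{9x^6-1}}{x^3-1}
= \lim_{x\to\infty}\frac{\sqrt{9-1/x^6}}{1-1/x^3}
= \frac{\sqrt{9}}{1} = 3.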
Introduction to Set Theory - PowerPoint
1. Based on the diagram, what is the total number of
students who did participate in volleyball?
Set Theory
*Make sure you leave a few empty line under each word & definition to
provide examples and/or illustrations
A set is any well defined collection of “objects.”
The elements of a set are the objects in a set.
Subsets consists of elements from the given set.
Empty set/Null set is the set that contains no elements.
Universal set is the set of all possible elements.
Ways of Describing Sets
List the elements
A = {1, 2, 3, 4, 5, 6}
Give a verbal description
“A is the set of all integers from 1 to 6, inclusive.”
Give a mathematical inclusion rule
A = {x ∈ Integers | 1 ≤ x ≤ 6}
Some Special Sets
The Null Set or Empty Set. This is a set with no elements, often symbolized by ∅ or {}.
The Universal Set. This is the set of all elements currently under consideration, and is often symbolized by U.
Universal Sets
The universal set is the set of all things
pertinent to a given discussion
and is designated by the symbol U
U = {all students at Brandeis}
Some Subsets:
A = {all Computer Technology students}
B = {freshmen students}
C = {sophomore students}
Find the Subsets
What are all the subsets of {3, 4, 5}
{} or Ø
{3}, {4}, {5}
{3,4}, {3,5}, {4,5}
Try it with a partner
Page 197 (20, 21)
Venn Diagrams
Venn diagrams show relationships between
sets and their elements
Sets A & B
Universal Set
Venn Diagram Example
Set Definition
U= {1, 2, 3, 4, 5, 6, 7, 8}
Set Complement
~A or A′
“A complement,” or “not A” is the set of all
elements not in A.
*What the others have that you don’t*
Types of color (Venn diagram in the original slide: purple, red, white, blue, and green, with some of them inside circle A)
Universal set U = {purple, red, white, blue, green}
What is the complement of set A?
More Practice:
U = {1, 2, 3, 4, 5} is the universal set and
A = {2, 3}. What is A′?
U = {a, b} is the universal set and
T = {a}. What is T′?
U = {+, -, x, ÷, =} is the universal set and
A = {÷, =}. What is A′?
Try it with a friend
Page 197 (26, 27)
Page 198 (39)
Venn Diagrams
Here is another one
(Venn diagram of sets A and B)
What is A′?
A moment to breathe
The moment is over
Combining Sets – Set Union
A ∪ B
“A union B” is the set of all elements that
are in A, or B, or both.
This is similar to the logical “or” operator.
Combining Sets – Set Intersection
A ∩ B
“A intersect B” is the set of all elements that are in both A and B.
This is similar to the logical “and” operator.
Venn Diagrams
Venn Diagrams use topological areas to
stand for sets. I’ve done this one for you.
(Venn diagram of sets A and B)
Venn Diagrams
Try this one!
(Venn diagram) A = {1, 2, 3}, B = {3, 4, 5, 6}
• A ∩ B = {3}
• A ∪ B = {1, 2, 3, 4, 5, 6}
Try it on your own!
Let P = {b, d, f, g, h}, M = {a, b, c, d, e, f, g, h, i, j},
N = {c, k}
P M
Try it on your own!
Page 218 (10, 12, 14, 16, 18, 20)
Given set D and F, find D x F
D = {1, 3, 5}, F = {0, 2}
Given set R and S, find R x S
R = {Bob, Rose, Carlos}, S = {Sheila}
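These operations are easy to experiment with using Python's built-in set type (an illustrative aside, not part of the original slides):

A = {1, 2, 3}
B = {3, 4, 5, 6}
U = {1, 2, 3, 4, 5, 6, 7, 8}     # a universal set, for the complement

print(A | B)     # union: {1, 2, 3, 4, 5, 6}
print(A & B)     # intersection: {3}
print(U - A)     # complement of A relative to U: {4, 5, 6, 7, 8}

# Cartesian product D x F, as in the exercise above
D, F = {1, 3, 5}, {0, 2}
print({(d, f) for d in D for f in F})   # all ordered pairs (d, f)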
Pair in-class-mini-project
Please pick a student with whom you KNOW you CAN
work and be PRODUCTIVE
Develop/Create a book explaining all four Vocabulary words
from the SET THEORY topic (Complement, Union, Intersection,
Use a self-created example for each concept.
Your audience - a group of elementary students who learn better
when the teacher utilizes images/drawings.
Be creative!!! Make sure your work makes sense, you
might have to present it!
Home-Learning Assignment #2:
Page 198 (46)
Page 199 (53)
Page 219 (22)
Page 220 (40, 46)
Finding What Numbers Are In One Column But Not The Other
I have a large list of numbers in two columns. I need to know what numbers are in column A but don't appear in column B. For example if the numbers are 1 2 and 3 in column A but column B only has 1
and 2 I need to know that 3 is missing.
the only thing I can think of is doing =IF(A1=B1,"TRUE","FALSE") however typing this a thousand times would not be practical.
Is there a macro that can check column A against column B and tell me which ones are missing?
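One common formula-based approach (my suggestion, not a reply from the original thread) avoids a macro entirely: in C1 enter
=IF(COUNTIF(B:B,A1)=0,"Missing","")
and fill it down. COUNTIF returns how many times the value in A1 appears anywhere in column B, so a count of zero flags the numbers that are in column A but not in column B.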
Finding The Middle Cell In A Column Of Numbers
I am looking for a formula to first find the middle number in a column of numbers, e.g. 1,2,3,4,5: 3 is the middle (similar to median); that's where the calculations start...
it then assigns values of minus to the numbers above the middle and plus values to the numbers below the middle
1 -50
2 -50
4 +50
5 +50
now when it comes to even numbers eg 1,2,3,4,5,6 if i use median it divide 3 & 4 and comes up with 3.5 ........ i want it to recognize 3 and 4 as the middle numbers
and assign plus and minuses above and below the middle numbers
1 -50
2 -50
3 -25
4 +25
5 +50
6 + 50
Finding Common (repeated) Numbers In Columns Of Numbers
I work for a charity and I have to cancel the donations of people whose credit card donations have been declined in three consecutive months.
If in Column A I have a list of donor IDs whose credit cards were declined in Jan 2008, in Column B I have a list of donor IDs whose credit cards were declined in Feb 2008 and in Column C I have a
list of donor IDs whose credit cards were declined in Mar 2008, is there a way of showing in a fourth column which donor IDs were common (repeated) in Columns A, B and C? I would have a title for
each column in A1, B1 and C1, and also the column where the repeated donor IDs would be displayed.
Finding Non Zero Smallest Numbers
Find smallest numbers in the range ignoring zeros.
I want it returned as value "1", all others - as value "0".
Example: range A1:J1 contains - 5,0,7,3,6,5,8,0,9,3
desirable result in the range A2:J2 - 0,0,0,1,0,0,0,0,0,1
Finding Numbers From String
I am trying to find numbers from a string. I have for example the words "EUR Fwd 9x12" and "Eur Fwd 11x15", and I want to write a function that reads the first number from a string if there is only one number before "x", or two numbers if there are 2 numbers. So I have tried to build the following function:
Function NumbersInString(Word As String) As Integer
Dim i As Integer
Dim FirstNumberInString As Integer, SecondNumberInString As Integer
For i = 1 To Len(Word)
If IsNumeric(Mid(Word, i, 1)) Then
FirstNumberInString = Mid(Word, i, 1)
If IsNumeric(Mid(Word, i + 1, 1)) = False Then
Exit Function
SecondNumberInString = Mid(Word, i + 1, 1)
End If
End If
NumbersInString = FirstNumberInString & SecondNumberInString
End Function
Finding A Value Between Two Given Numbers In One Cell
I have another post here on this forum, but I'm afraid the formula is getting so complex that nobody is able to fully understand what I want. Instead I want to find a value between two numbers and add
it to some IF sentences. It will do what I want, even if it's not that elegant.
I've looked at the SUMIF function but it did not do exactly what I wanted. It finds a number or adds numbers only if they are in range it seems.
What I want is the following:
Return sum between 500 and 1000 in one cell.
Finding An Average Couple Numbers
I am trying to get an average of a couple numbers, but I have to enter both numbers in one cell.
I have to enter the numbers in a cell as a range (ex. "1000-3000"). I need to convey it as a range in the spreadsheet I am doing, but in a separate cell I need the average of the extremes (1000 &
3000). Is there a formula or anything that would let me get the average of those two numbers(2000) directly from that one cell? If needed, I could make the cell "1000,3000" instead. I just don't want
to make two separate cells, one saying 1000 and the other saying 3000.
Finding The Sum Within A Range Of Numbers
So I have multiple columns of numerical data,
I have to find the sum of all numbers between 10 and 40 within a column, NOT cell 10 to 40 but the actual number 10 and 40.
How do I do that?
Is it SUMIF? I cannt seem to grasp this.
Finding Which Numbers Make A Total
I do a lot of work in excel to do with accounts and this often needs checking against sage. When the invoices/petty cash sheets are put into sage the total amount is put in, but in my spreadsheets I
need to split the reciepts. So I was wondering if there was a formula/VBA code, that if I only knew the total of the invoice would find which cells added up to this total?
Finding Sums Of Different Numbers Of Cells
I am building an inventory simulation and have run into a problem. What i want is, when i change a number in cell H4, i want excel to find the sum of C25 and the cells "H4" up. If H4 is 5 then i need
the sum of C20:C25...if H4 is 10 i need C15:C25. Does anyone have any thoughts on how to do this? I have attached a sample sheet to make it more clear.
Finding Country Based On Phone Numbers
I have a list of mobile phone numbers from various countries. However, I do not know which country each entry is from. Ideally I would like to have a macro that looks at each number, compares to a
global list of PSTN structure to determine which part of the phone number is the country code (generally the first 1-3 digits), and then put the country in a separate column.
I am certain all numbers are formatted correctly, so it is only a matter of finding out which part is the country code and putting a value for the country.
Finding The Sum For Values In One Column That Are Connected To A Value In The First Column
I have two columns. One column has UPCs - some of which are duplicates. The second column just has number values. I'm trying to add the sum of all of the numbers in column two which are attached to
their respective UPC. For example,
COL A///// Col B
11111111111///// 10
00000000000///// 15
11111111111///// 10
11111111111///// 4
00000000000///// 2
So, I need a third and fourth column to give me the total value for a single SKU(col A) of all the values in col B. In this example the Third column would contain the SKU, and the fourth column would
contain the sum of all values in column B that are associated with the single SKU in column three. The third and fourth column would look like this:
COL C///// COL D
11111111111///// 24
00000000000///// 17
Finding Number Closest To Zero (including Negative Numbers)
a spreadsheet in Excel. I have names with scores. Then I have the winning score. I need a formula to find the score closest to zero and to display the name of the winner.
Ex: Names A1:A4 and Scores B1:B4. Winning Score in B6 and list name in B7.
Ana 16
Bob 2
Charles 8
David 11
Winning Score 10
Answer should be 11 which is David, since David is only -1 away compared to the others.
Gathering The Sum Of Negative Numbers & Positive Numbers In A Column
I have a column of variances; these contain both negative numbers and positive numbers. I want to gather a sum of all the negative numbers and positive numbers separately. Basically saying all the positive overages = this amount and all the negative shortages = this amount. You can see the attached sample.
Finding Last Column
Using an array to go through a series of sheets and do stuff (Thanks Gerald and Von Pookie BTW). I have used code which finds the last row (varies from sheet to sheet), but not the last column
(which also varies sheet to sheet).
Finding Last row
LastRow = Range("A" & Rows.Count).End(xlUp).Row
However this code doesn't seem to work for last column...sample...
LastCol = Range("A" & Columns.Count).End(xlLeft).Column
Is there a trick I'm not seeing? Does this only work using the 'Cell' function in VBA? If so, how would the line of code look? I'd really prefer finding the column letter as opposed to using the
'Cell' method if possible.
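For what it's worth (my note, not a reply from the thread): the usual pattern mirrors the row version, but xlLeft is not a valid direction constant; it should be xlToLeft, searching backwards from the last column of a chosen row:
LastCol = ActiveSheet.Cells(1, Columns.Count).End(xlToLeft).Column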
Finding Last Used Column
I am currently using the following code to find the last cell in a column that contains data.
lLastRow = ActiveSheet.Cells(Rows.Count, "a").End(xlUp).Row
Can anyone give me the version of this that would find the last cell in a row that contains data. The code would be used in a loop so I would need the row reference to be a variable.
Finding Names In A Third Column
I have three columns of names. I need to loop through the first two checking to see if any name is in the third column and if so, place a 'found' in a fourth column.
Finding Smallest Value In A Column
In column A, I have a list of names.
In column B, I have a list of values.
I want column C to show me which name has the smallest value from column B.
In other words, if there are 5 names, I want ONE name to give me a value of 10 (for the smallest value), and the rest of the names I want to show zero.
I also have in column D another list of values.
In column E, I want to show first, second and third place amongst the list of values from column D. The rest of the names I want to show zero.
Finding Whether Value In One Column Exists Or Not In Another
I have serial numbers say 1 to 100 in column A. In column B i have values which are text and numbers combined. In column C I have similar values as in B.
Now, if any value in Column B appears in Column C, then in Column D It should tell me Yes or No or 0 or 1.
I basically want to know whether any value from Column B exists or not in column C. I tried Countif and Vlookup but didn't work.
Finding Next Empty Column
I need to append this macro to find the next empty column to place the data in. The original VBA works fine, but I need to go into the editor and replace the offset number every time I add a new row.
Here's the original coding:
Finding Numeric Value Of Column
how I can get the userform to close when another worksheet is selected. what I really need is for the userform to just show on one worksheet (not close) Is that possible? If not I want to be able to
re-size the userform when another worksheet is selected (like getting it to minimise)
Finding The Min. In Column B Between Two Occurrences
Is there a way to find the minimum value in column B that corresponds to the two occurrences of "Yes" in column A.
Column A Column B
NO 1
NO 4
NO 7
YES 6
NO 3
NO 9
Yes 2
[Note: the numbers are in column B.]
I basically want to return the # 3 from Column B.
Finding A Cell Based On Column
I'm looping through and finding a cell based on Column A, and I .resize(,5).select and from that selection I want to create a range called "LCrng"
Finding Same Strings In The Same Column But Different Row
Lets say I have a column of data and there are many sub parts.
How do I modify the range. find to locate the 2nd "Apples" , 29th "Banana" and so on or is there another method ?
Finding The Minimum Value In A Column That Comes After The Maximum Value
Say I have 2 columns that in basic form look like this:
Column A Column B
Jan 1
Feb 0
Mar 7
Apr 4
May 15
Jun 2
Jul 5
Aug 4
First I want to look up the max value in this column. This is easy =max(b1:b8)
Then I want to know the minimum value that occurs after the maximum value. Thus the answer would be 2.
Finding Last Cell With Data In A Column
I have the following code. Is there any way to select a range once the last cell with data is found. I would like to be able to select whatever cell in column A is selected with the code below
through E2.
Finding Data From A Four Column Table
I have a table (Sheet 1) with four columns data, A,B,C and D. There are about 60,000 entries in them. In Sheet 2, I wish to enter a value in A1 which will be from A OR C columns of Sheet1 and get its
corresponding value from B or D (Sheet1) in B1 (Sheet2) with the help of a formula. i.e. IF(A1, Sheet1!A60000:C60000, then B1 = B or D of Sheet1).
Finding The Last Filled Cell In A Column
I have written several pieces of VBA code which produce a sequence of tables on a single worksheet (with the rather original title "Tables"). The code often adds tables to the end of the current set
of tables, and to do this, I need to know where the next available space is.
I have a solution which I have been using for ages now, which checks each cell in an appropriate column until a sequence of 3 blank cells has been found as I can guarentee that the tables are at most
2 cells apart. It then sets i=i-3 to give me the location of the first empty cell.
Blankcount = 0
i = 3
While Blankcount < 3
If Cells(i, 3) = "" Then
Blankcount = Blankcount + 1
Blankcount = 0
End If
i = i + 1
i = i - 3
Extracting Numbers :: Pull Numbers From Another Column
I'm trying to pull some numbers from another column. I want to pull the numbers that have an X separating them like 7X125, 48X192, and 27X90.
FA, VF-2000-3-7X125-18-A, AFS
FA, VF-2350-48X192-6-RGB, FC
FA, VF-2020-27X90-18-A,RFI, FEX, ACP, 2IT
Finding The Last Populated Cell In A Column Array
I have a column array with various cells in that array populated. In every subsequent cell in that array I want a formula that finds the previously populated cell and that value added a cell that is
in the same row but two columns to the left.
Finding The Maximum Value In A Column Based On A Condition
This should be simple to do but I can't figure it out. I have a database that lists operating room numbers in one column and the length of the surgeries performed in those rooms in another column.
I need a formula that will give me the longest OR time for a given room. For example the room numbers are in column A and the OR times are in Column B. I've tried something like
Finding A Missing Date And Listing Name From Next Column
I am trying to show a list of all rows that have a missing date in column "B" and then show the corresponding name in the next column "C". I can find the first one on the sheet and how many have missing
dates using:
Finding A String In A Column, Displaying YES On The Same Row
I am trying to search for a string of numbers (column 2) in an array, and have "YES" be written on the same line in column 3 if the string is found in the names ANYWHERE in column 1. Please see the
desired results on the picture in column 3.
I have tried many things, including SEARCH function which can only work with 1 cell not many, COUNTIF and more advanced functions, but I think have not succeeded because of my lack of knowledge in
Finding The Nth Value If Another Column Meets A Criteria
here is a sample of the data
I know if I use dmax for only where first column equals 13 I get 460 but how do I get the second highest value for only those rows that have 13 in the first column (expect the answer to be 268). Then
I want to do the same for 3rd, 4th highest etc.
I know large does it for one column and not only when the first column matches a designated criteria.
Finding Max Column Width Without A Loop?
I have a range of cells which I wish to print to a .txt document. However, I would like these cells to stay aligned, one on top of the other. I am currently doing that by finding the cell with the
widest piece of data for each column, and storing the width that each column needs to be to an array of integers. Then, when printing out the range, I simply add spaces to each piece of data its
width is the same as the max column width. I am finding the max column width using the following loop:
'find the width for each column and store in col_width()
For cur_col = 1 To total_cols
'skip the tag switches column
If cur_col <> 3 Then
max_data_width = 0
For cur_row = 1 To total_rows
cell_data_width = Len(Find_String_Diagnostics(diagnostic_range.Cells(cur_row, cur_col)))
If cell_data_width > max_data_width Then.................
Finding Last Cell In A Column, Select, Copy/paste
I am making a worksheet that I intend to use to track my money. When I first open the worksheet, it opens on a tab where I can click a button to report a type of transaction. For example, if I make a
withdrawl from the bank for $50, I click the button, it takes me to the sheet that tracks my bank-related stuff, selects a cell and opens up a form, at which point I type in what the transaction
consisted of. However, the sheet also tracks what is in my wallet, so I'd like to finish reporting the bank transaction in the form, and have a button to click that reports the wallet part
So, essentially what I need to do is select several non-contiguous cells that are in the last row of the bank sheet, copy them, switch to the wallet-tracking sheet, and paste them in a row that is
one past the last row of that sheet. The paste should keep the cells next to each other, even if they were non-contiguous when they were being copied.
Finding Minimum Difference Of All Elements In One Row Or Column?
I need to find the minimum difference between any two elements in a row or a column. While it's easy to do for a 3-4 elements by doing subtractions for all elements in the array, doing it for more
elements leads to a very long formula.
For example, I need to find the difference between any two elements between C5 and C9: ....
Finding All Dates In A Column That Are In The Range Today To A Week From Now
Im trying to search a column (A), that has a list of dates (not in order), for the row in which the dates are equal to or greater than today and less than or equal to a week from today. I then want
the information contained in the rows with these dates to be transferred to another sheet and ordered by date.
Finding Within Part Of Unknown Sized Column
im trying to find and delete records within a column if they occur twice. this works great right now but I want it to exclude the top 8 rows... i think it might have something to do with the LookAt:=
xlPart constraint ...
Finding Duplicate Entries In Column Considering Indirect Referencing
I wonder if there is any easy way of findinig (numerical) duplicate entries in a column? Some cells are empty, in case this might cause a problem. I do not wish to delete duplicate rows
automatically, just to find them. Why not just sort it? Because indirect referenceing is used where each row corresponds to a separate spreadsheet in the workbook. What I need is to find the
duplicate so that I manually can erase one of the spreadsheets for the particular case and adjusting a reference list.
Finding Data Based On Row & Column Criteria
I have a main source data which consists of row & column information. What I want to do is search the data from the source data into my result data as per the attachment file. Example: I want the information of Jan & banana from the main source file to appear in the XXXX
Result data(criteria base on Month & type)
Show Sum Of Numbers In One Column IF Another Column Has A Specific Category
I have a worksheet which basically tracks time. the time is reported in Column C. In that row in Column E, there is a validation list with about 6 different categories in it. On the side of this
"table" I have a list of all the categories and I want a value to be next to it that reports the sum of time (C) for each category (E).
So for the "Routing" category, I would want the value to be the sum of just data on the timesheet that have "routing" in Column E.
Sum Numbers In Column If Date In Other Column Matches
I have a spreadsheet which will be completed by numerous users, with a worksheet reserved for each area. The spreadsheet is to record the number of days lost to training etc on a weekly basis.
Each worksheet has 3 columns – column A DESCRIPTION, column B WEEK COMMENCING DATE and column C DAYS LOST.
The table will be completed by the managers as the info becomes available to them.
I will be collating the data on another worksheet and need a formula that will look in column B for all instances of 01/10/07 and then sum the corresponding cells in column C, then do the same for 08
/10/07 and so on.
I have attached an example of a page.
I thought it may be VLookup or Sumif, but I don’t know how to go about it.
A model-based approach to football strategy.
Analysis Of Coaching Decisions From Super Bowl XXXVIII
Carolina Goes For Two: Part 1
Super Bowl XXXVIII featured one of the more controversial coaching decisions from any Super Bowl: Panthers' coach John Fox's decision to go for two with 12:39 remaining in the game, trailing 21-16.
Though opinion is divided, the critics are more vehement, often basing their argument on the actual sequence of touchdowns and field goals that followed Carolina's two-point try.
The fallacy of this argument is that the actual sequence of scores was just one of many possible scenarios -- and an improbable one at that. To analyze this decision and others more systematically,
we will use the footballcommentary.com Dynamic Programming Model.
Following the attempt at an extra point or points, Carolina will be kicking off with 12:39 remaining. According to the Model, their probability of winning the game will be either 0.2965, 0.2304, or
0.2113 depending on whether they are behind by 3, 4, or 5 points. Assuming that a kicked extra point succeeds with probability 0.985, Carolina's probability of winning the game if they go for one is 0.985 × 0.2304 + 0.015 × 0.2113 ≈ 0.2301.
On the other hand, if two-point conversions succeed with probability 0.4, then Carolina's probability of winning the game if they go for two is 0.4 × 0.2965 + 0.6 × 0.2113 ≈ 0.2454.
So, according to the Model, Fox was right: going for two is the better choice. Indeed, since 0.2113 + 0.22 × (0.2965 − 0.2113) ≈ 0.2301, Carolina's probability of success on the two-point conversion needs to be only 0.22 to justify going for two.
Notice that whether or not Fox goes for two affects Carolina's probability of winning the game by about 0.015. But once the decision is made to go for two, the success or failure of the try affects
Carolina's probability of winning the game by more than 0.08. Overwhelmingly, it's the execution on the field that determines the outcome of the game.
We should mention that a 5-point deficit -- or lead, for that matter -- is particularly favorable for two-point conversions. If you lead or trail by 5 following a TD, the Model says you should go for
two as early as four minutes into the second quarter -- although the benefit of doing so at that point is extremely small. (The same result emerges in Harold Sackrowitz's model.)
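The comparisons above are simple expected-value calculations over the Model's state win probabilities. A short Python sketch (the three state values come from the Model; only the glue code below is ours) reproduces the Part 1 numbers:

def try_value(p_success, win_if_success, win_if_fail):
    """Expected win probability of an extra-point attempt."""
    return p_success * win_if_success + (1 - p_success) * win_if_fail

def breakeven(win_kick_state, win_two_state, win_fail_state, p_kick=0.985):
    """Two-point success probability at which going for two ties kicking."""
    kick = try_value(p_kick, win_kick_state, win_fail_state)
    return (kick - win_fail_state) / (win_two_state - win_fail_state)

# State win probabilities from the Model: down 3, 4, or 5 with 12:39 left.
down3, down4, down5 = 0.2965, 0.2304, 0.2113
print(try_value(0.985, down4, down5))   # go for one: ~0.2301
print(try_value(0.4, down3, down5))     # go for two: ~0.2454
print(breakeven(down4, down3, down5))   # break-even conversion rate: ~0.22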
Carolina Goes For Two: Part 2
In a much less controversial decision, Carolina also went for two with 6:53 remaining in the game, after scoring a TD to go ahead 22-21. According to the Model, going into the subsequent kickoff,
Carolina's probability of winning the game will be 0.6456, 0.6553, or 0.7057 according to whether they lead by 1, 2, or 3 points. Calculations analogous to those performed above show that Carolina's
probability of winning the game is 0.6696 if they go for two, and 0.6552 if they go for one. In fact, in this case the probability of a successful two-point conversion needs to be only 0.16 to
justify going for two. So, once again, it appears that Fox made the correct decision.
New England Goes For It On 4th and 1
With 9:04 remaining in the first half of a scoreless game, New England had 4th and 1 at the Carolina 38 yard line. Bill Belichick decided to go for it.
Under these circumstances, according to the Model, if New England goes for it and makes the first down, their probability of winning the game is 0.5872, whereas if they fail, their probability of
winning is 0.4833. If instead they punt, and we assume Carolina's expected field position is their own 12 yard line, then New England's probability of winning the game is 0.5238. (The reason these
probabilities are skewed in New England's favor, notwithstanding the tie score, is that Carolina will be kicking off to start the second half.) Since 0.4833 + 0.39 × (0.5872 − 0.4833) ≈ 0.5238, it follows that New England's likelihood of gaining the first down must only exceed 39% to make going for it preferable to punting. According to data assembled by Football Outsiders, during the 2003 season teams converted successfully on 68% of attempts on 3rd or 4th and 1. So, it appears that Belichick's decision was correct. In fact, assuming a 68% success rate for the conversion, New England's probability of winning the game if they go for it is 0.68 × 0.5872 + 0.32 × 0.4833 ≈ 0.5540.
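The same break-even algebra applies to any go-or-punt decision: the required conversion probability is (punt value − failure value) divided by (success value − failure value). A small sketch of ours:

def go_breakeven(win_make, win_fail, win_punt):
    """Conversion probability at which going for it ties punting."""
    return (win_punt - win_fail) / (win_make - win_fail)

print(go_breakeven(0.5872, 0.4833, 0.5238))   # ~0.39
print(0.68 * 0.5872 + 0.32 * 0.4833)          # ~0.5540 with a 68% conversion rate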
New England Punts On 4th and 1
Leading 14-10 with 6:05 remaining in the third quarter, New England faced 4th and 1 at their own 31 yard line. This time, Belichick chose to punt.
According to the Model, if New England goes for it, their probability of winning the game will be 0.7159 or 0.5725, depending on whether they succeed or fail. If they punt, their probability of
winning the game is 0.6566 (assuming Carolina's expected field position is their own 30). Calculations analogous to those performed above then show that New England needs at least a 0.59 probability
of making the first down to justify going for it. If the true likelihood of picking up the first down is about 68%, as the data assembled by Football Outsiders suggest, then going for it gives New
England a 0.6700 probability of winning the game. In this case, Belichick should have gone for it. Of course, if Belichick feels that his chances of picking up the yard against Carolina are
materially worse than against a league-average defense, punting is justified.
On two other occasions during the game, New England faced 4th and 1 and chose to punt. In both cases, according to the Model, by going for it they could have increased their probability of winning
the game by about 0.015. But once again, Belichick might not have liked his chances against Carolina.
Brady Throws An Interception
With 7:48 remaining in the game, New England led 21-16, and had 3rd and goal at the Carolina 9 yard line. Tom Brady then threw an interception.
This isn't really a decision, of course; no quarterback chooses to get picked off. Nevertheless, countless Patriots fans must have been thinking that the one thing you have to avoid in that situation
is an interception. At least in principle, one can use the Model for guidance as to how much risk it would be worth taking in an effort to get a TD rather than a FG.
According to the Model, a FG would have given the Patriots a 0.9178 probability of winning the game. A TD and extra point makes that probability 0.9770. On the other hand, if a pass is intercepted in
the end zone for a touchback, New England's probability of winning the game is 0.7906.
To illustrate how these numbers might be used, suppose Brady has no open receiver. He can throw the ball away, settling for a chip-shot FG. Alternatively, he can force a pass. Suppose that if he does
so, there is a 0.3 probability of a TD, a 0.5 probability of an incomplete pass (leading to a FG), and a 0.2 probability of an interception.
If Brady throws it away, New England's probability of winning is 0.9178. On the other hand, if he forces the pass, the probability of winning is 0.3 × 0.9770 + 0.5 × 0.9178 + 0.2 × 0.7906 ≈ 0.9101.
So, with these assumptions, it's better to throw the ball away, although of course the actual probabilities Brady faced were presumably different from what we assumed for this example.
Copyright © 2004 by William S. Krasker
Emmy Noether

Amalie Emmy Noether
Born: 23 March 1882, Erlangen, Bavaria, Germany
Died: 14 April 1935 (aged 53), Bryn Mawr, Pennsylvania, USA
Citizenship: Germany (1882–1933); United States (1933–35)
Fields: Mathematics and physics
Institutions: University of Göttingen; Bryn Mawr College
Alma mater: University of Erlangen
Doctoral advisor: Paul Gordan
Doctoral students: Grete Hermann, Max Deuring, Hans Fitting, Zeng Jiongzhi
Known for: Abstract algebra; theoretical physics
Amalie Emmy Noether, IPA: [ˈnøːtɐ], (23 March 1882 – 14 April 1935) was a German mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics. Described by
Albert Einstein and others as the most important woman in the history of mathematics, she revolutionized the theories of rings, fields, and algebras. In physics, Noether's theorem explains the
fundamental connection between symmetry and conservation laws.
She was born to a Jewish family in the Bavarian town of Erlangen; her father was the mathematician Max Noether. Emmy originally planned to teach French and English after passing the required
examinations, but instead studied mathematics at the University of Erlangen where her father lectured. After completing her dissertation in 1907 under the supervision of Paul Gordan she worked at the
Mathematical Institute of Erlangen without pay for seven years. In 1915 she was invited by David Hilbert and Felix Klein to join the mathematics department at the University of Göttingen, a
world-renowned centre of mathematical research. The philosophical faculty objected, however, and she spent four years lecturing under Hilbert's name. Her habilitation was approved in 1919, allowing
her to obtain the rank of privatdozent.
Noether remained a leading member of the Göttingen mathematics department until 1933; her students were sometimes called the "Noether boys". In 1924 Dutch mathematician B. L. van der Waerden joined
her circle and soon became the leading expositor of Noether's ideas: her work was the foundation for the second volume of his influential 1931 textbook, Moderne Algebra. By the time of her plenary
address at the 1932 International Congress of Mathematicians in Zürich, her algebraic acumen was recognized around the world. The following year Germany's Nazi government dismissed Jews from
university positions, and Noether moved to the United States to take up a position at Bryn Mawr College in Pennsylvania. In 1935 she underwent surgery for an ovarian cyst and, despite signs of a
recovery, died four days later at the age of 53.
Noether's mathematical work has been divided into three "epochs". In the first (1908–1919), she made significant contributions to the theories of algebraic invariants and number fields. Her work on
differential invariants in the calculus of variations, Noether's theorem, has been called "one of the most important mathematical theorems ever proved in guiding the development of modern physics".
In the second epoch, (1920–1926), she began work that "changed the face of [abstract] algebra". In her classic paper Idealtheorie in Ringbereichen (Theory of Ideals in Ring Domains, 1921) Noether
developed the theory of ideals in commutative rings into a powerful tool with wide-ranging applications. She made elegant use of the ascending chain condition, and objects satisfying it are named
Noetherian in her honour. In the third epoch, (1927–1935), she published major works on noncommutative algebras and hypercomplex numbers and united the representation theory of groups with the theory
of modules and ideals. In addition to her own publications, Noether was generous with her ideas and is credited with several lines of research published by other mathematicians, even in fields far
removed from her main work, such as algebraic topology.
Emmy's father, Max Noether, was descended from a family of wholesale traders in Germany. He had been paralyzed by poliomyelitis at the age of fourteen. He regained mobility, but one leg remained
affected. Largely self-taught, he was awarded a doctorate from the University of Heidelberg in 1868. After teaching there for seven years, he took a position in the Bavarian city of Erlangen, where
he met and married Ida Amalia Kaufmann, the daughter of a prosperous merchant. Max Noether's mathematical contributions were to algebraic geometry mainly, following in the footsteps of Alfred
Clebsch. His best known results are the Brill–Noether theorem and the residue, or AF+BG, theorem; several other theorems are associated with him, including Max Noether's theorem.
Emmy Noether was born on 23 March 1882, the first of four children. Her first name was Amalie, after her mother and paternal grandmother, but she began using her middle name at a young age. As a
girl, she was well-liked. She did not stand out academically although she was known for being clever and friendly. Emmy was near-sighted and talked with a minor lisp during childhood. A family friend
recounted a story years later about young Emmy quickly solving a brain teaser at a children's party, showing logical acumen at that early age. Emmy was taught to cook and clean—as were most girls of
the time—and she took piano lessons. She pursued none of these activities with passion, although she loved to dance.
Of her three brothers, only Fritz Noether, born in 1884, is remembered for his academic accomplishments. After studying in Munich he made a reputation for himself in applied mathematics. Her eldest
brother, Alfred, was born in 1883, was awarded a doctorate in chemistry from Erlangen in 1909, but died nine years later. The youngest, Gustav Robert, was born in 1889. Very little is known about his
life; he suffered from chronic illness and died in 1928.
University of Erlangen
Emmy Noether showed early proficiency in French and English. In the spring of 1900 she took the examination for teachers of these languages and received an overall score of sehr gut (very good). Her
performance qualified her to teach languages at schools reserved for girls, but she chose instead to continue her studies at the University of Erlangen.
This was an unconventional decision; two years earlier, the Academic Senate of the university had declared that allowing coeducation would "overthrow all academic order". One of only two women
students in a university of 986, Noether was forced to audit classes and required the permission of individual professors whose lectures she wished to attend. Despite the obstacles, on 14 July 1903
she passed the graduation exam at a Realgymnasium in Nuremberg.
During the 1903–04 winter semester she studied at the University of Göttingen, attending lectures given by astronomer Karl Schwarzschild and mathematicians Hermann Minkowski, Otto Blumenthal, Felix
Klein, and David Hilbert. Soon thereafter, restrictions on women's rights in that university were rescinded.
Noether returned to Erlangen. She officially reentered the university on 24 October 1904, and declared her intention to focus solely on mathematics. Under the supervision of Paul Gordan she wrote her
dissertation, Über die Bildung des Formensystems der ternären biquadratischen Form (On Complete Systems of Invariants for Ternary Biquadratic Forms, 1907). Although it had been well received, Noether
later described her thesis as "crap".
For the next seven years (1908–1915) she taught at the University of Erlangen's Mathematical Institute without pay, occasionally substituting for her father when he was too ill to lecture. In 1910
and 1911 she published an extension of her thesis work from three variables to n variables.
Gordan retired in the spring of 1910, but continued to teach occasionally with his successor, Erhard Schmidt, who left shortly afterward for a position in Breslau. Gordan retired from teaching
altogether in 1911 with the arrival of his second successor, Ernst Fischer. Gordan died in December 1912.
According to Hermann Weyl, Fischer was an important influence on Noether, in particular by introducing her to the work of David Hilbert. From 1913 to 1916 Noether published several papers extending
and applying Hilbert's methods to mathematical objects such as fields of rational functions and the invariants of finite groups. This phase marks the beginning of her engagement with abstract algebra
, the field of mathematics to which she would make groundbreaking contributions.
Noether and Fischer shared lively enjoyment of mathematics and would often discuss lectures long after they were over; Noether is known to have sent postcards to Fischer continuing her train of
mathematical thoughts.
University of Göttingen
In the spring of 1915, Noether was invited to return to the University of Göttingen by David Hilbert and Felix Klein. Their effort to recruit her, however, was blocked by the philologists and
historians among the philosophical faculty: women, they insisted, should not become privatdozent. One faculty member protested: "What will our soldiers think when they return to the university and
find that they are required to learn at the feet of a woman?" Hilbert responded with indignation, stating, "I do not see that the sex of the candidate is an argument against her admission as
privatdozent. After all, we are a university, not a bath house."
Noether left for Göttingen in late April; two weeks later her mother died suddenly in Erlangen. She had previously received medical care for an eye condition, but its nature and impact on her death
is unknown. At about the same time Noether's father retired and her brother joined the German Army to serve in World War I. She returned to Erlangen for several weeks, mostly to care for her aging
During her first years teaching at Göttingen she did not have an official position and was not paid; her family paid for her room and board and supported her academic work. Her lectures often were
advertised under Hilbert's name, and Noether would provide "assistance".
Soon after arriving at Göttingen, however, she demonstrated her capabilities by proving the theorem now known as Noether's theorem, which shows that a conservation law is associated with any
differentiable symmetry of a physical system. American physicists Leon M. Lederman and Christopher T. Hill argue in their book Symmetry and the Beautiful Universe that Noether's theorem is "certainly
one of the most important mathematical theorems ever proved in guiding the development of modern physics, possibly on a par with the Pythagorean theorem".
When World War I ended, the German Revolution of 1918–19 brought a significant change in social attitudes, including more rights for women. In 1919 the University of Göttingen allowed Noether to
proceed with her habilitation (eligibility for tenure). Her oral examination was held in late May, and she successfully delivered her habilitation lecture in June.
Three years later she received a letter from the Prussian Minister for Science, Art, and Public Education, in which he conferred on her the title of nicht beamteter ausserordentlicher Professor (an
untenured professor with limited internal administrative rights and functions). This was an unpaid "extraordinary" professorship, not the higher "ordinary" professorship, which was a civil-service
position. Although it recognized the importance of her work, the position still provided no salary. Noether was not paid for her lectures until she was appointed to the special position of
Lehrauftrag für Algebra a year later.
Seminal work in abstract algebra
Although Noether's theorem had a profound effect upon physics, among mathematicians she is best remembered for her seminal contributions to abstract algebra. As Nathan Jacobson says in his
Introduction to Noether's Collected Papers,
The development of abstract algebra, which is one of the most distinctive innovations of twentieth century mathematics, is largely due to her – in published papers, in lectures, and in personal
influence on her contemporaries.
Noether's groundbreaking work in algebra began in 1920. In collaboration with W. Schmeidler, she then published a paper about the theory of ideals in which they defined left and right ideals in a
ring. The following year she published a landmark paper called Idealtheorie in Ringbereichen, analyzing ascending chain conditions with regard to ideals. The noted algebraist Irving Kaplansky has called this work "revolutionary", and the publication gave rise to the term "Noetherian ring" and to several other mathematical objects being dubbed Noetherian.
In 1924, a young Dutch mathematician, B. L. van der Waerden, arrived at the University of Göttingen. He immediately began working with Noether, who provided invaluable methods of abstract
conceptualization. van der Waerden later said that her originality was "absolute beyond comparison". In 1931 he published Moderne Algebra, a central text in the field; its second volume borrowed
heavily from Noether's work. Although Emmy Noether did not seek recognition, he included as a note in the seventh edition "based in part on lectures by E. Artin and E. Noether". She sometimes allowed
her colleagues and students to receive credit for her ideas, helping them develop their careers at the expense of her own.
van der Waerden's visit was part of a convergence of mathematicians from all over the world to Göttingen, which became a major hub of mathematical and physical research. From 1926 to 1930 the Russian
topologist, Pavel Alexandrov, lectured at the university, and he and Noether quickly became good friends. He began referring to her as der Noether, using the masculine German article as a term of
endearment to show his respect. She tried to arrange for him to obtain a position at Göttingen as a regular professor, but was only able to help him secure a scholarship from the Rockefeller
Foundation. They met regularly and enjoyed discussions about the intersections of algebra and topology. In his 1935 memorial address, Alexandrov named Emmy Noether "the greatest woman mathematician
of all time".
Lecturing and students
In Göttingen, Noether supervised more than a dozen doctoral students; her first was Grete Hermann, who defended her dissertation in February 1925. She later spoke reverently of her
"dissertation-mother". Noether also supervised Max Deuring, who distinguished himself as an undergraduate and went on to contribute significantly to the field of arithmetic geometry; Hans Fitting,
remembered for Fitting's theorem and the Fitting lemma; and Zeng Jiongzhi, who proved Tsen's theorem. She also worked closely with Wolfgang Krull, who greatly advanced commutative algebra with his
Hauptidealsatz and his dimension theory for commutative rings.
In addition to her mathematical insight, Noether was respected for her consideration of others. Although she sometimes acted rudely toward those who disagreed with her, she nevertheless gained a
reputation for constant helpfulness and patient guidance of new students. Her loyalty to mathematical precision caused one colleague to name her "a severe critic", but she combined this demand for
accuracy with a nurturing attitude. A colleague later described her this way: "Completely unegotistical and free of vanity, she never claimed anything for herself, but promoted the works of her
students above all."
Her frugal lifestyle at first was due to being denied pay for her work; however, even after the university began paying her a small salary in 1923, she continued to live a simple and modest life. She
was paid more generously later in her life, but saved half of her salary to bequeath to her nephew, Gottfried E. Noether.
Mostly unconcerned about appearance and manners, she focused on her studies to the exclusion of romance and fashion. The distinguished algebraist Olga Taussky-Todd described a luncheon, during which
Noether, wholly engrossed in a discussion of mathematics, "gesticulated wildly" as she ate and "spilled her food constantly and wiped it off from her dress, completely unperturbed".
Appearance-conscious students cringed as she retrieved the handkerchief from her blouse and ignored the increasing disarray of her hair during a lecture. Two female students once approached her
during a break in a two-hour class to express their concern, but were unable to break through the energetic mathematics discussion she was having with other students.
According to van der Waerden's obituary of Emmy Noether, she did not follow a lesson plan for her lectures, which frustrated some students. Instead, she used her lectures as a spontaneous discussion
time with her students, to think through and clarify important cutting-edge problems in mathematics. Some of her most important results were developed in these lectures, and the lecture notes of her
students formed the basis for several important textbooks, such as those of van der Waerden and Deuring.
Several of her colleagues attended her lectures, and she allowed some of her ideas, such as the crossed product (verschränktes Produkt in German) of associative algebras, to be published by others.
Noether was recorded as having given at least five semester-long courses at Göttingen:
• Winter 1924/25: Gruppentheorie und hyperkomplexe Zahlen (Group Theory and Hypercomplex Numbers)
• Winter 1927/28: Hyperkomplexe Grössen und Darstellungstheorie (Hypercomplex Quantities and Representation Theory)
• Summer 1928: Nichtkommutative Algebra (Noncommutative Algebra)
• Summer 1929: Nichtkommutative Arithmetik (Noncommutative Arithmetic)
• Winter 1929/30: Algebra der hyperkomplexen Grössen (Algebra of Hypercomplex Quantities)
These courses often preceded major publications in these areas.
Noether spoke quickly—reflecting the speed of her thoughts, many said—and demanded great concentration from her students. Students who disliked her style often felt alienated; one wrote in a notebook
with regard to a class that ended at 1:00 pm: "It's 12:50, thank God!" Some pupils felt that she relied too much on spontaneous discussions. Her most dedicated students, however, relished the
enthusiasm with which she approached mathematics, especially since her lectures often built on earlier work they had done together.
She developed a close circle of colleagues and students who thought along similar lines and tended to exclude those who did not. "Outsiders" who occasionally visited Noether's lectures usually spent
only 30 minutes in the room before leaving in frustration or confusion. A regular student said of one such instance: "The enemy has been defeated; he has cleared out."
Noether showed a devotion to her subject and her students that extended beyond the academic day. Once, when the building was closed for a state holiday, she gathered the class on the steps outside,
led them through the woods, and lectured at a local coffee house. Later, after she had been dismissed by the Third Reich, she invited students into her home to discuss their future plans and
mathematical concepts.
In the winter of 1928–29 Noether accepted an invitation to Moscow State University, where she continued working with P. S. Alexandrov. In addition to carrying on with her research, she taught classes
in abstract algebra and algebraic geometry. She worked with the topologists, Lev Pontryagin and Nikolai Chebotaryov, who later praised her contributions to the development of Galois theory.
Although politics was not central to her life, Noether took a keen interest in political matters and, according to Alexandrov, showed considerable support for the Russian revolution. She was
especially happy to see Soviet advancements in the fields of science and mathematics, which she considered indicative of new opportunities made possible by the Bolshevik project. This attitude caused
her problems in Germany, culminating in her eviction from a pension lodging building, after student leaders complained of living with "a Marxist-leaning Jewess".
Noether planned to return to Moscow, an effort for which she received support from Alexandrov. After she left Germany in 1933 he tried to help her gain a chair at Moscow State University through the
Soviet Education Ministry. Although this effort proved unsuccessful, they corresponded frequently during the 1930s, and in 1935 she made plans for a return to the Soviet Union. Meanwhile her brother Fritz accepted a position at the Research Institute for Mathematics and Mechanics in Tomsk, in the Siberian Federal District of Russia, after losing his job in Germany.
In 1932 Emmy Noether and Emil Artin received the Ackermann–Teubner Memorial Award for their contributions to mathematics. The prize carried a monetary reward of 500 Reichsmarks and was seen as a
long-overdue official recognition of her considerable work in the field. Nevertheless, her colleagues expressed frustration at the fact that she was not elected to the Göttingen Gesellschaft der
Wissenschaften (academy of sciences) and was never promoted to the position of Ordentlicher Professor (full professor).
Noether's colleagues celebrated her fiftieth birthday in 1932, in typical mathematicians' style. Helmut Hasse dedicated an article to her in the Mathematische Annalen, wherein he confirmed her
suspicion that some aspects of noncommutative algebra are simpler than those of commutative algebra, by proving a noncommutative reciprocity law. This pleased her immensely. He also sent her a
mathematical riddle, the "mμν-riddle of syllables", which she solved immediately; the riddle has been lost.
In September of the same year Noether delivered a plenary address (großer Vortrag) on "Hyper-complex systems in their relations to commutative algebra and to number theory" at the International
Congress of Mathematicians in Zürich. The congress was attended by eight hundred people, including Noether's colleagues Hermann Weyl, Edmund Landau, and Wolfgang Krull. There were four hundred and
twenty official participants and twenty-one plenary addresses presented. Apparently, Noether's prominent speaking position was a recognition of the importance of her contributions to the field of
mathematics. The 1932 congress is sometimes described as the high point of her career.
Expulsion from Göttingen
When Adolf Hitler became the German Reichskanzler in January 1933, Nazi activity around the country increased dramatically. At the University of Göttingen the German Students Association led the
attack on the "un-German Spirit" and was aided by a Privatdozent named, Werner Weber, a former student of Emmy Noether. Antisemitic attitudes created a climate hostile to Jewish professors; one young
protester reportedly demanded: "Aryan students want Aryan mathematics and not Jewish mathematics."
One of the first actions of Hitler's administration was the Law for the Restoration of the Professional Civil Service which removed Jews and politically-suspect government employees (including
university professors) from their jobs—unless they had demonstrated their loyalty to Germany by serving in World War I. In April 1933, Noether received a notice from the Prussian Ministry for
Sciences, Art, and Public Education which read: "On the basis of paragraph 3 of the Civil Service Code of 7 April 1933, I hereby withdraw from you the right to teach at the University of Göttingen."
Several of Noether's colleagues, including Max Born and Richard Courant, had their positions revoked. Noether accepted the decision calmly, providing support for others during this difficult time.
Hermann Weyl later wrote: "Emmy Noether—her courage, her frankness, her unconcern about her own fate, her conciliatory spirit—was in the midst of all the hatred and meanness, despair and sorrow
surrounding us, a moral solace." Typically, Noether remained focused on mathematics, gathering students in her apartment to discuss class field theory. When one of her students appeared in the
uniform of the Nazi paramilitary organization Sturmabteilung (SA), she showed no sign of agitation, and reportedly, even laughed about it later.
Bryn Mawr
As dozens of newly-unemployed professors began searching for positions outside of Germany, their colleagues in the United States sought to provide assistance and job opportunities for them. Albert
Einstein and Hermann Weyl were appointed by the Institute for Advanced Study in Princeton, while others worked to find a sponsor required for legal immigration. Noether was contacted by
representatives of two educational institutions, Bryn Mawr College in the United States and Somerville College at the University of Oxford in England. After a series of negotiations with the
Rockefeller Foundation, a grant to Bryn Mawr was approved for Noether and she took a position there, starting in late 1933.
At Bryn Mawr, Noether met and befriended Anna Wheeler, who had studied at Göttingen just before Noether arrived there. Another source of support at the college was the Bryn Mawr president, Marion
Edwards Park, who enthusiastically invited mathematicians in the area to "see Dr. Noether in action!" Noether and a small team of students worked quickly through van der Waerden's 1930 book Moderne
Algebra I and parts of Erich Hecke's Theorie der algebraischen Zahlen (Theory of algebraic numbers, 1908).
In 1934, Noether began lecturing at the Institute for Advanced Study at Princeton upon the invitation of Abraham Flexner and Oswald Veblen. She also worked with and supervised Abraham Albert and
Harry Vandiver. However, she remarked about Princeton University that she was not welcome at the "men's university, where nothing female is admitted".
Her time in the United States was pleasant, surrounded as she was by supportive colleagues and absorbed in her favorite subjects. In the summer of 1934 she briefly returned to Germany to see Emil
Artin and her brother Fritz before he left for Tomsk. Although many of her former colleagues had been forced out of the universities she was able to use the library as a "foreign scholar".
In April 1935 doctors discovered a tumor in Noether's pelvis. Worried about complications from surgery, they ordered two days of bed rest first. During the operation they discovered an ovarian cyst
"the size of a large cantaloupe". Two smaller tumors in her uterus appeared to be benign and were not removed, to avoid prolonging surgery. For three days she appeared to convalesce normally, and
recovered quickly from a circulatory collapse on the fourth. On 14 April, she fell unconscious, her temperature soared to 109 °F (42.8 °C), and she died. "[I]t is not easy to say what had occurred in
Dr. Noether", one of the physicians wrote. "It is possible that there was some form of unusual and virulent infection, which struck the base of the brain where the heat centers are supposed to be
A few days after Noether's death her friends and associates at Bryn Mawr held a small memorial service at President Park's house. Hermann Weyl and Richard Brauer traveled from Princeton and spoke
with Wheeler and Taussky about their departed colleague. In the months which followed, written tributes began to appear around the globe: Albert Einstein joined van der Waerden, Weyl, and Alexandrov
in paying their respects. Her body was cremated and the ashes interred under the walkway around the cloisters of the M. Carey Thomas Library at Bryn Mawr.
Contributions to mathematics and physics
First and foremost Noether is remembered as an algebraist, although her work also had far-ranging consequences for theoretical physics and topology. She showed an acute propensity for abstract
thought, which allowed her to approach problems of mathematics in fresh and original ways. Her friend and colleague Hermann Weyl described her scholarly output in three epochs.
In the first epoch (1908–19), Noether dealt primarily with differential and algebraic invariants, beginning with her dissertation under Paul Albert Gordan. Her mathematical horizons broadened, and
her work became more general and abstract, as she became acquainted with the work of David Hilbert, through close interactions with a successor to Gordan, Ernst Sigismund Fischer. After moving to
Göttingen in 1915, she produced her seminal work for physics, the two Noether's theorems.
In the second epoch (1920–26), Noether devoted herself to developing the theory of mathematical rings.
In the third epoch (1927–35), Noether focused on noncommutative algebra, linear transformations, and commutative number fields.
Historical context
In the century from 1832 to Noether's death in 1935, the field of mathematics—specifically algebra—underwent a profound revolution, whose reverberations are still being felt. Mathematicians of
previous centuries had worked on practical methods for solving specific types of equations, e.g., cubic, quartic, and quintic equations, as well as on the related problem of constructing regular
polygons using compass and straightedge. Beginning with Carl Friedrich Gauss' 1832 proof that prime numbers such as five can be factored in Gaussian integers, Évariste Galois' introduction of groups
in 1832, and William Rowan Hamilton's discovery of quaternions in 1843, however, research turned to determining the properties of ever-more-abstract systems defined by ever-more-universal rules.
Noether's most important contributions to mathematics were to the development of this new field, abstract algebra.
Abstract algebra and begriffliche Mathematik (conceptual mathematics)
Two of the most basic objects in abstract algebra are groups and rings. A group consists of a set of elements and a single operation which combines a first and a second element and returns a third. The operation must satisfy certain constraints for it to determine a group: It must be associative, there must be an identity element (an element which, when combined with another element using the operation, results in the original element, such as adding zero to a number or multiplying it by one), and for every element there must be an inverse element. A ring, likewise, has a set of elements, but now has two operations. The first operation must make the set a group, and the second operation is associative and distributive with respect to the first operation. It may or may not be commutative; this means that the result of applying the operation to a first and a second element is the same as to the second and first—the order of the elements does not matter. If every non-zero element has a multiplicative inverse (an element x such that ax = xa = 1), the ring is called a division ring. A field is defined as a commutative division ring.
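As a concrete illustration of these axioms (our example, not anything from Noether's papers), a brute-force Python check that the integers modulo 5 under addition form a group:

elements = range(5)
op = lambda a, b: (a + b) % 5          # addition modulo 5

closed = all(op(a, b) in elements for a in elements for b in elements)
assoc = all(op(op(a, b), c) == op(a, op(b, c))
            for a in elements for b in elements for c in elements)
identity = next(e for e in elements if all(op(e, a) == a for a in elements))
inverses = all(any(op(a, b) == identity for b in elements) for a in elements)

print(closed, assoc, identity, inverses)   # True True 0 True -- a group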
Groups are frequently studied through group representations. In their most general form, these consist of a choice of group, a set, and an action of the group on the set, that is, an operation which
takes an element of the group and an element of the set and returns an element of the set. Most often, the set is a vector space, and the group represents symmetries of the vector space. For example,
there is a group which represents the rigid rotations of space. This is a type of symmetry of space, because space itself does not change when it is rotated even though the positions of objects in it
do. Noether used these sorts of symmetries in her work on invariants in physics.
A powerful way of studying rings is through their modules. A module consists of a choice of ring, another set, usually distinct from the underlying set of the ring and called the underlying set of
the module, an operation on pairs of elements of the underlying set of the module, and an operation which takes an element of the ring and an element of the module and returns an element of the
module. The underlying set of the module and its operation must form a group. A module is a ring-theoretic version of a group representation: Ignoring the second ring operation and the operation on
pairs of module elements determines a group representation. The real utility of modules is that the kinds of modules that exist and their interactions reveal the structure of the ring in ways that
are not apparent from the ring itself. An important special case of this is an algebra. (The word algebra means both a subject within mathematics as well as an object studied in the subject of
algebra.) An algebra consists of a choice of two rings and an operation which takes an element from each ring and returns an element of the second ring. This operation makes the second ring into a
module over the first. Often the first ring is a field.
Words such as "element" and "combining operation" are very general, and can be applied to many real-world and abstract situations. Any set of things that obeys all the rules for one (or two)
operation(s) is, by definition, a group (or ring), and obeys all theorems about groups (or rings). Integer numbers, and the operations of addition and multiplication, are just one example. For
example, the elements might be computer data words, where the first combining operation is exclusive or and the second is logical conjunction. Theorems of abstract algebra are powerful because they
are general; they govern many systems. It might be imagined that little could be concluded about objects defined with so few properties, but precisely therein lay Noether's gift: to discover the
maximum that could be concluded from a given set of properties, or conversely, to identify the minimum set, the essential properties responsible for a particular observation. Unlike most
mathematicians, she did not make abstractions by generalizing from known examples; rather, she worked directly with the abstractions. As van der Waerden recalled in his obituary of her,
The maxim by which Emmy Noether was guided throughout her work might be formulated as follows: "Any relationships between numbers, functions, and operations become transparent, generally
applicable, and fully productive only after they have been isolated from their particular objects and been formulated as universally valid concepts."
This is the begriffliche Mathematik (purely conceptual mathematics) that was characteristic of Noether. This style of mathematics was adopted by other mathematicians and, after her death, flowered
into new forms, such as category theory.
Integers as an example of a ring
The integers form a commutative ring whose elements are the integers, and the combining operations are addition and multiplication. Any pair of integers can be added or multiplied, always resulting
in another integer, and the first operation, addition, is commutative, i.e., for any elements a and b in the ring, a + b = b + a. The second operation, multiplication, also is commutative, but that
need not be true for other rings, meaning that a combined with b might be different from b combined with a. Examples of noncommutative rings include matrices and quaternions. The integers do not form
a division ring, because the second operation cannot always be inverted; there is no integer a such that 3 × a = 1.
The integers have additional properties which do not generalize to all commutative rings. An important example is the fundamental theorem of arithmetic, which says that every positive integer can be
factored uniquely into prime numbers. Unique factorizations do not always exist in other rings, but Noether found a unique factorization theorem, now called the Lasker–Noether theorem, for the ideals
of many rings. Much of Noether's work lay in determining what properties do hold for all rings, in devising novel analogs of the old integer theorems, and in determining the minimal set of
assumptions required to yield certain properties of rings.
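For comparison with the ring-theoretic generalizations discussed below, here is a minimal trial-division sketch (ours, for illustration) of the unique factorization that the fundamental theorem of arithmetic guarantees:

def prime_factors(n):
    """Return the unique multiset of prime factors of a positive integer."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5] -- unique up to order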
First epoch (1908–19)
Algebraic invariant theory
Much of Noether's work in the first epoch of her career was associated with invariant theory, principally algebraic invariant theory. Invariant theory is concerned with expressions that remain
constant (invariant) under a group of transformations. As an everyday example, if a rigid yardstick is rotated, the coordinates (x, y, z) of its endpoints change, but its length L given by the
formula L^2 = Δx^2 + Δy^2 + Δz^2 remains the same. Invariant theory was an active area of research in the later nineteenth century, prompted in part by Felix Klein's Erlangen program, according to
which different types of geometry should be characterized by their invariants under transformations, e.g., the cross-ratio of projective geometry. The archetypal example of an invariant is the
discriminant B^2 − 4AC of a binary quadratic form Ax^2 + Bxy + Cy^2. This is called an invariant because it is unchanged by linear substitutions x→ax + by, y→cx + dy with determinant ad − bc = 1.
These substitutions form the special linear group SL[2]. (There are no invariants under the general linear group of all invertible linear transformations because these transformations can be
multiplication by a scaling factor. To remedy this, classical invariant theory also considered relative invariants, which were forms invariant up to a scale factor.) One can ask for all polynomials
in A, B, and C that are unchanged by the action of SL[2]; these are called the invariants of binary quadratic forms, and turn out to be the polynomials in the discriminant. More generally, one can
ask for the invariants of homogeneous polynomials A[0]x^ry^0 + ... + A[r]x^0y^r of higher degree, which will be certain polynomials in the coefficients A[0], ... , A[r], and more generally still, one
can ask the similar question for homogeneous polynomials in more than two variables.
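This invariance is easy to verify symbolically. The sympy sketch below (an illustration we add, using a general substitution with symbolic a, b, c, d) confirms that the discriminant of the transformed form equals (ad − bc)^2 times the original, hence is unchanged exactly when ad − bc = 1:

from sympy import symbols, expand, Poly

A, B, C, a, b, c, d, x, y = symbols("A B C a b c d x y")

form = A*x**2 + B*x*y + C*y**2
substituted = expand(form.subs({x: a*x + b*y, y: c*x + d*y}, simultaneous=True))

p = Poly(substituted, x, y)
A2 = p.coeff_monomial(x**2)            # new coefficient of x^2
B2 = p.coeff_monomial(x*y)             # new coefficient of x*y
C2 = p.coeff_monomial(y**2)            # new coefficient of y^2

# The discriminant transforms by the square of the determinant ad - bc.
print(expand(B2**2 - 4*A2*C2 - (a*d - b*c)**2 * (B**2 - 4*A*C)))  # 0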
One of the main goals of invariant theory was to solve the "finite basis problem". The sum or product of any two invariants is invariant, and the finite basis problem asked whether it was possible to
get all the invariants by starting with a finite list of invariants, called generators, and then adding or multiplying the generators together. For example, the discriminant gives a finite basis
(with one element) for the invariants of binary quadratic forms. Noether's advisor, Paul Albert Gordan, was known as the "king of invariant theory", and his chief contribution to mathematics was his
1870 solution of the finite basis problem for invariants of homogeneous polynomials in two variables. He proved this by giving a constructive method for finding all of the invariants and their
generators, but was not able to carry out this constructive approach for invariants in three or more variables. In 1890, David Hilbert proved a similar statement for the invariants of homogeneous
polynomials in any number of variables. Furthermore, his method worked, not only for the special linear group, but also for some of its subgroups such as the special orthogonal group. His first proof
caused some controversy because it did not give a method for constructing the generators, although in later work he made his method constructive. For her thesis, Noether extended Gordan's
computational proof to homogeneous polynomials in three variables. Noether's constructive approach made it possible to study the relationships among the invariants. Later, after she had turned to
more abstract methods, Noether called her thesis Mist (crap) and Formelngestrüpp (a jungle of equations).
Galois theory
Galois theory concerns transformations of number fields that permute the roots of an equation. Consider a polynomial equation of a variable x of degree n, in which the coefficients are drawn from
some "ground" field, which might be, for example, the field of real numbers, rational numbers, or the integers modulo 7. There may or may not be choices of x, which make this polynomial evaluate to
zero. Such choices, if they exist, are called roots. If the polynomial is x^2 + 1 and the field is the real numbers, then the polynomial has no roots, because any choice of x makes the polynomial
greater than or equal to one. If the field is extended, however, then the polynomial may gain roots, and if it is extended enough, then it always has a number of roots equal to its degree. Continuing
the previous example, if the field is enlarged to the complex numbers, then the polynomial gains two roots, i and −i, where i is the imaginary unit, that is, i^ 2 = −1. More generally, the extension
field in which a polynomial can be factored into its roots is known as the splitting field of the polynomial.
The Galois group of a polynomial is the set of all ways of transforming the splitting field, while preserving the ground field and the roots of the polynomial. (In mathematical jargon, these
transformations are called automorphisms.) The Galois group of x^2 + 1 consists of two elements: The identity transformation, which sends every complex number to itself, and complex conjugation,
which sends i to −i. Since the Galois group does not change the ground field, it leaves the coefficients of the polynomial unchanged, so it must leave the set of all roots unchanged. Each root can
move to another root, however, so a transformation determines a permutation of the n roots among themselves. The significance of the Galois group derives from the fundamental theorem of Galois theory,
which proves that the fields lying between the ground field and the splitting field are in one-to-one correspondence with the subgroups of the Galois group.
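The same example can be checked by machine. A short sympy sketch (ours, for illustration) shows that x^2 + 1 does not factor over the rationals but splits once i is adjoined:

from sympy import symbols, factor, I

x = symbols("x")
print(factor(x**2 + 1))               # x**2 + 1: irreducible over the rationals
print(factor(x**2 + 1, extension=I))  # (x - I)*(x + I): splits over Q(i)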
In 1918, Noether published a seminal paper on the inverse Galois problem. Instead of determining the Galois group of transformations of a given field and its extension, Noether asked whether, given a
field and a group, it always is possible to find an extension of the field that has the given group as its Galois group. She reduced this to "Noether's problem", which asks whether the fixed field
of a subgroup G of the permutation group S[n] acting on the field k(x[1], ... , x[n]) always is a pure transcendental extension of the field k. (She first mentioned this problem in a 1913 paper,
where she attributed the problem to her colleague Fischer.) She showed this was true for n = 2, 3, or 4. In 1969, R. G. Swan found a counter-example to Noether's problem, with n = 47 and G a cyclic
group of order 47 (although this group can be realized as a Galois group over the rationals in other ways). The inverse Galois problem remains unsolved.
Noether was brought to Göttingen in 1915 by David Hilbert and Felix Klein, who wanted her expertise in invariant theory to help them in understanding general relativity, a geometrical theory of
gravitation developed mainly by Albert Einstein. Hilbert had observed that the conservation of energy seemed to be violated in general relativity, due to the fact that gravitational energy could
itself gravitate. Noether provided the resolution of this paradox, and a fundamental tool of modern theoretical physics, with her first Noether's theorem, which she proved in 1915, but did not
publish until 1918. She solved the problem not only for general relativity, but determined the conserved quantities for every system of physical laws that possesses some continuous symmetry.
Upon receiving her work, Einstein wrote to Hilbert: "Yesterday I received from Miss Noether a very interesting paper on invariants. I'm impressed that such things can be understood in such a general
way. The old guard at Göttingen should take some lessons from Miss Noether! She seems to know her stuff."
For illustration, if a physical system behaves the same, regardless of how it is oriented in space, the physical laws that govern it are rotationally symmetric; from this symmetry, Noether's theorem
shows the angular momentum of the system must be conserved. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry.
Rather, the symmetry of the physical laws governing the system is responsible for the conservation law. As another example, if a physical experiment has the same outcome at any place and at any time,
then its laws are symmetric under continuous translations in space and time; by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively.
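Stated in the modern field-theory form found in textbooks (not Noether's original 1918 formulation): if the Lagrangian density $\mathcal{L}(\phi, \partial_{\mu}\phi)$ is unchanged by an infinitesimal variation $\phi \to \phi + \epsilon\,\delta\phi$, the current

$j^{\mu} = \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\,\delta\phi$

satisfies $\partial_{\mu} j^{\mu} = 0$ on solutions of the equations of motion, so the charge $Q = \int j^{0}\,d^{3}x$ is constant in time.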
Noether's theorem has become a fundamental tool of modern theoretical physics, both because of the insight it gives into conservation laws, and also, as a practical calculation tool. Her theorem
allows researchers to determine the conserved quantities from the observed symmetries of a physical system. Conversely, it facilitates the description of a physical system based on classes of
hypothetical physical laws. For illustration, suppose that a new physical phenomenon is discovered. Noether's theorem provides a test for theoretical models of the phenomenon: if the theory has a
continuous symmetry, then Noether's theorem guarantees that the theory has a conserved quantity, and for the theory to be correct, this conservation must be observable in experiments.
Second epoch (1920–26)
Although the results of Noether's first epoch were impressive and useful, her fame as a mathematician rests more on the groundbreaking work she did in her second and third epochs, as noted by Hermann
Weyl and B. L. van der Waerden in their obituaries of her.
In these epochs, she was not merely applying ideas and methods of earlier mathematicians; rather, she was crafting new systems of mathematical definitions that would be used by future mathematicians.
In particular, she developed a completely new theory of ideals in rings, generalizing earlier work of Richard Dedekind. She also is renowned for developing ascending chain conditions, a simple
finiteness condition that yielded powerful results in her hands. Such conditions and the theory of ideals enabled Noether to generalize many older results and to treat old problems from a new
perspective, such as elimination theory and the algebraic varieties that had been studied by her father.
Ascending and descending chain conditions
In this epoch, Noether became famous for her deft use of ascending (Teilerkettensatz) or descending (Vielfachenkettensatz) chain conditions. A sequence of non-empty subsets A[1], A[2], A[3], etc. of
a set S is usually said to be strictly ascending if each is a subset of the next
$A_{1} \subset A_{2} \subset A_{3} \subset \cdots$
The ascending chain condition requires that such sequences break off after a finite number of steps; in other words, all such sequences of subsets must be finite. Conversely, with strictly descending
sequences of subsets
$A_{1} \supset A_{2} \supset A_{3} \supset \cdots$
the descending chain condition requires that such sequences break off after a finite number of steps.
Ascending and descending chain conditions are general, meaning that they can be applied to many types of mathematical objects—and, on the surface, they might not seem very powerful. Noether showed
how to exploit such conditions, however, to maximum advantage: for example, how to use them to show that every set of sub-objects has a maximal/minimal element or that a complex object can be
generated by a smaller number of elements. These conclusions often are crucial steps in a proof.
Many types of objects in abstract algebra can satisfy chain conditions, and usually if they satisfy an ascending chain condition, they are called Noetherian in her honour. By definition, a Noetherian
ring satisfies an ascending chain condition on its left and right ideals, whereas a Noetherian group is defined as a group in which every strictly ascending chain of subgroups is finite. A Noetherian
module is a module in which every strictly ascending chain of submodules breaks off after a finite number. A Noetherian space is a topological space in which every strictly increasing chain of open
subspaces breaks off after a finite number of terms; this definition is made so that the spectrum of a Noetherian ring is a Noetherian topological space.
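A concrete instance (our illustration): in the ring of integers every ideal is principal, with (a[1], ... , a[k]) = (gcd(a[1], ... , a[k])), so enlarging a generating set can only replace the generator by one of its divisors, and every ascending chain of ideals stabilizes:

from math import gcd
from functools import reduce

# Each prefix of `gens` generates the ideal (g), where g is the prefix gcd.
gens = [1386, 462, 210, 105, 70, 42]
chain = [reduce(gcd, gens[:k]) for k in range(1, len(gens) + 1)]
print(chain)  # [1386, 462, 42, 21, 7, 7] -- the ascending chain of ideals stabilizes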
The chain condition often is "inherited" by sub-objects. For example, all subspaces of a Noetherian space, are Noetherian themselves; all subgroups and quotient groups of a Noetherian group are
likewise, Noetherian; and, mutatis mutandis, the same holds for submodules and quotient modules of a Noetherian module. All quotient rings of a Noetherian ring are Noetherian, but that does not
necessarily hold for its subrings. The chain condition also may be inherited by combinations or extensions of a Noetherian object. For example, finite direct sums of Noetherian rings are Noetherian,
as is the ring of formal power series over a Noetherian ring.
Another application of such chain conditions is in Noetherian induction—also known as well-founded induction—which is a generalization of mathematical induction. It frequently is used to reduce
general statements about collections of objects to statements about specific objects in that collection. Suppose that S is a partially ordered set. One way of proving a statement about the objects of
S is to assume the existence of a counterexample and deduce a contradiction, thereby proving the contrapositive of the original statement. The basic premise of Noetherian induction is that every
non-empty subset of S contains a minimal element. In particular, the set of all counterexamples contains a minimal element, the minimal counterexample. In order to prove the original statement,
therefore, it suffices to prove something seemingly much weaker: For any counterexample, there is a smaller counterexample.
Commutative rings, ideals, and modules
Noether's paper, Idealtheorie in Ringbereichen (Theory of Ideals in Ring Domains, 1921), is the foundation of general commutative ring theory, and gives one of the first general definitions of a
commutative ring. Before her paper, most results in commutative algebra were restricted to special examples of commutative rings, such as polynomial rings over fields or rings of algebraic integers.
Noether proved that in a ring which satisfies the ascending chain condition on ideals, every ideal is finitely generated. In 1943, the French mathematician Claude Chevalley coined the term Noetherian ring to describe this property. A major result in Noether's 1921 paper is the Lasker–Noether theorem, which extends Lasker's theorem on the primary decomposition of ideals of polynomial rings to all
Noetherian rings. The Lasker–Noether theorem can be viewed as a generalization of the fundamental theorem of arithmetic which states that any positive integer can be expressed as a product of prime
numbers, and that this decomposition is unique.
Noether's work Abstrakter Aufbau der Idealtheorie in algebraischen Zahl- und Funktionenkörpern (Abstract Structure of the Theory of Ideals in Algebraic Number and Function Fields, 1927) characterized
the rings in which the ideals have unique factorization into prime ideals as the Dedekind domains: integral domains that are Noetherian, zero- or one-dimensional, and integrally closed in their quotient
fields. This paper also contains what now are called the isomorphism theorems, which describe some fundamental natural isomorphisms, and some other basic results on Noetherian and Artinian modules.
Elimination theory
In 1923–24, Noether applied her ideal theory to elimination theory—in a formulation that she attributed to her student, Kurt Hentzelt—showing that fundamental theorems about the factorization of
polynomials could be carried over directly. Traditionally, elimination theory is concerned with eliminating one or more variables from a system of polynomial equations, usually by the method of
resultants. For illustration, the system of equations often can be written in the form of a matrix M (missing the variable x) times a vector v (having only different powers of x) equaling the zero
vector, M•v = 0. Hence, the determinant of the matrix M must be zero, providing a new equation in which the variable x has been eliminated.
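A toy case (our illustration): eliminating x from the linear pair a1·x + b1 = 0 and a2·x + b2 = 0 means writing M•v = 0 with M the 2×2 matrix of coefficients and v = (x, 1); a solution with v nonzero forces det M = a1·b2 − a2·b1 = 0, an equation in which x no longer appears.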
Invariant theory of finite groups
Techniques such as Hilbert's original non-constructive solution to the finite basis problem could not be used to get quantitative information about the invariants of a group action, and furthermore,
they did not apply to all group actions. In her 1915 paper, Noether found a solution to the finite basis problem for a finite group of transformations G acting on a finite dimensional vector space
over a field of characteristic zero. Her solution shows that the ring of invariants is generated by homogeneous invariants whose degree is less than, or equal to, the order of the finite group; this
is called, Noether's bound. Her paper gave two proofs of Noether's bound, both of which also work when the characteristic of the field is coprime to |G|!, the factorial of the order |G| of the group
G. The number of generators need not satisfy Noether's bound when the characteristic of the field divides |G|, but Noether was not able to determine whether the bound was correct when the
characteristic of the field divides |G|! but not |G|. For many years, determining the truth or falsity of the bound in this case was an open problem called "Noether's gap". It finally was resolved
independently by Fleischmann in 2000 and Fogarty in 2001, who both showed that the bound remains true.
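For a small illustration of the bound itself (our example, not from the 1915 paper): let the two-element group G act on the polynomial ring k[x], with k of characteristic zero, by sending x to −x. The ring of invariants is k[x^2], generated by a single homogeneous invariant of degree 2 = |G|, so Noether's bound is attained.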
In her 1926 paper, Noether extended Hilbert's theorem to representations of a finite group over any field; the new case that did not follow from Hilbert's work is when the characteristic of the
field divides the order of the group. Noether's result was later extended by William Haboush to all reductive groups by his proof of the Mumford conjecture. In this paper Noether also introduced the
Noether normalization lemma, showing that a finitely generated domain A over a field k has a set x[1], ... , x[n] of algebraically-independent elements such that A is integral over k[x[1], ... , x[n]].
Contributions to topology
As noted by Pavel Alexandrov and Hermann Weyl in their obituaries, Noether's contributions to topology illustrate her generosity with ideas and how her insights could transform entire fields of
mathematics. In topology, mathematicians study the properties of objects that remain invariant even under deformation, properties such as their connectedness. A common joke is that a topologist cannot distinguish her donut from her coffee mug, since they can be smoothly deformed into one another.
Noether is credited with the fundamental ideas that led to the development of algebraic topology from the earlier combinatorial topology, specifically, the idea of homology groups. According to the
account of Alexandrov, Noether attended lectures given by Heinz Hopf and him in the summers of 1926 and 1927, where "she continually made observations, which were often deep and subtle" and he
continues that,
When ... she first became acquainted with a systematic construction of combinatorial topology, she immediately observed that it would be worthwhile to study directly the groups of algebraic
complexes and cycles of a given polyhedron and the subgroup of the cycle group consisting of cycles homologous to zero; instead of the usual definition of Betti numbers, she suggested immediately
defining the Betti group as the complementary (quotient) group of the group of all cycles by the subgroup of cycles homologous to zero. This observation now seems self-evident. But in those years
(1925–1928) this was a completely new point of view.
Noether's suggestion that topology be studied algebraically was adopted immediately by Hopf, Alexandrov, and others, and it became a frequent topic of discussion among the mathematicians of
Göttingen. Noether observed that her idea of a Betti group makes the Euler–Poincaré formula simple to understand, and Hopf's own work on this subject "bears the imprint of these remarks of Emmy
Noether". Noether mentions her own topology ideas only as an aside in one 1926 publication, where she cites it as an application of group theory.
The algebraic approach to topology was developed independently in Austria. In a 1926–27 course given in Vienna, Leopold Vietoris defined a homology group, which was developed by Walther Mayer into an axiomatic definition in 1928.
Third epoch (1927–35)
Hypercomplex numbers and representation theory
Much work on hypercomplex numbers and group representations was carried out in the nineteenth and early twentieth centuries, but remained disparate. Noether united the results and gave the first
general representation theory of groups and algebras. Briefly, Noether subsumed the structure theory of associative algebras and the representation theory of groups into a single arithmetic theory of
modules and ideals in rings satisfying ascending chain conditions. This single work by Noether was of fundamental importance for the development of modern algebra.
Noncommutative algebra
Noether also was responsible for a number of other advancements in the field of algebra. With Emil Artin, Richard Brauer, and Helmut Hasse, she founded the theory of central simple algebras.
A seminal paper by Noether, Helmut Hasse, and Richard Brauer pertains to division algebras, which are algebraic systems in which division is possible. They proved two important theorems: a
local-global theorem stating that if a finite dimensional central division algebra over a number field splits locally everywhere then it splits globally (so is trivial); and from this they deduced their
Hauptsatz ("main theorem"): every finite dimensional central division algebra over an algebraic number field F splits over a cyclic cyclotomic extension. These theorems allow one to classify all
finite dimensional central division algebras over a given number field. A subsequent paper by Noether showed, as a special case of a more general theorem, that all maximal subfields of a division
algebra D are splitting fields. This paper also contains the Skolem–Noether theorem which states that any two embeddings of an extension of a field k into a finite dimensional central simple algebra
over k, are conjugate. The Brauer–Noether theorem gives a characterization of the splitting fields of a central division algebra over a field.
Assessment, recognition, and memorials
Noether's work continues to be relevant for the development of theoretical physics and mathematics and she consistently is ranked as one of the greatest mathematicians of the twentieth century. In
his obituary, fellow algebraist B. L. van der Waerden says that her mathematical originality was "absolute beyond comparison", and Hermann Weyl said that Noether "changed the face of algebra by her
work". During her lifetime and even until today, Noether has been characterized as the greatest woman mathematician in recorded history by mathematicians such as Pavel Alexandrov, Hermann Weyl, and
Jean Dieudonné.
In a letter to The New York Times, Albert Einstein wrote:
In the judgment of the most competent living mathematicians, Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began. In
the realm of algebra, in which the most gifted mathematicians have been busy for centuries, she discovered methods which have proved of enormous importance in the development of the present-day
younger generation of mathematicians.
On 2 January 1935, a few months before her death, mathematician Norbert Wiener wrote that
Miss Noether is ... the greatest woman mathematician who has ever lived; and the greatest woman scientist of any sort now living, and a scholar at least on the plane of Madame Curie.
At an exhibition at the 1964 World's Fair devoted to Modern Mathematicians, Noether was the only woman represented among the notable mathematicians of the modern world.
Noether has been honored in several memorials:
• The Association for Women in Mathematics holds a Noether Lecture to honour women in mathematics every year; in its 2005 pamphlet for the event, the Association characterizes Noether as "one of
the great mathematicians of her time, someone who worked and struggled for what she loved and believed in. Her life and work remain a tremendous inspiration".
• Consistent with her dedication to her students, the University of Siegen houses its mathematics and physics departments in buildings on the Emmy Noether Campus.
• The German Research Foundation ( Deutsche Forschungsgemeinschaft) operates the Emmy Noether Programm, a scholarship providing funding to promising young post-doctorate scholars in their further
research and teaching activities.
• A street in her hometown, Erlangen, has been named after Emmy Noether and her father, Max Noether.
• The successor to the secondary school she attended in Erlangen has been renamed as the Emmy Noether School.
Farther from home,
• The Nöther crater on the far side of the Moon is named for her.
• The 7001 Noether asteroid also is named for Emmy Noether.
List of doctoral students
Each entry gives the date, the student, the university, the dissertation title with an English translation, and the publication.
• 1911.12.16: Falckenberg, Hans (Erlangen). "Verzweigungen von Lösungen nichtlinearer Differentialgleichungen" (Ramifications of Solutions of Nonlinear Differential Equations). Published Leipzig 1912.
• 1916.03.04: Seidelmann, Fritz (Erlangen). "Die Gesamtheit der kubischen und biquadratischen Gleichungen mit Affekt bei beliebigem Rationalitätsbereich" (Complete Set of Cubic and Biquadratic Equations with Affect in an Arbitrary Rationality Domain). Published Erlangen 1916.
• 1925.02.25: Hermann, Grete (Göttingen). "Die Frage der endlich vielen Schritte in der Theorie der Polynomideale unter Benutzung nachgelassener Sätze von Kurt Hentzelt" (The Question of the Finite Number of Steps in the Theory of Ideals of Polynomials using Theorems of the Late Kurt Hentzelt). Published Berlin 1926.
• 1926.07.14: Grell, Heinrich (Göttingen). "Beziehungen zwischen den Idealen verschiedener Ringe" (Relationships between the Ideals of Various Rings). Published Berlin 1927.
• 1927: Doräte, Wilhelm (Göttingen). "Über einem verallgemeinerten Gruppenbegriff" (On a Generalized Conception of Groups). Published Berlin 1927.
• Died before defense: Hölzer, Rudolf (Göttingen). "Zur Theorie der primären Ringe" (On the Theory of Primary Rings). Published Berlin 1927.
• 1929.06.12: Weber, Werner (Göttingen). "Idealtheoretische Deutung der Darstellbarkeit beliebiger natürlicher Zahlen durch quadratische Formen" (Ideal-theoretic Interpretation of the Representability of Arbitrary Natural Numbers by Quadratic Forms). Published Berlin 1930.
• 1929.06.26: Levitski, Jakob (Göttingen). "Über vollständig reduzible Ringe und Unterringe" (On Completely Reducible Rings and Subrings). Published Berlin 1931.
• 1930.06.18: Deuring, Max (Göttingen). "Zur arithmetischen Theorie der algebraischen Funktionen" (On the Arithmetic Theory of Algebraic Functions). Published Berlin 1932.
• 1931.07.29: Fitting, Hans (Göttingen). "Zur Theorie der Automorphismenringe Abelscher Gruppen und ihr Analogon bei nichtkommutativen Gruppen" (On the Theory of Automorphism-Rings of Abelian Groups and Their Analogs in Noncommutative Groups). Published Berlin 1933.
• 1933.07.27: Witt, Ernst (Göttingen). "Riemann-Rochscher Satz und Zeta-Funktion im Hyperkomplexen" (The Riemann-Roch Theorem and Zeta Function in Hypercomplex Numbers). Published Berlin 1934.
• 1933.12.06: Tsen, Chiungtze (Göttingen). "Algebren über Funktionenkörper" (Algebras over Function Fields). Published Göttingen 1934.
• 1934: Schilling, Otto (Marburg). "Über gewisse Beziehungen zwischen der Arithmetik hyperkomplexer Zahlsysteme und algebraischer Zahlkörper" (On Certain Relationships between the Arithmetic of Hypercomplex Number Systems and Algebraic Number Fields). Published Braunschweig 1935.
• 1935: Stauffer, Ruth (Bryn Mawr). "The construction of a normal basis in a separable extension field". Published Baltimore 1936.
• 1935: Vorbeck, Werner (Göttingen). "Nichtgaloissche Zerfällungskörper einfacher Systeme" (Non-Galois Splitting Fields of Simple Systems). No publication listed.
• 1936: Wichmann, Wolfgang (Göttingen). "Anwendungen der p-adischen Theorie im Nichtkommutativen Algebren" (Applications of the p-adic Theory in Noncommutative Algebras). Published in Monatshefte für Mathematik und Physik (1936) 44, 203–224.
Eponymous mathematical topics
• Noetherian
• Noetherian group
• Noetherian ring
• Noetherian module
• Noetherian space
• Noetherian induction
• Noetherian scheme
• Noether normalization lemma
• Noether problem
• Noether's theorem
• Lasker–Noether theorem
• Skolem–Noether theorem
• Brauer–Noether theorem
Coding Theory
Written by Harry Fairhead
Huffman coding
The optimal code for any set of symbols can be constructed by assigning shorter codes to symbols that are more probable and longer codes to less commonly occurring symbols.
The way that this is done is very similar to the binary division used for Shannon-Fano coding but instead of trying to create groups with equal probability we are trying to put
unlikely symbols at the bottom of the “tree”.
The way that this works is that we sort the symbols into order of increasing probability and select the two most unlikely symbols and assign these to a 0/1 split in the code. The
new group consisting of the pair of symbols is now treated as a single symbol with a probability equal to the sum of the probabilities and the process is repeated.
This is called Huffman coding, after its inventor, and it is the optimal code that we have been looking for.
For example suppose we have five symbols A,B,C,D and E with probabilities 0.1, 0.15, 0.2, 0.25 and 0.3 respectively. i.e.
│A │B │C │D │E │
│0.1│0.15 │0.2│0.25 │0.3│
The first stage groups A and B together because these are the least often occurring symbols. The probability of A or B occurring is 0.25 and now we repeat the process treating A/B
as a single symbol.
The first stage of coding
Now the symbols with the smallest probability are C and the A/B pair which gives another split and a combined A/B/C symbol with a probability of 0.45. Notice we could have chosen C and D as the least likely, giving a different but just as good code.
The second stage
The two symbols that are least likely now are D and E with a combined probability of .55. This also completes the coding because there are now only two groups of symbols and we
might as well combine these to produce the finished tree.
The final step
This coding tree gives the most efficient representation of the five letters possible.
To find the code for a symbol you simply move down the tree reading off the zeros and ones as you go until you arrive at the symbol.
To decode a set of bits that has just arrived you start at the top of the tree and take each branch in turn according to whether the bit is a zero or a one until you run out of bits
and arrive at the symbol.
Notice that the length of the code used for each symbol varies depending on how deep in the tree the symbol is.
The theoretical average information in a symbol in this example is approximately 2.23 bits - this is what you get if you work out the average information formula given earlier.
If you try to code B you will find that it corresponds to 111 i.e. three bits and it corresponds to moving down the far right hand branch of the tree.
If you code D you will find it corresponds to 00 i.e. the far left hand branch on the tree.
In fact each remaining letter is coded as either a two or three bit code and guess what? If the symbols occur with their specified probabilities the average length of code used is 2.25 bits - within a few hundredths of a bit of the theoretical minimum.
So we have indeed split the bit!
The code we are using averages 2.25 bits to send a symbol.
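The construction is short enough to code directly. Here is a minimal sketch in Standard ML (our own illustration - the article gives no code, and all of the names are ours). Because of ties in the weights, the bit assignments may differ from the diagrams above, but the code lengths come out the same.

datatype huff = Leaf of char * real
              | Node of real * huff * huff

fun weight (Leaf (_, p)) = p
  | weight (Node (p, _, _)) = p

(* keep the working list sorted by ascending weight *)
fun insert (t, []) = [t]
  | insert (t, u :: rest) =
      if weight t <= weight u then t :: u :: rest
      else u :: insert (t, rest)

(* repeatedly merge the two least likely trees, as described above *)
fun build [t] = t
  | build (a :: b :: rest) =
      build (insert (Node (weight a + weight b, a, b), rest))
  | build [] = raise Empty

(* read each symbol's code off the tree: 0 = left, 1 = right *)
fun codes (Leaf (c, _), prefix) = [(c, prefix)]
  | codes (Node (_, l, r), prefix) =
      codes (l, prefix ^ "0") @ codes (r, prefix ^ "1")

(* decoding walks the tree bit by bit, restarting at the root at
   every leaf; assumes at least two distinct symbols *)
fun decode (root, bits) =
  let
    fun go (Leaf (c, _), bs) = c :: go (root, bs)
      | go (Node (_, l, r), b :: bs) =
          go (if b = #"0" then l else r, bs)
      | go (_, []) = []
  in go (root, String.explode bits) end

For the five-symbol example, build (foldl insert [] (map Leaf [(#"A", 0.1), (#"B", 0.15), (#"C", 0.2), (#"D", 0.25), (#"E", 0.3)])) produces a tree whose codes have exactly the two- and three-bit lengths worked out above.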
Notice that variable length codes raise a practical problem: without some convention you would need to mark how many bits belong to each symbol. The most common way of overcoming this is to use a prefix code, in which no code word is the initial sequence of any other - and the codes read off a Huffman tree have this property automatically, which is why the bit stream can be decoded without ambiguity. Restricting to prefix codes wastes some potential code words but it still generally produces a good degree of data compression.
ZIP it!
If you have some data stored say on disk then it is unlikely to be stored using an efficient code. After all the efficient code would depend on the probabilities that each symbol
occurred with and this is not something taken into account in simple standard codings.
What this means is that almost any file can be stored in less space if you switch to an optimal code.
So now you probably think that data compression programs build Huffman codes for the data on a disk?
They don’t because there are other considerations than achieving the best possible data compression such as speed of coding and decoding.
However, what they do is based on the principle of the Huffman code.
They scan through data and look for patterns of bits that occur often. When they find one, say "01010101", they record it in a table and assign it a short code, say 11. Now whenever the code 11 occurs in the coded data it means 01010101, i.e. 8 bits are now represented by 2. As the data is scanned and repeating patterns are found, the table or "dictionary" is built up and sections of the data are replaced by shorter codes.
This is how the data compression is achieved but when the file is stored back on disk its dictionary has to be stored along with it. In practice data is generally so repetitive that
the coded file plus its dictionary is much smaller than the original.
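To make the substitution step concrete, here is a hedged Standard ML sketch that assumes the dictionary of (phrase, token) pairs has already been built; real compressors build the dictionary adaptively and keep tokens distinguishable from literal data, which this toy version does not attempt.

(* greedily replace the longest matching phrase at each position;
   assumes every phrase in the dictionary is non-empty *)
fun substitute dict s =
  let
    fun go i acc =
      if i >= String.size s then String.concat (rev acc)
      else
        let
          val rest = String.extract (s, i, NONE)
          (* find the longest phrase that prefixes the remainder *)
          val hit =
            List.foldl
              (fn ((p, t), best) =>
                 if String.isPrefix p rest then
                   case best of
                     SOME (p', _) =>
                       if String.size p > String.size p'
                       then SOME (p, t) else best
                   | NONE => SOME (p, t)
                 else best)
              NONE dict
        in
          case hit of
            SOME (p, t) => go (i + String.size p) (t :: acc)
          | NONE => go (i + 1) (String.str (String.sub (s, i)) :: acc)
        end
  in go 0 [] end

For example, substitute [("01010101", "#")] applied to sixteen alternating bits collapses them to just two characters.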
There are even schemes called "data deduping" that build a system wide dictionary and apply it to everything in a storage system. If every document starts in the same way with a
standard heading and ends with a legal statement then this produces huge compression ratios.
What next
Coding theory has a lot to contribute to computing and data compression is just a tiny part of the story. The next application we look at is error detecting and error correcting codes.
Freetown, MA Math Tutor
Find a Freetown, MA Math Tutor
My name is John D. and I'm 28 years old. I graduated from UMass Dartmouth in 2008. I have been tutoring for ten years now.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...My experience with tutoring began in college where I took a student position at the University's Learning Center. I worked with college level students, some with minor learning disabilities,
on a one-to-one basis throughout the week. Some students came in only as needed while others had recurring appointments to meet with me.
49 Subjects: including calculus, elementary (k-6th), study skills, baseball
...You want an analyst, coach and encourager for your child. I've had experience with students who are several years behind their grade level in math. I have often brought them back to grade level.
10 Subjects: including algebra 1, prealgebra, SAT math, ACT Math
...I am comfortable and have been successful working one-to-one, with small groups, up to groups of 25-30. If one explanation or example doesn't work, we will come up with another until we reach
that "Aha!" moment. Feel free to e-mail me with any questions.
14 Subjects: including prealgebra, algebra 1, discrete math, reading
...Thank you for taking the time to read and look forward to hearing from you, so I can help you achieve success for your loved one or yourself =).I spent 2 years working for General Nutrition
Center, the largest vitamin and supplement retail store in the country. I took multiple tests to be a certified sales associate. I am capable of choosing the right supplements to meet
bodybuilding goals.
16 Subjects: including algebra 1, Microsoft Excel, geometry, elementary math
Definite Integral
I am having trouble with this definite integral, especially since it involves infinity, and the way my professor explained the process has me completely baffled. Any help is very much
appreciated, last time one of the members of this forum explained the process better than my professor! Thank you for any help !
P.s. I haven't figured out how to display the definite integral with the math tag sorry about that...
Definite Integral (a= 2 and b=oo infinity) of $1/(x^2-1) dx$
Note that: $\frac{1}{x^2-1}=\frac{1}{2}\cdot\left(\frac{1}{x-1}-\frac{1}{x+1}\right)$
Thus: $\int_2^{\infty}\frac{dx}{x^2-1}=\frac{1}{2}\cdot \lim_{b\rightarrow{+\infty}}\left(\int_2^b\frac{dx }{x-1}-\int_2^b\frac{dx}{x+1}\right)$
$\int_2^b\frac{dx}{x-1}=\ln(b-1)$ and $\int_2^b\frac{dx}{x+1}=\ln(b+1)-\ln(3)$
Thus: $\lim_{b\rightarrow{+\infty}}\left(\int_2^b\frac{dx}{x-1}-\int_2^b\frac{dx}{x+1}\right)=\ln(3)+\lim_{b\rightarrow{+\infty}}\ln\left(\frac{b-1}{b+1}\right)$
But: $\lim_{b\rightarrow{+\infty}}\ln\left(\frac{b-1}{b+1}\right)=0$ since $\frac{b-1}{b+1}=\frac{1-\frac{1}{b}}{1+\frac{1}{b}}\rightarrow{1}$ and the logarithm is a continuous function.
Thus: $\int_2^{\infty}\frac{dx}{x^2-1}=\frac{\ln(3)}{2}$
Using either the definition of $arctanh(x)$, or doing PFD
to get
$\frac{1}{x^2-1}=\frac{A}{x-1}+\frac{B}{x+1}$
Letting $x=1$ we see that $A=\frac{1}{2}$
and letting $x=-1$ we see that $B=\frac{-1}{2}$
So $\frac{1}{x^2-1}=\frac{1}{2}\bigg[\frac{1}{x-1}-\frac{1}{x+1}\bigg]$
So $\int_2^{\infty}\frac{dx}{x^2-1}=\frac{1}{2}\int_2^{\infty}\bigg[\frac{1}{x-1}-\frac{1}{x+1}\bigg]dx=\frac{1}{2}\bigg[\ln(x-1)-\ln(x+1)\bigg]\bigg|_2^{\infty}=\frac{\ln(3)}{2}$
And as I said alternatively $\int_2^{\infty}\frac{dx}{x^2-1}=-arctanh\left(\frac{1}{x}\right)\bigg|_2^{\infty}=arctanh\left(\frac{1}{2}\right)=\frac{\ln(3)}{2}$
Well, as you might expect, we should first begin by finding the antiderivative of the function. So:
$F(x)=\frac{1}{2}\ln \frac{x-1}{x+1}$
Now, recall the formula for definite integrals, taking the upper limit as a limit:
$\lim_{b\to\infty}F(b)-F(2)=\lim_{b\to\infty}\frac{1}{2}\ln \frac{b-1}{b+1}-\frac{1}{2}\ln \frac{2-1}{2+1}=0+\frac{1}{2}\ln 3$
$=\ln \sqrt{3}$
Umbilic Torus, Writ Large
Stony Brook University has a new landmark—a gracefully contoured, intricately patterned ring that rises 24 feet above its granite base. Unveiled in October, this bronze sculpture is a visual testament to the beauty of mathematics.
Created by noted sculptor and mathematician Helaman Ferguson, the sculpture was constructed from 144 bronze plates, each one unique and formed in a sandstone mold carved to order by a computer-controlled robot at Ferguson's Baltimore warehouse studio.
Helaman Ferguson stands with a completed portion of his Umbilic Torus SC sculpture in his Baltimore warehouse studio.
This sculpture, called Umbilic Torus SC, is a giant version of one of Ferguson's signature pieces, Umbilic Torus NC, created in 1988.
One rendition of Umbilic Torus NC, 27 inches tall, stands in the lobby of MAA headquarters in Washington, D.C.
The basic underlying form is a torus, but with a roughly triangular rather than a circular cross section.
This assemblage of bronze plates shows the curved triangular cross section of the sculpture.
The triangular cross section has three inwardly curving sides, which correspond to a curve called a hypocycloid. In this case, the curve is the path followed by a point on the circumference of a small circle that, in turn, is rolling inside a circle three times as wide. The result is a curve with three cusps, known as a deltoid.
As shown in this model, the Stony Brook sculpture's granite base shows this curve.
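For readers who want the equations (a standard parametrization, not given in the original post): with the fixed circle of radius 3 and the rolling circle of radius 1, the deltoid traced by the marked point is x(t) = 2 cos t + cos 2t, y(t) = 2 sin t - sin 2t, and its three cusps occur at t = 0, 2π/3, and 4π/3.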
Imagine sweeping this curved triangle through space while rotating it by 120 degrees before the ends meet to form a loop. The result is one continuous surface, and the three cusps, as seen in the cross section, lie on the same curve. In other words, a finger tracing the cusp-defined rim travels three times around the ring before ending up back at its starting point. The term “umbilic” in this context refers to the particular way in which the torus is twisted to give this property.
The sculpture’s surface is covered by an approximation of a surface-filling curve know as the
Peano-Hilbert curve
. After a few steps, the pattern looks like an intricate but highly regular maze.
After four stages (iterations), the Peano-Hilbert curve begins to look like a maze.
Rendered in bronze, it gives the sculpture a distinctive surface relief pattern—a continuous trail that echoes Mayan pictographic writing or ancient Chinese bronze vessels. Ferguson adapted this pattern to the curved contours of his sculpture.
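For the curious, the maze-like pattern can be generated mechanically. Here is a hedged sketch, in Standard ML, of the standard Hilbert-curve L-system (our illustration, and not how Ferguson actually produced the relief): F means draw forward, + and - mean turn 90 degrees, and the symbols A and B are rewritten at each stage.

fun expand 0 s = s
  | expand n s =
      expand (n - 1)
        (String.translate
           (fn #"A" => "-BF+AFA+FB-"
             | #"B" => "+AF-BFB-FA+"
             | c => String.str c)
           s)

(* expand 4 "A" yields the turtle-graphics program for the
   fourth-stage curve pictured above *)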
Commissioned by Jim Simons and the Simons Foundation, Umbilic Torus SC took nearly two years to complete. The project involved not only Ferguson but also a team of welders, programmers, and others, who had to cope with one challenge after another. Even the problem of moving the massive sculpture from Baltimore to Stony Brook caused much head scratching and required considerable ingenuity to solve.
The official dedication of the sculpture took place on October 25, 2012.
Photos by I. Peterson
From Conservapedia
Mechanics is the branch of physics that studies the motion of bodies. The Greek philosophers were among the first to propose abstract ideas about what motion means. The experimental scientific method was first introduced by Islamic scientists during the Middle Ages. However, it was not until the work of Galileo Galilei and Isaac Newton that the foundations of what is now known as classical mechanics were established.
Mechanics can be broadly divided into classical mechanics, relativistic mechanics and quantum mechanics.
Classical mechanics: Describes the motion of macroscopic bodies, which travel at velocities much less than the speed of light. There are several formulations of classical mechanics. The most widely known is Newtonian mechanics, as established in Isaac Newton's book Philosophiae Naturalis Principia Mathematica. It is based on the fact that bodies accelerate whenever they are under the influence of a force. Classical mechanics can also be studied from the point of view of conservation of energy, or with the more abstract formulations of Lagrangian dynamics and Hamiltonian dynamics. According to classical mechanics, gravity is an “action at a distance” force between two objects that diminishes as the objects become more separated.
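In formulas (standard statements, not specific to this article): Newton's second law reads F = m·a, and the gravitational attraction between masses m1 and m2 separated by a distance r is F = G·m1·m2/r^2, where G is the gravitational constant, so the force falls off as the inverse square of the separation.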
Relativistic mechanics: Describes the motion of bodies which travel at speeds comparable to, but less than, the speed of light. Classical mechanics is an approximation of relativistic mechanics when the object travels at a much lower speed. Its main ideas are that all the laws of physics are the same no matter how fast an observer is moving, as long as the observer moves at a constant speed; that the speed of light is the same for all observers, again independently of how fast they are moving; and that gravity is not a force, but a curvature of spacetime caused by a massive object and “felt” by another massive object. Some authors consider relativistic mechanics part of classical mechanics, and only distinguish between classical and quantum mechanics.
Quantum mechanics: Studies the motion of bodies at very small (microscopic) scales. It was developed in the first half of the 20th century by physicists such as Heisenberg, Planck, Einstein, Schrödinger, and others. According to quantum mechanics, all objects behave at the same time as particles and as waves. For example, we could view light as an electromagnetic wave, or as a stream of particles called photons. Similarly, all ordinary objects have wave-like properties, although these are almost unnoticeable for macroscopic objects. Also, instead of the certain predictions of classical mechanics, quantum mechanics can only make probabilistic statements, both about the current state and about the future of the object under study.
Berkeley Lake, GA Prealgebra Tutor
Find a Berkeley Lake, GA Prealgebra Tutor
I have a BS and MS in Physics from Georgia Tech and a Ph.D. in Mathematics from Carnegie Mellon University. I worked for 30+ years as an applied mathematician for Westinghouse in Pittsburgh.
During that time I also taught as an adjunct professor at CMU and at Duquesne University in the Mathematics Departments.
10 Subjects: including prealgebra, calculus, physics, geometry
...I usually get to know a student by watching them work and figuring out where their weaknesses are. I then provide ways to strengthen the weak points and give pointers on how to correct errors.
I also excel at teaching study methods, along with having a lot of patience to work with students in subjects they may not like. I have extensive experience in tutoring Algebra 1 and 2.
37 Subjects: including prealgebra, reading, English, chemistry
...Samples of my portfolio are available upon request. I am passionate about the creative process and am excited about sharing the fundamentals of artistic expression with others. I believe all
people are creative, whether or not they know it.
26 Subjects: including prealgebra, English, reading, writing
...In 2006, I was certified as a translator of English. One of the most important moments in my career path was entering the College of Foreign Languages at Vinnitsa State Pedagogical
University and being accepted by a government-funded program. The time spent studying methodology, lexicology, grammar and stylistics gave me ability to put my knowledge into practical use.
10 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
...It's a VERY powerful tool - TOO powerful for most - but can be mastered relatively quickly with a little one-on-one training. Physics is a fascinating subject and universally useful. But, it can
seem daunting at first.
32 Subjects: including prealgebra, reading, calculus, physics
Quotient? I'm having a tough time with it would appreciate help!
Simplify using the product and quotient properties of radicals cube root of 4X^4/25Y^6
$\displaystyle \sqrt[3]{\frac{4x^4}{25y^6}} = \frac{\sqrt[3]{4x^4}}{\sqrt[3]{25y^6}}$ $\displaystyle = \frac{\sqrt[3]{4}\sqrt[3]{x^4}}{\sqrt[3]{25}\sqrt[3]{y^6}}$ $\displaystyle = \frac{\sqrt[3]{4}\sqrt[3]{x^4}}{\sqrt[3]{25}\,y^2}$ $\displaystyle = \frac{\sqrt[3]{5}\sqrt[3]{4}\sqrt[3]{x^4}}{\sqrt[3]{5}\sqrt[3]{5^2}\,y^2}$ $\displaystyle = \frac{\sqrt[3]{5\cdot 4\cdot x^4}}{\sqrt[3]{5^3}\,y^2}$ $\displaystyle = \frac{\sqrt[3]{20x^4}}{5y^2}$.
Hello, victorfk06! $\text{Simplify using the product and quotient properties of radicals:}$ $\sqrt[3]{\dfrac{4x^4}{25y^6}}$
$\displaystyle \text{We have: }\;\sqrt[3]{\frac{4\cdot x^3 \cdot x}{5^2 \cdot y^6}}$
$\displaystyle \text{Under the radical multiply by }\frac{5}{5}\!:\;\;\sqrt[3]{\frac{5}{5}\cdot \frac{4\cdot x^3 \cdot x}{5^2\cdot y^6}}$
$\displaystyle \text{We have: }\; \sqrt[3]{\frac{20\cdot x^3 \cdot x}{5^3\cdot y^6}} \;=\;\frac{\sqrt[3]{20}\cdot\sqrt[3]{x^3}\cdot\sqrt[3]{x}}{\sqrt[3]{5^3}\cdot\sqrt[3]{y^6}} \;=\;\frac{\sqrt[3]{20}\cdot x \cdot \sqrt[3]{x}}{5\cdot y^2} \;=\;\frac{x\sqrt[3]{20x}}{5y^2}$
Posts about categories on Math ∩ Programming
A lot of people who like functional programming often give the reason that the functional style is simply more elegant than the imperative style. When compelled or inspired to explain (as I did in my
old post, How I Learned to Love Functional Programming), they often point to the three “higher-order” functions map, fold, and filter, as providing a unifying framework for writing and reasoning
about programs. But how unifying are they, really? In this post we’ll give characterizations of these functions in terms of universal properties, and we’ll see that in fact fold is the “most”
universal of the three, and its natural generalization gives a characterization of transformations of standard compound data types.
By being universal or having a universal property, we don’t mean that map, fold, and filter are somehow mystically connected to all programming paradigms. That might be true, but it’s not the point
of this article. Rather, by saying something has a universal property we are making a precise mathematical statement that it is either an initial or final object (or the unique morphism determined by
such) in some category.
That means that, as a fair warning to the reader, this post assumes some basic knowledge of category theory, but no more than what we’ve covered on this blog in the past. Of particular importance for
the first half of this post is the concept of a universal property, and in the followup post we’ll need some basic knowledge of functors.
Map, Filter, and Fold
Recalling their basic definitions, map is a function which accepts as input a list $L$ whose elements all have the same type (call it $A$), and a function $f$ which maps $A$ to another type $B$. Map
then applies $f$ to every entry of $L$ to produce a new list whose entries all have type $B$.
In most languages, implementing the map function is an elementary exercise. Here is one possible definition in ML.
fun map(_, nil) = nil
| map(f, (head::tail)) = f(head) :: map(f, tail)
Next, filter takes as input a boolean-valued predicate $p : A \to \textup{bool}$ on types $A$ and a list $L$ whose entries have type $A$, and produces a list of those entries of $L$ which satisfy the
predicate. Again, it’s implementation in ML might be:
fun filter(_, nil) = nil
| filter(p, (head::tail)) = if p(head)
then (head :: filter(p, tail))
else filter(p, tail)
Finally, fold is a function which “reduces” a list $L$ of entries with type $A$ down to a single value of type $B$. It accepts as input a function $f : A \times B \to B$, and an initial value $v \in
B$, and produces a value of type $B$ by recursively applying $f$ as follows:
fun fold(_, v, nil) = v
| fold(f, v, (head::tail)) = f(head, fold(f, v, tail))
(If this definition is mysterious, you’re probably not ready for the rest of this post.)
One thing that’s nice about fold is that we can define map and filter in terms of it:
fun map(f, L) = fold((fn (x, xs) => f(x) :: xs), [], L)
fun filter(p, L) = fold((fn (x, xs) => if p(x) then x :: xs else xs), [], L)
We’ll see that this is no accident: fold happens to have the “most general” universality of the three functions, but as a warm-up and a reminder we first investigate the universality of map and
Free Monoids and Map
Map is the easiest function of the three to analyze, but to understand it we need to understand something about monoids. A monoid is simple to describe: it's just a set $M$ with an associative binary
operation denoted by multiplication and an identity element for that operation.
A monoid homomorphism between two monoids $M, N$ is a function $f: M \to N$ such that $f(xy) = f(x)f(y)$ and such that $f$ sends the identity of $M$ to the identity of $N$. Here the multiplication on the left hand side of the first equality is the operation from $M$ and on the right it's the one from $N$ (for groups the identity condition is automatic, so this is essentially the same definition as a group homomorphism). As is to be expected, monoids with monoid homomorphisms form a category.
We encounter simple monoids every day in programming which elucidate this definition. Strings with the operation of string concatenation form a monoid, and the empty string acts as an identity
because concatenating a string to the empty string has no effect. Similarly, lists with list concatenation form a monoid, where the identity is the empty list. A nice example of a monoid homomorphism
is the length function. We know it’s a homomorphism because the length of a concatenation of two lists is just the sum of the lengths of the two lists.
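To spell this out in the ML used above (a quick sketch of ours, not from the original post):

fun len nil = 0
  | len (_ :: xs) = 1 + len xs

(* the homomorphism equations: len(xs @ ys) = len(xs) + len(ys),
   and len sends the identity nil to the identity 0 *)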
Integers also form a monoid, with addition as the operation and zero as the identity element. However, the list and string monoids have an extra special property that integers do not. For a number
$n$ you can always find $-n$ so that $n + (-n) = 0$ is the identity element. But for lists, the only way to concatenate two lists and get the empty list is if both of the lists were empty to begin
with. A monoid with this property is called free, and to fully understand it we should make a definition which won’t seem to mean anything at first.
Definition: Let $\mathbf{C}$ be a category. Given a set $A$, the free object over $A$, denoted $F(A)$, is an object of $\mathbf{C}$ which is universal with respect to set-maps $A \to B$ for any
object $B$ in $\mathbf{C}$.
As usual with a definition by universal property, we should elaborate as to exactly what's going on. Let $\mathbf{C}$ be a category whose objects are sets and let $A$ be a set, possibly not in this
category. We can construct a new category, $\mathbf{C}_{A \to X}$ whose objects are set-maps
$\displaystyle f: A \to X$
and whose morphisms are commutative diagrams of the form
where $\varphi$ is a morphism in $\mathbf{C}$. In other words, we simply require that $\varphi(f_1(a)) = f_2(a)$ for every $a \in A$.
By saying that the free monoid on $A$ satisfies this universal property for the category of monoids, we really mean that it is initial in this category of set-maps and commutative diagrams. That is,
there is a monoid $F(A)$ and a set-map $i_A: A \to F(A)$ so that for every monoid $Y$ and set-map $f: A \to Y$ there is a unique monoid homomorphism from $i_A$ to $f$. In other words, there is a
unique monoid homomorphism $\varphi$ making the following diagram commute:
For example, if $A$ is the set of all unicode characters, then $F(A)$ is precisely the monoid of all strings on those characters. The map $i_A$ is then the map taking a character $a$ to the
single-character string “$a$“. More generally, if $T$ is any type in the category of types and computable functions, then $F(T)$ is the type of homogeneous lists whose elements have type $T$.
We will now restrict our attention to lists and types, and we will denote the free (list) monoid on a type $A$ as $[A]$. The universal property of map is then easy to describe, it’s just a specific
instance of this more general universal property for the free monoids. Specifically, we work in the sub-category of homogeneous list monoids (we only allow objects which are $[T]$ for some type $T$).
The morphism $i_A: A \to [A]$ just takes a value $a$ of type $A$ and spits out a single-item list $[a]$. We might hope that for any mapping $A \to [B]$, there is a unique monoid homomorphism $[A] \to
[B]$ making the following diagram commute.
And while this is true, the diagram lies because “map” does not achieve what $\varphi$ does. The map $f$ might send all of $A$ to the empty list, and this would cause $\varphi$ to be the trivial map.
But if we decomposed $f$ further to require it to send $a$ to a single $b$, such as
then things work out nicely. In particular, specifying a function $f: A \to B$ uniquely defines how a list homomorphism operates on lists. In particular, the diagram forces the behavior for
single-element lists: $[a] \mapsto [f(a)]$. And the property of $\varphi$ being a monoid homomorphism forces $\varphi$ to preserve order.
Indeed, we’ve learned from this analysis that the structure of the list involved is crucial in forming the universal property. Map is far from the only computable function making the diagram commute,
but it is clearly the only monoid homomorphism. As we’ll see, specifying the order of operations is more generally described by fold, and we’ll be able to restate the universal property of map
without relying on monoid homomorphisms.
The filter function is a small step up in complexity from map, but its universal property is almost exactly the same. Again filter can be viewed through the lens of a universal property for list
monoids, because filter itself takes a predicate and produces a list monoid. We know this because filtering two lists by the same predicate and concatenating them is the same as concatenating them
first and then filtering them. Indeed, the only difference here is that the diagram now looks like this
where $p$ is our predicate, and $j_A$ is defined by $(a, T) \mapsto [a]$ and $(a,F) \mapsto []$. The composition $j_A \circ p$ gives the map $A \to [A]$ that we can extend uniquely to a monoid
homomorphism $[A] \to [A]$. We won’t say any more on it, except to mention that again maintaining the order of the list is better described via fold.
Fold, and its Universal Property
Fold is different from map and filter in a very concrete way. Even though map and filter do specify that the order of the list should be preserved, it’s not an important part of their definition:
filter should filter, and map should map, but no interaction occurs between different entries during the computation. In a sense, we got lucky that having everything be monoid homomorphisms resulted
in preserving order without our specific intention.
Because it doesn’t necessarily produce lists and the operation can be quite weird, fold cannot exist without an explicit description of the order of operations. Let’s recall the definition of fold,
and stick to the “right associative” order of operations sometimes specified by calling the function “foldr.” We will use foldr and fold interchangeably.
fun foldr(_, v, nil) = v
| foldr(f, v, (head::tail)) = f(head, foldr(f, v, tail))
Even more, we can’t talk about foldr in terms of monoids. Even after fixing $f$ and $v$, foldr need not produce a monoid homomorphism. So if we’re going to find a universal property of foldr, we’ll
need a more general categorical picture.
One first try would be to view foldr as a morphism of sets
$\displaystyle B \times \textup{Hom}(A \times B, B) \to \textup{Hom}([A], B)$
The mapping is just $f,b \mapsto \textup{foldr } f \textup{ } b$, and this is just the code definition we gave above. One might hope that this mapping defines an isomorphism in the category of types
and programs, but it’s easy to see that it does not. For example, let
$A = \left \{ 1 \right \}, B = \left \{ 1,2 \right \}$
Then the left hand side of the mapping above is a set of size 8 (there are eight ways to combine a choice of element in $B$ with a map from $A \times B \to B$). But the right hand side is clearly infinite. In fact, it's uncountably infinite, though not all possible mappings are realizable in programs of a reasonable length (in fact, very few are). So the morphism can't possibly be surjective and hence is not an isomorphism.
So what can we say about fold? The answer is a bit abstract, but it works out nicely.
Say we fix a type for our lists, $A$. We can define a category $\mathbf{C}_A$ which has as objects the following morphisms
By $1$ we mean the type with one value (null, if you wish), and $f$ is a morphism from a coproduct (i.e. there are implicit parentheses around $A \times B$). As we saw in our post on universal
properties, a morphism from a coproduct is the same thing as a pair of functions which operates on each piece. Here one operates on $1$ and the other on $A \times B$. Since morphisms from $1$ are
specified by the image of the sole value $f(1) = b$, we will often write such a function as $b \amalg h$, where $h: A \times B \to B$.
Now the morphisms in $\mathbf{C}_A$ are the pairs of morphisms $\varphi, \overline{\varphi}$ which make the following diagram commute:
Here we write $\overline{\varphi}$ because it is determined by $\varphi$ in a canonical way. It maps $1 \to 1$ in the only way one can, and it maps $(a,b) \mapsto (a, \varphi(b))$. So we’re really
specifying $\varphi$ alone, but as we’ll see it’s necessary to include $\overline{\varphi}$ as well; it will provide the “inductive step” in the definition of fold.
Now it turns out that fold satisfies the universal property of being initial in this category. Well, not quite. We’re saying first that the following object $\mathscr{A}$ is initial,
Where cons is the list constructor $A \times [A] \to [A]$ which sends $(a_0, [a_1, \dots, a_n]) \mapsto [a_0, a_1, \dots, a_n]$. By being initial, we mean that for any object $\mathscr{B}$ given by
the morphism $b \amalg g: 1 \coprod A \times B \to B$, there is a unique morphism from $\mathscr{A} \to \mathscr{B}$. The “fold” is precisely this unique morphism, which we denote by $(\textup{fold
g, b})$. We implicitly know its “barred” counterpart making the following diagram commute.
This diagram has a lot going on in it, so let’s go ahead and recap. The left column represents an object $\mathscr{A}$ we’re claiming is initial in this crazy category we’ve made. The right hand
side is an arbitrary object $\mathscr{B}$, which is equivalently the data of an element $b \in B$ and a mapping $g: A \times B \to B$. This is precisely the data needed to define fold. The dashed
lines represent the unique morphisms making the diagram commute, whose existence and uniqueness is the defining property of what it means for an object to be initial in this category. Finally, we’re
claiming that foldr, when we fix $g$ and $b$, makes this diagram commute, and is hence the very same unique morphism we seek.
To prove all of this, we need to first show that the object $\mathscr{A}$ is initial. That is, that any two morphisms we pick from $\mathscr{A} \to \mathscr{B}$ must be equal. The first thing to
notice is that the two objects $1 \coprod A \times [A]$ and $[A]$ are really the same thing. That is, $\textup{nil} \amalg \textup{cons}$ has a two-sided inverse which makes it an isomorphism in the
category of types. Specifically, the inverse is the map $\textup{split}$ sending
$\textup{nil} \mapsto \textup{nil}$
$[a_1, \dots, a_n] \mapsto (a_1, [a_2, \dots, a_n])$
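In ML we can render this isomorphism directly, using option to stand in for the coproduct $1 \coprod (A \times [A])$ (a sketch of ours, with NONE playing the role of the $1$ component):

fun split nil = NONE
  | split (a :: rest) = SOME (a, rest)

fun nilCons NONE = nil
  | nilCons (SOME (a, rest)) = a :: rest

(* nilCons o split and split o nilCons are both identity functions *)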
So if we have a morphism $\varphi, \overline{\varphi}$ from $\mathscr{A} \to \mathscr{B}$, and the diagram commutes, we can see that $\varphi = (b \amalg g) \circ \overline{\varphi} \circ \textup
{split}$. We’re just going the long way around the diagram.
Supposing that we have two such morphisms $\varphi_1, \varphi_2$, we can prove they are equal by induction on the length of the input list. It is trivial to see that they both must send the empty
list to $b$. Now suppose that for lists of length $n-1$ the two are equal. Given a list $[a_1, \dots, a_n]$ we see that
$\displaystyle \varphi_1([a_1, \dots, a_n]) = \varphi_1 \circ \textup{cons} (a_1, [a_2, \dots, a_n]) = g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n])$
where the last equality holds by the commutativity of the diagram. Continuing,
$\displaystyle g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n]) = g (a_1, \varphi_1([a_2, \dots, a_n])) = g(a_1, \varphi_2([a_2, \dots, a_n]))$
where the last equality holds by the inductive hypothesis. From here, we can reverse the equalities using $\varphi_2$ and its "barred" version to get back to $\varphi_2([a_1, \dots, a_n])$, proving
the equality.
To show that fold actually makes the diagram commute is even simpler. In order to make the diagram commute we need to send the empty list to $b$, and we need to inductively send $[a_1, \dots, a_n]$
to $g(a_1, (\textup{fold g b})([a_2, \dots, a_n]))$, but this is the very definition of foldr!
So we’ve established that fold has this universal property. It’s easy now to see how map and filter fit into the picture. For mapping types $A$ to $C$ via $f$, just use the type $[C]$ in place of $B$
above, and have $g(a, L) = \textup{cons}(f(a), L)$, and have $b$ be the empty list. Filter similarly just needs a special definition for $g$.
A skeptical reader might ask: what does all of this give us? It’s a good question, and one that shouldn’t be taken lightly because I don’t have an immediate answer. I do believe that with some extra
work we could use universal properties to give a trivial proof of the third homomorphism theorem for lists, which says that any function expressible as both a foldr and a foldl can be expressed as a
list homomorphism. The proof would involve formulating a universal property for foldl, which is very similar to the one for foldr, and attaching the diagrams in a clever way to give the universal
property of a monoid homomorphism for lists. Caveat emptor: this author has not written such a proof, but it seems likely that it would work.
More generally, any time we can represent the requirements of a list computation by an object like $\mathscr{B}$, we can represent the computation as a foldr. What good does this do us? Well, it
might already be obvious when you can and can’t use fold. But in addition to proving when it’s possible to use fold, this new perspective generalizes very nicely to give us a characterization of
arbitrary computations on compound data types. One might want to know when to perform fold-like operations on trees, or other sufficiently complicated beasts, and the universal property gives us a
way to see when such a computation is possible.
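As a teaser for that generalization (a sketch of ours, anticipating rather than quoting the next post), here is the analogous fold for binary trees, again determined by a value for the base case and a function for the recursive case:

datatype 'a tree = Empty
                 | Node of 'a tree * 'a * 'a tree

fun treeFold (_, v, Empty) = v
  | treeFold (g, v, Node (l, x, r)) =
      g (treeFold (g, v, l), x, treeFold (g, v, r))

(* e.g. counting nodes: treeFold (fn (l, _, r) => l + 1 + r, 0, t) *)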
That’s right, I said it: there’s more to the world than lists. Shun me if you must, but I will continue dream of great things.
In an effort to make the egregiously long posts on this blog slightly shorter, we’ll postpone our generalization of the universal property of fold until next time. There we’ll define “initial
algebras” and show how to characterize “fold-like” computations any compound data type.
Until then!
Last time we worked through some basic examples of universal properties, specifically singling out quotients, products, and coproducts. There are many many more universal properties that we will
mention as we encounter them, but there is one crucial topic in category theory that we have only hinted at: functoriality.
As we’ve repeatedly stressed, the meat of category theory is in the morphisms. One natural question one might ask is, what notion of morphism is there between categories themselves? Indeed, the most
straightforward way to see category theoretic concepts in classical mathematics is in a clever choice of functor. For example (and this example isn’t necessary for the rest of the article) one can
“associate” to each topological space a group, called the homology group, in such a way that continuous functions on topological spaces translate to group homomorphisms. Moreover, this translation
is functorial in the following sense: the group homomorphism associated to a composition is the composition of the associated group homomorphisms. If we denote the association by a subscripted
asterisk, then we get the following common formula.
$\displaystyle (fg)_* = f_* g_*$
This is the crucial property that maintains the structure of morphisms. Again, this should reinforce the idea that the crucial ingredient of every definition in category theory is its effect on morphisms.
Functors: a Definition
In complete generality, a functor is a mapping between two categories which preserves the structure of morphisms. Formally,
Definition: Let $\mathbf{C}, \mathbf{D}$ be categories. A functor $\mathscr{F}$ consists of two parts:
• For each object $C \in \mathbf{C}$ an associated object $\mathscr{F}(C) \in \mathbf{D}$.
• For each morphism $f \in \textup{Hom}_{\mathbf{C}}(A,B)$ a corresponding morphism $\mathscr{F}(f) \in \textup{Hom}_{\mathbf{D}}(\mathscr{F}(A), \mathscr{F}(B))$. Specifically, for each $A,B$ we
have a set-function $\textup{Hom}_{\mathbf{C}}(A,B) \to \textup{Hom}_{\mathbf{D}}(\mathscr{F}(A), \mathscr{F}(B))$.
There are two properties that a functor needs to satisfy to “preserve structure.” The first is that the identity morphisms are preserved for every object; that is, $\mathscr{F}(1_A) = 1_{\mathscr{F}
(A)}$ for every object $A$. Second, composition must be preserved. That is, if $f \in \textup{Hom}_{\mathbf{C}}(A,B)$ and $g \in \textup{Hom}_{\mathbf{C}}(B,C)$, we have
$\displaystyle \mathscr{F}(gf) = \mathscr{F}(g) \mathscr{F}(f)$
We often denote a functor as we would a function $\mathscr{F}: \mathbf{C} \to \mathbf{D}$, and use the function application notation as if everything were with sets.
Let’s look at a few simple examples.
Let $\mathbf{FiniteSet}$ be the poset category of finite sets with subsets as morphisms, and let $\mathbf{Int}$ be the category whose objects are integers where there is a unique morphism from $x \to
y$ if $x \leq y$. Then the size function is a functor $\mathbf{FiniteSet} \to \mathbf{Int}$. Continuing with $\mathbf{Int}$, remember that $\mathbf{Int}$ forms a group under addition (also known as $\
mathbb{Z}$). And so by its very definition any group homomorphism $\mathbb{Z} \to \mathbb{Z}$ is a functor from $\mathbf{Int} \to \mathbf{Int}$. A functor from a category to itself is often called
an endofunctor.
There are many examples of functors from the category $\mathbf{Top}$ of topological spaces to the category $\mathbf{Grp}$ of groups. These include some examples we’ve seen on this blog, such as the
fundamental group and homology groups.
One trivial example of a functor is called the forgetful functor. Let $\mathbf{C}$ be a category whose objects are sets and whose morphisms are set-maps with additional structure, for example the
category of groups. Define a functor $\mathbf{C} \to \mathbf{Set}$ which acts as the identity on both sets and functions. This functor simply “forgets” the structure in $\mathbf{C}$. In the realm of
programs and types (this author is thinking Java), one can imagine this as a ‘type-cast’ from String to Object. In the same vein, one could define an “identity” endofunctor $\mathbf{C} \to \mathbf
{C}$ which does absolutely nothing.
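One more example that any ML programmer has already used, offered as a hedged sketch of our own: the list functor, an endofunctor on the category of ML types. On objects it sends a type 'a to 'a list, and on morphisms it sends a function f to List.map f.

(* the morphism part of the list functor *)
fun fmap f = List.map f
(* Functor laws, informally: fmap (fn x => x) acts as the identity on lists,
   and fmap (g o f) = (fmap g) o (fmap f). *)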
One interesting way to think of a functor is as a “realization” of one category inside another. In particular, because the composition structure of $\mathbf{C}$ is preserved by a functor $\mathbf{C}
\to \mathbf{D}$, it must be the case that all commutative diagrams are “sent” to commutative diagrams. In addition, isomorphisms are sent to isomorphisms because if $f, g$ are inverses of each other,
$1_{\mathscr{F}(A)} = \mathscr{F}(1_A) = \mathscr{F}(gf) = \mathscr{F}(g)\mathscr{F}(f)$ and likewise for the reverse composition. And so if we have a functor $\mathscr{F}$ from a poset category
(say, the real numbers with the usual inequality) to some category $\mathbf{C}$, then we can realize the structure of the poset sitting inside of $\mathbf{C}$ (perhaps involving only some of the
objects of $\mathbf{C}$). This view comes in handy in a few places we’ll see later in our series on computational topology.
The Hom Functor
There is a very important and nontrivial example called the “hom functor” which is motivated by the category of vector spaces. We’ll stick to the concrete example of vector spaces, but the
generalization to arbitrary categories is straightforward. If the reader knows absolutely nothing about vector spaces, replace “vector space” with “object” and “linear map” with “morphism.” It won’t
quite be correct, but it will get the idea across.
To each vector space $V$ one can define a dual vector space of functions $V \to \mathbb{C}$ (or whatever the field of scalars for $V$ is). Following the lead of hom sets, the dual vector space is
denoted $\textup{Hom}(V,\mathbb{C})$. Here the morphisms in the set are those from the category of vector spaces (that is, linear maps $V \to \mathbb{C}$). Indeed, this is a vector space: one can add
two functions pointwise ($(f+g)(v) = f(v) + g(v)$) and scale them ($(\lambda f)(v) = \lambda f(v)$), and the properties of a vector space are trivial to check.
Now the mapping $\textup{Hom}(-, \mathbb{C})$ which takes $V$ and produces $\textup{Hom}(V, \mathbb{C})$ is a functor called the hom functor. But let’s inspect this one more closely. The source
category is obviously the category of vector spaces, but what is the target category? The objects are clear: the hom sets $\textup{Hom}(V,\mathbb{C})$ where $V$ is a vector space. The morphisms of
the category are particularly awkward. Officially, they are written as
$\displaystyle \textup{Hom}(\textup{Hom}(V, \mathbb{C}), \textup{Hom}(W, \mathbb{C}))$
so a morphism in this category takes as input a linear map $V \to \mathbb{C}$ and produces as output one $W \to \mathbb{C}$. But what are the morphisms in words we can understand? And how can we
compose them? Before reading on, think about what a morphism of morphisms should look like.
Okay, ready?
The morphisms in this category can be thought of as linear maps $W \to V$. More specifically, given a morphism $\varphi: V \to \mathbb{C}$ and a linear map $g: W \to V$, we can construct a linear map
$W \to \mathbb{C}$ by composing $\varphi g$.
And so if we apply the $\textup{Hom}$ functor to a morphism $f: W \to V$, we get a morphism in $\textup{Hom}(\textup{Hom}(V, \mathbb{C}), \textup{Hom}(W, \mathbb{C}))$. Let’s denote the application
of the hom functor using an asterisk so that $f \mapsto f_*$.
But wait a minute! The mapping here is going in the wrong direction: we took a map in one category going from the $V$ side to the $W$ side, and after applying the functor we got a map going from the
$W$ side ($\textup{Hom}(W, \mathbb{C})$) to the $V$ side ($\textup{Hom}(V, \mathbb{C})$). It seems there is no reasonable way to take a map $V \to \mathbb{C}$ and get a map in $W \to \mathbb{C}$
using just $f$, but the other way is obvious. The hom functor “goes backward” in a sense. In other words, the composition property for our “functor” makes the composite $(gf)_*$ the map taking $\varphi$ to $\varphi g f$. On the other hand, there is no way to compose $g_* f_*$, as they operate on the wrong domains! It must be the other way around:
$\displaystyle (gf)_* = f_* g_*$
We advise the reader to write down the commutative diagram and trace out the compositions to make sure everything works out. But this is a problem, because it makes the hom functor fail the most
important requirement. In order to fix this reversal “problem,” we make the following definition:
Definition: A functor $\mathscr{F} : \mathbf{C} \to \mathbf{D}$ is called covariant if it preserves the order of morphism composition, so that $\mathscr{F}(gf) = \mathscr{F}(g) \mathscr{F}(f)$. If it
reverses the order, we call it contravariant.
And so the hom functor on vector spaces is a contravariant functor, while all of the other functors we’ve defined in this post are covariant.
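The reversal is easy to watch in ML. The following is a hedged sketch of our own, fixing some type 'c to play the role of $\mathbb{C}$: applying “hom into 'c” to a function is just precomposition.

(* homStar sends f : 'a -> 'b to precomposition by f,
   a map of type ('b -> 'c) -> ('a -> 'c) -- note the reversed direction *)
fun homStar f = fn phi => phi o f
(* Contravariance: homStar (g o f) = (homStar f) o (homStar g),
   since phi o (g o f) = (phi o g) o f. *)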
There is another way to describe a contravariant functor as a covariant functor which is often used. It involves the idea of an “opposite” category. For any category $\mathbf{C}$ we can define the
opposite category $\mathbf{C}^{\textup{op}}$ to be a category with the same objects as $\mathbf{C}$, but with all morphisms reversed. That is, we define
$\displaystyle \textup{Hom}_{\mathbf{C}^{\textup{op}}}(A,B) = \textup{Hom}_{\mathbf{C}}(B,A)$
We leave it to the reader to verify that this is indeed a category. It is also not hard to see that $(\mathbf{C}^{\textup{op}})^{\textup{op}} = \mathbf{C}$. Opposite categories give us a nice
recharacterization of a contravariant functor. Indeed, because composition in opposite categories is reversed, a contravariant functor $\mathbf{C} \to \mathbf{D}$ is just a covariant functor on the
opposite category $\mathbf{C}^{\textup{op}} \to \mathbf{D}$. Or equivalently, one $\mathbf{C} \to \mathbf{D}^{\textup{op}}$. More than anything, opposite categories are syntactical sugar. Composition
is only reversed artificially to make domains and codomains line up, but the actual composition is the same as in the original category.
Functors as Types
Before we move on to some code, let’s take a step back and look at the big picture (we’ve certainly plowed through enough details up to this point). The main thesis is that functoriality is a
valuable property for an operation to have, but it’s not entirely clear why. Even the brightest of readers can only assume such properties are useful for mathematical analysis. It seems that the
question we started this series out with, “what does category theory allow us to do that we couldn’t do before?” still has the answer, “nothing.” More relevantly, the question of what functoriality
allows us to do is unclear. Indeed, once again the answer is “nothing.” Rather, functoriality in a computation allows one to analyze the behavior of a program. It gives the programmer a common
abstraction in which to frame operations, and ease in proving the correctness of one’s algorithms.
In this light, the best we can do in implementing functors in programs is to give a type definition and examples. And in this author’s opinion this series is quickly becoming boring (all of the
computational examples are relatively lame), so we will skip the examples in favor of the next post which will analyze more meaty programming constructs from a categorical viewpoint.
So recall the ML type definition of a category, a tuple of operations for source, target, identity, and composition:
datatype ('object, 'arrow)Category =
 category of ('arrow -> 'object) *    (* source *)
 ('arrow -> 'object) *                (* target *)
 ('object -> 'arrow) *                (* identity *)
 ('arrow * 'arrow -> 'arrow)          (* composition *)
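As a quick sanity check on the representation — a hedged instance of our own, assuming the composition operation takes its arguments as (g, f) meaning “g after f” — here is the poset category of integers under $\leq$, with an arrow encoded as its (source, target) pair:

val intPoset =
  category ((fn (x, y) => x),                 (* source *)
            (fn (x, y) => y),                 (* target *)
            (fn x => (x, x)),                 (* identity arrow *)
            (fn ((_, z), (x, _)) => (x, z)))  (* compose (g, f) *)

Nothing in the type enforces the constraint $x \leq y$ on arrows; that discipline is left to the user.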
And so a functor consists of the two categories involved (as types), and the mapping on objects, and the mapping on morphisms.
datatype ('cObject, 'cArrow, 'dObject, 'dArrow)Functor =
 aFunctor of ('cObject, 'cArrow)Category *    (* source category *)
 ('cObject -> 'dObject) *                     (* map on objects *)
 ('cArrow -> 'dArrow) *                       (* map on morphisms *)
 ('dObject, 'dArrow)Category                  (* target category *)
We encourage the reader who is uncomfortable with these type definitions to experiment with them by implementing some of our simpler examples (say, the size functor from sets to integers). Insofar as
the basic definitions go, functors are not all that interesting. They become much more interesting when additional structure is imposed on them, and in the distant future we will see a glimpse of
this in the form of adjointness. We hope to get around to analyzing statements like “syntax and semantics are adjoint functors.” For the next post in this series, we will take the three beloved
functions of functional programming (map, foldl(r), and filter), and see what their categorical properties are.
Until then!
Universal Properties
Previously in this series we’ve seen the definition of a category and a bunch of examples, basic properties of morphisms, and a first look at how to represent categories as types in ML. In this post
we’ll expand these ideas and introduce the notion of a universal property. We’ll see examples from mathematics and write some programs which simultaneously prove certain objects have universal
properties and construct the morphisms involved.
A Grand Simple Thing
One might go so far as to call universal properties the most important concept in category theory. This should initially strike the reader as odd, because at first glance universal properties are so
succinctly described that they don’t seem to be very interesting. In fact, there are only two universal properties and they are that of being initial and final.
Definition: An object $A$ in a category $\mathbf{C}$ is called initial if for every object $B$ there is a unique morphism $A \to B$. An object $Z$ is called final if for every object $B$ there is a
unique morphism $B \to Z$. If an object satisfies either of these properties, it is called universal. If an object satisfies both, it is called a zero object.
In both cases, the existence of a unique morphism is the same as saying the relevant Hom set is a singleton (i.e., for initial objects $A$, the Hom set $\textup{Hom}_{\mathbf{C}}(A,B)$ consists of a
single element). There is one and only one morphism between the two objects. In the particular case of $\textup{Hom}(A,A)$ when $A$ is initial (or final), the definition of a category says there must
be at least one morphism, the identity, and the universal property says there is no other.
There’s only one way such a simple definition could find fruitful applications, and that is by cleverly picking categories. Before we get to constructing interesting categories with useful universal
objects, let’s recognize some universal objects in categories we already know.
In $\mathbf{Set}$ the single element set is final, but not initial; there is only one set-function to a single-element set. It is important to note that the single-element set is far from unique.
There are infinitely many (uncountably many!) singleton sets, but as we have already seen all one-element sets are isomorphic in $\mathbf{Set}$ (they all have the same cardinality). On the other
hand, the empty set is initial, since the “empty function” is the only set-mapping from the empty set to any set. Here the initial object truly is unique, and not just up to isomorphism.
It turns out universal objects are always unique up to isomorphism, when they exist. Here is the official statement.
Proposition: If $A, A'$ are both initial in $\mathbf{C}$, then $A \cong A'$ are isomorphic. If $Z, Z'$ are both final, then $Z \cong Z'$.
Proof. Recall that a morphism $f: A \to B$ is an isomorphism if it has a two sided inverse, a $g:B \to A$ so that $gf = 1_A$ and $fg=1_B$ are the identities. Now if $A,A'$ are two initial objects
there are unique morphisms $f : A \to A'$ and $g: A' \to A$. Moreover, these compose to be morphisms $gf: A \to A$. But since $A$ is initial, the only morphism $A \to A$ is the identity. The
situation for $fg : A' \to A'$ is analogous, and so these morphisms are actually inverses of each other, and $A, A'$ are isomorphic. The proof for final objects is identical. $\square$
Let’s continue with examples. In the category of groups, the trivial group $\left \{ 1 \right \}$ is both initial and final, because group homomorphisms must preserve the identity element. Hence the
trivial group is a zero object. Again, “the” trivial group is not unique, but unique up to isomorphism.
In the category of types with computable (halting) functions as morphisms, the null type is final. To be honest, this depends on how we determine whether two computable functions are “equal.” In this
case, we only care about the set of inputs and outputs, and for the null type all computable functions have the same output: null.
Partial order categories are examples of categories which need not have universal objects. If the partial order is constructed from subsets of a set $X$, then the initial object is the empty set (by
virtue of being a subset of every set), and $X$ as a subset of itself is obviously final. But there are other partial orders, such as inequality of integers, which have no “smallest” or “largest”
objects. Partial order categories which have particularly nice properties (such as initial and final objects, but not quite exactly) are closely related to the concept of a domain in denotational
semantics, and the language of universal properties is relevant to that discussion as well.
The place where universal properties really shine is in defining new constructions. For instance, the direct product of sets is defined by the fact that it satisfies a universal property. Such
constructions abound in category theory, and they work via the ‘diagram categories’ we defined in our introductory post. Let’s investigate them now.
Let’s recall the classical definition from set theory of a quotient. We described special versions of quotients in the categories of groups and topological spaces, and we’ll see them all unified via
the universal property of a quotient in a moment.
Definition: An equivalence relation on a set $X$ is a subset of the set product $\sim \subset X \times X$ which is reflexive, symmetric, and transitive. The quotient set $X / \sim$ is the set of
equivalence classes on $X$. The canonical projection $\pi : X \to X/\sim$ is the map sending $x$ to its equivalence class under $\sim$.
The quotient set $X / \sim$ can also be described in terms of a special property: it is the “largest” set which agrees with the equivalence relation $\sim$. On one hand, it is the case that whenever
$a \sim b$ in $X$ then $\pi(a) = \pi(b)$. Moreover, for any set $Y$ and any map $g: X \to Y$ which equates equivalent things ($g(a) = g(b)$ for all $a \sim b$), then there is a unique map $f : X/\sim
\to Y$ such that $f \pi = g$. This word salad is best translated into a diagram.
Here we use a dashed line to assert the existence of a morphism (once we’ve proven such a morphism exists, we use a solid line instead), and the symbol $\exists !$ signifies existence ($\exists$) and
uniqueness (!).
We can prove this explicitly in the category $\mathbf{Set}$. Indeed, if $g$ is any map such that $g(a) = g(b)$ for all equivalent $a,b \in X$, then we can define $f$ as follows: for any $a \in X$
whose equivalence class is denoted by $[a]$ in $X / \sim$, define $f([a]) = g(a)$. This map is well defined because if $a \sim b$, then $f([a]) = g(a) = g(b) = f([b])$. It is unique because if $f
\pi = g = h \pi$ for some other $h: X / \sim \to Y$, then $h([x]) = g(x) = f([x])$; this is the only possible definition.
Now the “official” way to state this universal property is as follows:
The quotient set $X / \sim$ is universal with respect to the property of mapping $X$ to a set so that equivalent elements have the same image.
But as we said earlier, there are only two kinds of universal properties: initial and final. Now this $X / \sim$ looks suspiciously like an initial object ($f$ is going from $X / \sim$, after all),
but what exactly is the category we’re considering? The trick to dissecting this sentence is to notice that this is not a statement about just $X / \sim$, but of the morphism $\pi$.
That is, we’re considering a diagram category. In more detail: fix an object $X$ in $\mathbf{Set}$ and an equivalence relation $\sim$ on $X$. We define a category $\mathbf{Set}_{X,\sim}$ as follows.
The objects in the category are morphisms $f:X \to Y$ such that $a \sim b$ in $X$ implies $f(a) = f(b)$ in $Y$. The morphisms in the category are commutative diagrams
Here $f_1, f_2$ need to be such that they send equivalent things to equal things (or they wouldn’t be objects in the category!), and by the commutativity of the diagram $f_2 = \varphi f_1$. Indeed,
the statement about quotients is that $\pi : X \to X / \sim$ is an initial object in this category. In fact, we have already proved it! But note the abuse of language in our offset statement above:
it’s not really $X / \sim$ that is the universal object, but $\pi$. Moreover, the statement itself doesn’t tell us what category to inspect, nor whether we care about initial or final objects in that
category. Unfortunately this abuse of language is widespread in the mathematical world, and for arguably good reason. Once one gets acquainted with these kinds of constructions, reading between the
lines becomes much easier and it would be a waste of time to spell it out. After all, once we understand $X / \sim$ there is no “obvious” choice for a map $X \to X / \sim$ except for the projection $
\pi$. This is how $\pi$ got its name, the canonical projection.
Two last bits of terminology: if $\mathbf{C}$ is any category whose objects are sets (and hence, where equivalence relations make sense), we say that $\mathbf{C}$ has quotients if for every object $X$
there is a morphism $\pi$ satisfying the universal property of a quotient. Another way to state the universal property is to say that all maps respecting the equivalence structure factor through the
quotient, in the sense that we get a diagram like the one above.
What is the benefit of viewing $X / \sim$ by its universal property? For one, the set $X / \sim$ is unique up to isomorphism. That is, if any other pair $(Z, g)$ satisfies the same property, we
automatically get an isomorphism $X / \sim \to Z$. For instance, if $\sim$ is defined via a function $f : X \to Y$ (that is, $a \sim b$ if $f(a) = f(b)$), then the pair $(\textup{im}(f), f)$
satisfies the universal property of a quotient. This means that we can “decompose” any function into three pieces:
$\displaystyle X \to X / \sim \to \textup{im}(f) \to Y$
The first map is the canonical projection, the second is the isomorphism given by the universal property of the quotient, and the last is the inclusion map into $Y$. In a sense, all three of these
maps are “canonical.” This isn’t so magical for set-maps, but the same statement (and essentially the same proof) holds for groups and topological spaces, and are revered as theorems. For groups,
this is called The First Isomorphism Theorem, but it’s essentially the claim that the category of groups has quotients.
This is getting a bit abstract, so let’s see how the idea manifests itself as a program. In fact, it’s embarrassingly simple. Using our “simpler” ML definition of a category from last time, the
constructive proof that quotient sets satisfy the universal property is simply a concrete version of the definition of $f$ we gave above. In code,
fun inducedMapFromQuotient(setMap(x, pi, q), setMap(x, g, y)) =
setMap(q, (fn a => g(representative(a))), y)
That is, once we have $\pi$ and $X / \sim$ defined for our given equivalence relation, this function accepts as input any morphism $g$ and produces the uniquely defined $f$ in the diagram above. Here
the “representative” function just returns an arbitrary element of the given set, which we added to the abstract datatype for sets. If the set $X$ is empty, then all functions involved will raise an
“empty” exception upon being called, which is perfectly fine. We leave the functions which explicitly construct the quotient set given $X, \sim$ as an exercise to the reader.
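For completeness, here is one possible way to do that exercise — a hedged sketch of our own in which we flatten the abstraction, representing a set as a list and the relation $\sim$ as a boolean-valued function:

(* quotient rel xs groups the elements of xs into equivalence classes;
   each class is a nonempty list, so hd can serve as "representative" *)
fun quotient rel xs =
  let
    fun insert (x, [])          = [[x]]
      | insert (x, cls :: rest) =
          if rel (x, hd cls) then (x :: cls) :: rest
          else cls :: insert (x, rest)
  in
    foldr insert [] xs
  end

This is correct only when rel really is an equivalence relation, which is exactly the hypothesis of the construction.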
Products and Coproducts
Just as the concept of a quotient set or quotient group can be generalized to a universal property, so can the notion of a product. Again we take our intuition from $\mathbf{Set}$. There the product
of two sets $X,Y$ is the set of ordered pairs
$\displaystyle X \times Y = \left \{ (x,y) : x \in X, y \in Y \right \}$
But as with quotients, there’s much more going on and the key is in the morphisms. Specifically, there are two obvious choices for morphisms $X \times Y \to X$ and $X \times Y \to Y$. These are the
two projections onto the components, namely $\pi_1(x,y) = x$ and $\pi_2(x,y) = y$. These projections are also called “canonical projections,” and they satisfy their own universal property.
The product of sets is universal with respect to the property of having two morphisms to its factors.
Indeed, this idea is so general that it can be formulated in any category, not just categories whose objects are sets. Let $X,Y$ be two fixed objects in a category $\mathbf{C}$. Should it exist, the
product $X \times Y$ is defined to be a final object in the following diagram category. This category has as objects pairs of morphisms
and as morphisms it has commutative diagrams
In words, to say products are final is to say that for any object in this category, there is a unique map $\varphi$ that factors through the product, so that $\pi_1 \varphi = f$ and $\pi_2 \varphi =
g$. In a diagram, it is to claim the following commutes:
If the product $X \times Y$ exists for any pair of objects, we declare that the category $\mathbf{C}$ has products.
Indeed, many familiar product constructions exist in pure mathematics: sets, groups, topological spaces, vector spaces, and rings all have products. In fact, so does the category of ML types. Given
two types ‘a and ‘b, we can form the (aptly named) product type ‘a * ‘b. The canonical projections exist because ML supports parametric polymorphism. They are
fun leftProjection(x,y) = x
fun rightProjection(x,y) = y
And to construct the unique morphism to the product,
fun inducedMapToProduct(f,g) = fn a => (f(a), g(a))
We leave the uniqueness proof to the reader as a brief exercise.
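A small usage example of our own: pairing two views of an integer.

val toPair = inducedMapToProduct (fn n => n * n, Int.toString)
(* toPair 3 evaluates to (9, "3"); leftProjection (toPair 3) = 9 and
   rightProjection (toPair 3) = "3", which is the commutativity
   pi1 o phi = f and pi2 o phi = g in concrete form. *)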
There is a similar notion called a coproduct, denoted $X \coprod Y$, in which everything is reversed: the arrows in the diagram category go to $X \coprod Y$ and the object is initial in the diagram
category. Explicitly, for a fixed $X, Y$ the objects in the diagram category are again pairs of morphisms, but this time with arrows going to the central object
The morphisms are again commutative diagrams, but with the connecting morphism going away from the central object
And a coproduct is defined to be an initial object in this category. That is, a pair of morphisms $i_1, i_2$ such that there is a unique connecting morphism in the following diagram.
Coproducts are far less intuitive than products in their concrete realizations, but the universal property is no more complicated. For the category of sets, the coproduct is a disjoint union (in
which shared elements of two sets $X, Y$ are forcibly considered different), and the canonical morphisms are “inclusion” maps (hence the switch from $\pi$ to $i$ in the diagram above). Specifically,
if we define the coproduct
$\displaystyle X \coprod Y = (X \times \left \{ 1 \right \}) \cup (Y \times \left \{ 2 \right \})$
as the set of “tagged” elements (the right entry in a tuple signifies which piece of the coproduct the left entry came from), then the inclusion maps $i_1(x) = (x,1)$ and $i_2(y) = (y,2)$ are the
canonical morphisms.
There are similar notions of disjoint unions for topological spaces and graphs, which are their categories’ respective coproducts. However, in most categories the coproducts are dramatically
different from “unions.” In group theory, it is a somewhat complicated object known as the free product. We mentioned free products in our hasty discussion of the fundamental group, but understanding
why and where free groups naturally occur is quite technical. Similarly, coproducts of vector spaces (or $R$-modules) are more like a product, with the stipulation that at most finitely many of the
entries of a tuple are nonzero (e.g., formal linear combinations of things from the pieces). Even without understanding these examples, the reader should begin to believe that relatively simple
universal properties can yield very useful objects with potentially difficult constructions in specific categories. The ubiquity of the concepts across drastically different fields of mathematics is
part of why universal properties are called “universal.”
Luckily, the category of ML types has a nice coproduct which feels like a union, but it is not supported as a “native” language feature like product types are. Specifically, given two types ‘a, ‘b
we can define the “coproduct type”
datatype ('a, 'b)Coproduct = left of 'a | right of 'b
Let’s prove this is actually a coproduct: fix two types ‘a and ‘b, and let $i_1, i_2$ be the functions
fun leftInclusion(x) = left(x)
fun rightInclusion(y) = right(y)
Then given any other pair of functions $f,g$ which accept as input types ‘a and ‘b, respectively, there is a unique function $\varphi$ operating on the coproduct type. We construct it as follows.
fun inducedCoproductMap(f, g) =
  let
    fun theMap (left(a))  = f(a)
      | theMap (right(b)) = g(b)
  in
    theMap
  end
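A quick usage example of our own: collapsing a coproduct of int and string down to a string.

val describe = inducedCoproductMap (Int.toString, fn s => s)
(* describe (left 7) evaluates to "7"; describe (right "ok") evaluates to "ok" *)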
The uniqueness of this construction is self-evident. This author finds it fascinating that these definitions are so deep and profound (indeed, category theory is heralded as the queen of
abstraction), but their realizations are trivially obvious to the working programmer. Perhaps this is a statement about how well-behaved the category of ML types is.
Continuing On
So far we have seen three relatively simple examples of universal properties, and explored how they are realized in some categories. We should note before closing two things. The first is that not
every category has objects with these universal properties. Unfortunately poset categories don’t serve as a good counterexample for this (they have both products and coproducts; what are they?), but
it may be the case that in some categories only some pairs of objects have products or coproducts, while others do not.
Second, there are many more universal properties that we haven’t covered here. For instance, the notion of a limit is a universal property, as is the notion of a “free” object. There are kernels,
pull-backs, equalizers, and many other ad-hoc universal properties without names. And for every universal property there is a corresponding “dual” property that results from reversing the arrows in
every diagram, as we did with coproducts. We will visit the relevant ones as they come up in our explorations.
In the next few posts we’ll encounter functors and the concept of functoriality, and start asking some poignant questions about familiar programmatic constructions.
Until then!
Properties of Morphisms
This post is mainly mathematical. We left it out of our introduction to categories for brevity, but we should lay these definitions down and some examples before continuing on to universal properties
and doing more computation. The reader should feel free to skip this post and return to it later when the words “isomorphism,” “monomorphism,” and “epimorphism” come up again. Perhaps the most
important part of this post is the description of an isomorphism.
Isomorphisms, Monomorphisms, and Epimorphisms
Perhaps the most important paradigm shift in category theory is the focus on morphisms as the main object of study. In particular, category theory stipulates that the only knowledge one can gain
about an object is in how it relates to other objects. Indeed, this is true in nearly all fields of mathematics: in groups we consider all isomorphic groups to be the same. In topology, homeomorphic
spaces are not distinguished. The list goes on. The only way to determine if two objects are “the same” is by finding a morphism with special properties. Barry Mazur gives a more colorful
explanation by considering the meaning of the number 5 in his essay, “When is one thing equal to some other thing?” The point is that categories, more than existing to be a “foundation” for all
mathematics as a formal system (though people are working to make such a formal system), exist primarily to “capture the essence” of mathematical discourse, as Mazur puts it. A category defines
objects and morphisms, but literally all of the structure of a category lies in its morphisms. And so we study them.
The strongest kind of morphism we can consider is an isomorphism. An isomorphism is the way we say two objects in a category are “the same.” We don’t usually care whether two objects are equal, but
rather whether some construction is unique up to isomorphism (more on that when we talk of universal properties). The choices made in defining morphisms in a particular category allow us to
strengthen or weaken this idea of “sameness.”
Definition: A morphism $f : A \to B$ in a category $\mathbf{C}$ is an isomorphism if there exists a morphism $g: B \to A$ so that both ways to compose $f$ and $g$ give the identity morphisms on the
respective objects. That is,
$gf = 1_A$ and $fg = 1_B$.
The most basic (usually obvious, but sometimes very deep) question in approaching a new category is to ask what the isomorphisms are. Let us do this now.
In $\mathbf{Set}$ the morphisms are set-functions, and it is not hard to see that any two sets of equal cardinality have a bijection between them. As all bijections have two-sided inverses, two
objects in $\mathbf{Set}$ are isomorphic if and only if they have the same cardinality. For example, all sets of size 10 are isomorphic. This is quite a weak notion of “sameness.” In contrast, there
is a wealth of examples of groups of equal cardinality which are not isomorphic (the smallest example has cardinality 4). On the other end of the spectrum, a poset category $\mathbf{Pos}_X$ has no
isomorphisms except for the identity morphisms. The poset categories still have useful structure, but (as with objects within a category) the interesting structure is in how a poset category relates
to other categories. This will become clearer later when we look at functors, but we just want to dissuade the reader from ruling out poset categories as uninteresting due to a lack of interesting isomorphisms.
Consider the category $\mathbf{C}$ of ML types with ML functions as morphisms. An isomorphism in this category would be a function which has a two-sided inverse. Can the reader think of such a function?
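One easy answer, as a hedged example of ours: the successor function on int, whose two-sided inverse is the predecessor.

fun succ x = x + 1
fun pred x = x - 1
(* (pred o succ) x = x and (succ o pred) x = x for every int x
   (ignoring overflow), so succ is an isomorphism in this category *)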
Let us now move on to other, weaker properties of morphisms.
Definition: A morphism $f: A \to B$ is a monomorphism if for every object $C$ and every pair of morphisms $g,h: C \to A$ the condition $fg = fh$ implies $g = h$.
The reader should parse this notation carefully, but truly think of it in terms of the following commutative diagram:
Whenever this diagram commutes and $f$ is a monomorphism, then we conclude (by definition) that $g=h$. Remember that a diagram commuting just means that all ways to compose morphisms (and arrive at
morphisms with matching sources and targets) result in an identical morphism. In this diagram, commuting is the equivalent of claiming that $fg = fh$, since there are only two nontrivial ways to compose.
The idea is that monomorphisms allow one to “cancel” $f$ from the left side of a composition (although, confusingly, this means the cancelled part is on the right hand side of the diagram).
The corresponding property for cancelling on the right is defined identically.
Definition: A morphism $f: A \to B$ is an epimorphism if for every object $C$ and every pair of morphisms $g,h: B \to C$ the condition $gf = hf$ implies $g = h$.
Again, the relevant diagram.
Whenever $f$ is an epimorphism and this diagram commutes, we can conclude $g=h$.
Now one of the simplest things one can do when considering a category is to identify the monomorphisms and epimorphisms. Let’s do this for a few important examples.
Monos and Epis in Various Categories
In the category $\mathbf{Set}$, monomorphisms and epimorphisms correspond to injective and surjective functions, respectively. Lets see why this is the case for monomorphisms. Recall that an
injective function $f$ has the property that $f(x) = f(y)$ implies $x=y$. With this property we can show $f$ is a monomorphism because if $f(g(x)) = f(h(x))$ then the injective property gives us
immediately that $g(x) = h(x)$. Conversely, if $f$ is a monomorphism and $f(x) = f(y)$, we will construct a set $C$ and two convenient functions $g, h: C \to A$ to help us show that $x=y$. In
particular, pick $C$ to be the one point set $\left \{ c \right \}$, and define $g(c) = x, h(c) = y$. Then as functions $fg = fh$. Indeed, there is only one value in the domain, so saying this
amounts to saying $f(x) = fg(c) = fh(c) = f(y)$, which we know is true by assumption. By the monomorphism property $g = h$, so $x = g(c) = h(c) = y$.
Now consider epimorphisms. It is clear that a surjective map is an epimorphism, but the converse is a bit trickier. We prove by contraposition. Instead of now picking the “one-point set,” for our $C$
, we must choose something which is one element bigger than $B$. In particular, define $g, h : B \to B'$, where $B'$ is $B$ with one additional point $x$ added (which we declare to not already be in
$B$). Then if $f$ is not surjective, and there is some $b_0 \in B$ which is missed by $f$, we define $g(b_0) = x$ and $g(b) = b$ otherwise. We can also define $h$ to be the identity on $B$, so that
$gf = hf$, but $g \neq h$. So epimorphisms are exactly the surjective set-maps.
There is one additional fact that makes the category of sets well-behaved: a morphism in $\mathbf{Set}$ is an isomorphism if and only if it is both a monomorphism and an epimorphism. Indeed,
isomorphisms are set-functions with two-sided inverses (bijections) and we know from classical set theory that bijections are exactly the simultaneous injections and surjections. A warning to the
reader: not all categories are like this! We will see in a moment an example of a nontrivial category in which isomorphisms are not the same thing as simultaneous monomorphisms and epimorphisms.
The category $\mathbf{Grp}$ is very similar to $\mathbf{Set}$ in regards to monomorphisms and epimorphisms. The former are simply injective group homomorphisms, while the latter are surjective group
homomorphisms. And again, a morphism is an isomorphism if and only if it is both a monomorphism and an epimorphism. We invite the reader to peruse the details of the argument above and adapt it to
the case of groups. In both cases, the hard decision is in choosing $C$ when necessary. For monomorphisms, the “one-point group” does not work because we are constrained to send the identity to the
identity in any group homomorphism. The fortuitous reader will avert their eyes and think about which group would work, and otherwise we suggest trying $C = \mathbb{Z}$. After completing the proof,
the reader will see that the trick is to find a $C$ for which only one “choice” can be made. For epimorphisms, the required $C$ is a bit more complex, but we invite the reader to attempt a proof to
see the difficulties involved.
Why do these categories have the same properties but they are acquired in such different ways? It turns out that although these proofs seem different in execution, they are the same in nature, and
they follow from properties of the category as a whole. In particular, the “one-point object” (a singleton set for $\mathbf{Set}$ and $\mathbb{Z}$ for $\mathbf{Grp}$) is more categorically defined as
the “free object on one generator.” We will discuss this more when we get to universal properties, but a “free object on $n$ generators” is roughly speaking an object $A$ for which any morphism with
source $A$ must make exactly $n$ “choices” in its definition. With sets that means $n$ choices for the images of elements, for groups that means $n$ consistent choices for images of group elements.
On the epimorphism side, the construction is a sort of “disjoint union object” which is correctly termed a “coproduct.” But momentarily putting aside all of this new and confusing terminology, let
us see some more examples of morphisms in various categories.
Our recent primer on rings was well-timed, because the category $\mathbf{Ring}$ of rings (with identity) is an example of a not-so-well-behaved category. As with sets and groups, we do have that
monomorphisms are equivalent to injective ring homomorphisms, but the argument is trickier than it was for groups. It is not obvious which “convenient” object $C$ to choose here, since maps $\mathbb
{Z} \to R$ yield no choices: 1 maps to 1, 0 maps to 0, and the properties of a ring homomorphism determine everything else (in fact, the abelian group structure and the fact that units are preserved
is enough). This makes $\mathbb{Z}$ into what’s called an “initial object” in $\mathbf{Ring}$; more on that when we study universal properties. In fact, we invite the reader to return to this post
after we talk about the universal property of polynomial rings. It turns out that $\mathbb{Z}[x]$ is a suitable choice for $C$, and the “choice” made is where to send the indeterminate $x$.
On the other hand, things go awry when trying to apply analogous arguments to epimorphisms. While it is true that every surjective ring homomorphism is an epimorphism (it is already an epimorphism in
$\mathbf{Set}$, and the argument there applies), there are ring epimorphisms which are not surjections! Consider the inclusion map of rings $i : \mathbb{Z} \to \mathbb{Q}$. The map $i$ is not
surjective, but it is an epimorphism. Suppose $g, h : \mathbb{Q} \to R$ are two parallel ring morphisms, and they agree on $\mathbb{Z}$ (they will always do so, since there is only one ring
homomorphism $\mathbb{Z} \to R$). Then $g,h$ must also agree on $\mathbb{Q}$, because if $p,q \in \mathbb{Z}$ with $q \neq 0$, then
$\displaystyle g(p/q) = g(p)g(q^{-1}) = g(p)g(q)^{-1} = h(p)h(q)^{-1} = h(p/q)$
Because the map above is also an injection, the category of rings is a very naturally occurring example of a category which has morphisms that are both epimorphisms and monomorphisms, but not isomorphisms.
There are instances in which monomorphisms and epimorphisms are trivial. Take, for instance any poset category. There is at most one morphism between any two objects, and so the conditions for an
epimorphism and monomorphism vacuously hold. This is an extreme example of a time when simultaneous monomorphisms and epimorphisms are not the same thing as isomorphisms! The only isomorphisms in a
poset category are the identity morphisms.
Morals about Good and Bad Categories
The inspection of epimorphisms and monomorphisms is an important step in the analysis of a category. It gives one insight into how “well-behaved” the category is, and picks out the objects which are
special either for their useful properties or confounding trickery.
This reminds us of a quote of Alexander Grothendieck, one of the immortal gods of mathematics who popularized the use of categories in mainstream mathematics.
A good category containing some bad objects is preferable to a bad category containing only good objects.
I suppose the thesis here is that having only “good” objects yields less interesting and useful structure, and that one should be willing to put up with the bad objects in order to have access to
that structure.
Next time we’ll jump into a discussion of universal properties, and start constructing some programs to prove that various objects satisfy them.
Until then!
Factor Tree for 136
Using Inspiration to create factor trees.
Objective: Students will use Inspiration software to create factor trees for composite whole numbers.
Prerequisite skills: basic knowledge of Inspiration software: how to create new boxes, create correct links, change font size and style, create a text-only box, save and print.
Activity: Step-by-Step Instructions
1. Open Inspiration to a new project.
2. In the Main Idea box, type in "Factor tree for your number."
3. Create two boxes below the main idea box and enter two factors for your number.
4. Check each of your two factors to see if either one is a prime number. Mark your prime numbers with a different type of box.
5. Continue factoring until you have only prime numbers.
6. Rearrange your factors so that all the prime numbers are on one line.
7. At the bottom, use the text-box tool to create a text box and type in the prime factorization for your number. Use Format - Text Style - Superscript to create exponents.
8. Use File - Print Options to set your project to fit onto one page, and to print in black and white. Type your name on your project and print it.
Grading:
Are your factors correct? 60 points
Are your prime factors correctly indicated? 20 points
Do you have the prime factorization written out? 10 points
Have you used the software correctly (correct links, print format, etc.)? 10 points
Sample factor tree with all prime factors on the bottom line, and the prime factorization written below the diagram:
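(The sample diagram is not reproduced here. For 136, the tree can proceed 136 = 2 × 68 = 2 × 2 × 34 = 2 × 2 × 2 × 17, so the bottom line reads 2, 2, 2, 17 and the prime factorization is 136 = 2³ × 17.)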
This diagram created using Inspiration® by Inspiration Software, Inc.
Tujunga Math Tutor
Find a Tujunga Math Tutor
...All ages. Begin now to be ready to play Christmas songs! I have a Bachelor of Music (BMu) from the Eastman School of Music of the University of Rochester (NY) with a major in Applied Music
(Organ/Piano). I provide a solid foundation in classical piano. Beginners of all ages are welcome, includi...
15 Subjects: including algebra 1, prealgebra, reading, English
...Unlike other aspects of music, there is no surefire way to write a song. In my 12 years as a songwriter, I have written hundreds of songs in the genres of pop, rock, folk and country. I have
been a writer under BMI since 2006.
42 Subjects: including algebra 1, grammar, piano, elementary (k-6th)
...I have not only the textbook from which I taught the course, but also the text from when I was enrolled in the class as an undergraduate student. I feel I have a solid grasp of this topic, and
encourage potential students to review my ratings on Rate My Professor. I have taught Linear Algebra at the college level for three years.
14 Subjects: including algebra 1, algebra 2, calculus, geometry
...We moved to LA three years ago, and are loving the beautiful weather, healthy food and breathtaking nature. I've been working as a private tutor and nanny since graduating from college, and
also teach dance at Westridge School in Pasadena. I'm extremely patient and creative, and love working with people to help them achieve their academic and artistic goals.
29 Subjects: including algebra 1, algebra 2, reading, prealgebra
...I have a PhD in Chemical Engineering. I have taken Organic Chemistry for two semesters, and Advanced Organic Chemistry for one semester. I have had 5 years of research in hospital and start-up
35 Subjects: including precalculus, discrete math, algebra 1, algebra 2
6.4: Analyzing Railgun Data Using For Loops
Created by: CK-12
This example requires an understanding of the relationships between acceleration and velocity of an object moving in a straight line. A clear discussion of this relationship can be found in
"Acceleration" (http://cnx.org/content/m13769/latest/); the Wikipedia article "Motion Graphs and Derivatives" (http://en.wikipedia.org/wiki/Motion_graphs_and_derivatives) also has an explanation of
this relationship as well as a discussion of average and instantaneous velocity and acceleration and the role derivatives play. Also, in this example, we will compute approximate integrals using the
trapezoidal rule; the Wikipedia article "Trapezium rule" (http://en.wikipedia.org/wiki/Trapezoidal_rule) has an explanation of the trapezoidal rule.
Velocity Analysis of an Experimental Rail Gun
A railgun is a device that uses electrical energy to accelerate a projectile; information about railguns can be found at the Wikipedia article "Railgun" (http://en.wikipedia.org/wiki/Railgun). The
paper "Effect of Railgun Electrodynamics on Projectile Launch Dynamics" by Zielinski shows the current profile of a railgun launch. The acelleration $a$$\tfrac{m}{s^2}$$c$
$a = 0.0036 c^2 \textrm{sgn}(c)$
where $\textrm{sgn}(c)$ is $1$ if $c > 0$ and $-1$ if $c < 0$.
Exercise 23
Download the data set of current values in the file Current.txt (available at http://cnx.org/content/m14031/latest/Current.txt) onto your computer. The file is formatted as two columns: the first
column is time in milliseconds, and the second column is current in kA.
The following sequence of commands will load the data, create a vector t of time values, create a vector c of current values, and plot the current as a function of time.
load Current.txt -ascii   % two columns: time (msec), current (kA)
t = Current(:,1);         % time values
c = Current(:,2);         % current values
plot(t,c)                 % plot current versus time
xlabel('time (msec)')
ylabel('current (kA)')
The plot should be similar to that in Figure 6.
Plot of railgun current versus time
Exercise 24:
Compute the projectile velocity as a function of time. Note that velocity is the integral of acceleration.
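One possible solution sketch (ours, not part of the original exercise): convert the current to acceleration with the formula above, then accumulate the velocity with the trapezoidal rule inside a for loop, remembering that t is in milliseconds.

a = 0.0036 * c.^2 .* sign(c);   % acceleration (m/s^2) from current (kA)
v = zeros(size(t));             % velocity, assuming the projectile starts at rest
for k = 2:length(t)
    dt = (t(k) - t(k-1)) / 1000;              % time step, msec -> sec
    v(k) = v(k-1) + dt * (a(k) + a(k-1)) / 2; % trapezoidal rule
end
plot(t, v)
xlabel('time (msec)')
ylabel('velocity (m/s)')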
Finite number of cardinals in a model of $ZFC + \neg CH$?
Hello everybody, a little question for logicians: is it possible to construct a model $V \vDash ZFC + \neg CH$ such that there are only finitely many cardinals between $\aleph_0$ and $2^{\aleph_0}$? Same question with exactly one cardinal between $\aleph_0$ and $2^{\aleph_0}$?
2 Yes, $2^{\aleph_0}=\aleph_n$ is relatively consistent with ZFC for any positive integer $n$. – Emil Jeřábek Feb 29 '12 at 13:40
1 Answer
Yes! Almost anything is possible. You can force over a model of ZFC + CH to create a new model where $2^{\aleph_0}$ is $\aleph_2$, for example, so that there is one cardinal between $\
aleph_0$ and the continuum. The idea is to create new binary sequences, new real numbers, with a partial order (a notion of forcing) and allow a generic filter to make it coherent and take
you to the new universe where the continuum is a new size. You could have 3 or 4 or any finite number of cardinals between $\aleph_0$ and $2^{\aleph_0}$ by adding new subsets of natural numbers. See the discussion below about how the answer to the question "What size can the continuum be?" is due to Cohen, Solovay, and Easton. Also, see in the comments how the continuum could reach as far up as $\aleph_{2^{\aleph_0}}$, so there are continuum many cardinals between $\omega$ and $2^{\aleph_0}$. Hamkins' paper on the Multiverse shows that the ability to force to create
models which have a variety of sizes of the continuum settles the continuum hypothesis. You can read all about how to add new reals to create a new model in Thomas Jech's Set Theory or
Kenneth Kunen's book on the same subject.
1 "Almost anything is possible": indeed, as Ed Dean pointed out, Easton's theorem gives a precise expression of this fact. – Todd Trimble♦ Feb 29 '12 at 14:20
2 More precisely, the Cohen-Solovay theorem. Solovay wrote a short note "$2^{\aleph_0}$ can be anything it ought to be" in the 1965 proceedings volume edited by Addison, Henkin and Tarski.
MR0195680 (33 #3878). – Goldstern Feb 29 '12 at 14:58
5 Since the spirit of the question seems to be to look for strange values of the continuum, it might be worth mentioning that it is even possible that there are $2^{\aleph_0}$ cardinals
between $\aleph_0$ and $2^{\aleph_0}$. – Juris Steprans Feb 29 '12 at 16:20
2 Joel, can this be made into a very scary power set operation by Easton's theorem: for every regular $\kappa$ we have that $2^\kappa = \aleph_{2^\kappa}$? – Asaf Karagila Feb 29 '12 at
2 Asaf, yes, it does seem that we can achieve your scary situation: just let $E(\kappa)$ be a suitable aleph-fixed point for each regular cardinal, starting from a model of GCH, and then
appeal to Easton's theorem. I like it! – Joel David Hamkins Mar 1 '12 at 0:27
Difference of two numbers
July 27th 2006, 06:02 AM
Difference of two numbers
Hello I am having some difficulty with this problem.
Two positive numbers differ by 11, and their square roots differ by 1. Find
the numbers.
Difference between the numbers is 11.
This means (x-y)=11
substituting in our main equation, we get:
Now we arrive at two simple linear equations:
solve for 'x' and 'y' from the above eqns to get:
x=5.75, y=-5.25
July 27th 2006, 06:13 AM
Originally Posted by pashah
Hello I am having some difficulty with this problem.
Two positive numbers differ by 11, and their square roots differ by 1. Find
the numbers.
Difference between the numbers is 11.
This means (x-y)=11
substituting in our main equation, we get:
Now we arrive at two simple linear equations:
solve for 'x' and 'y' from the above eqns to get:
x=5.75, y=-5.25
$x-y=11$
$\sqrt{x}-\sqrt{y}=1$
Trial and error indicates 36 and 25 are suitable numbers.
July 27th 2006, 06:21 AM
Thanks Capn
Really fast reply. Much appreciated.
July 27th 2006, 01:03 PM
Originally Posted by pashah
Hello I am having some difficulty with this problem.
Two positive numbers differ by 11, and their square roots differ by 1. Find
the numbers.
Difference between the numbers is 11.
This means (x-y)=11
substituting in our main equation, we get:
Now we arrive at two simple linear equations:
solve for 'x' and 'y' from the above eqns to get:
x=5.75, y=-5.25
A more elegant solution.
You have,
$x - y = 11, \quad \sqrt{x}-\sqrt{y}=1$
Use difference of two squares,
$x - y = (\sqrt{x}+\sqrt{y})(\sqrt{x}-\sqrt{y}) = 11$
Therefore, the two equations you have are,
$\sqrt{x}+\sqrt{y} = 11, \quad \sqrt{x}-\sqrt{y} = 1$
Add then subtract them,
$2\sqrt{x}=12\rightarrow x=36$
$2\sqrt{y}=10\rightarrow y=25$
July 27th 2006, 06:25 PM
You Reign
Thank you for the expert help. One thing you did forget to mention about the four individuals. They were all megalomaniacs who were hell bent on conquering the known world. Unlike yourself I
don't see where they were interested in helping others overcome difficulties. Thanks Again.
July 27th 2006, 06:38 PM
Originally Posted by pashah
Thank you for the expert help. One thing you did forget to mention about the four individuals. They were all megalomaniacs who were hell bent on conquering the known world. Unlike yourself I
don't see where they were interested in helping others overcome difficulties. Thanks Again.
I would place Hannibal of Carthage on the list since he is my favorite (I've seen the Hannibal vs. Rome History Channel special) but he never did try to take over the world.
July 27th 2006, 08:43 PM
Hannibal the Barca
That's a strange coincidence since I am also fond of Hannibal. His father Hamilcar was quite an impressive leader as well. Although, I must say, they seem to have committed some horrible atrocities
in their conquests. Hannibal was known to slaughter entire companies of his own men for fear that they would fall prey to the Roman legions of Scipio. I suspect his biggest mistake was limiting
his tactics. He was redundant and consequently an observant Scipio would later adopt his tactics and use those very same tactics to defeat him in a decisive battle.
Did you know that Hannibal was originally from an ancient tribe of Spanish descent?
July 27th 2006, 09:05 PM
Originally Posted by ThePerfectHacker
I would place Hannibal of Carthage on the list since he is my favorite (I've seen the Hannibal vs. Rome History Channel special) but he never did try to take over the world.
Won spectacular victories in battle, lost war due to inability to
overcome political constraints.
Compare with someone like W S Churchill - dreadful when interfering
with the running of campaigns but understood the (geo-) political
side better than the enemies of the UK - result: the UK on winning side
in one of the most important conflicts in its history.
July 29th 2006, 07:40 PM
Hello, pashah!
TPHacker's solution is elegant.
It can still be solved by "normal" methods.
Two positive numbers differ by 11, and their square roots differ by 1.
Find the numbers.
We have: . $\begin{array}{cc}(1)\;\;x \:- \:y\;=\;\;11 \\ (2)\;\sqrt{x} - \sqrt{y}\:=\:1\end{array}$
From (2), we have: . $\sqrt{y} \,= \,\sqrt{x}-1\quad\Rightarrow\quad y \,= \,(\sqrt{x} - 1)^2$
Substitute into (1): . $x - (\sqrt{x} - 1)^2\:=\:11\quad\Rightarrow\quad x - (x - 2\sqrt{x} + 1) \:= \:11$
Hence: . $2\sqrt{x} - 1 \:=\:11\quad\Rightarrow\quad 2\sqrt{x}\,=\,12\quad\Rightarrow\quad \sqrt{x}\,=\,6\quad\Rightarrow\quad x = 36$
Therefore: . $\boxed{x = 36,\;y = 25}$
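For anyone who wants to confirm the algebra mechanically, here is a minimal Python sketch using SymPy (illustration only; the substitution u = sqrt(x), v = sqrt(y) turns the system into a polynomial one):

from sympy import symbols, solve

# Work with the square roots directly: u = sqrt(x), v = sqrt(y).
u, v = symbols('u v', positive=True)
sol = solve([u**2 - v**2 - 11, u - v - 1], [u, v], dict=True)
print(sol)                                 # [{u: 6, v: 5}]
print([(s[u]**2, s[v]**2) for s in sol])   # [(36, 25)]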
|
{"url":"http://mathhelpforum.com/algebra/4333-difference-two-numbers-print.html","timestamp":"2014-04-17T07:31:00Z","content_type":null,"content_length":"14417","record_id":"<urn:uuid:84b81d13-c7c2-4712-941f-7ac38b5c0b00>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elementary Algebra 4th Edition | 9780495389606 | eCampus.com
List Price: $270.33
Only one copy
in stock at this price.
In Stock Usually Ships in 24 Hours.
Currently Available, Usually Ships in 24-48 Hours
Downloadable Offline Access
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 4th edition with a publication date of 1/10/2008.
What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Used copy of this book is not guaranteed to include any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
Algebra can be like a foreign language. But one text delivers an interpretation you can fully understand. Building a conceptual foundation in the "language of algebra," ELEMENTARY ALGEBRA, 4e
provides an integrated learning process that helps you expand your reasoning abilities as it teaches you how to read, write, and think mathematically. Packed with real-life applications of math, it
blends instructional approaches that include vocabulary, practice, and well-defined pedagogy with an emphasis on reasoning, modeling, communication, and technology skills. The authors' five-step
problem-solving approach makes learning easy. More student-friendly than ever, the text offers a rich collection of student learning tools, including Enhanced WebAssign online learning system. With
ELEMENTARY ALGEBRA, 4e, algebra makes sense!
Table of Contents
An Introduction to Algebra
Equations, Inequalities, and Problem Solving
Linear Equations and Inequalities in Two Variables
Systems of Equations and Inequalities
Exponents and Polynomials
Factoring and Quadratic Equations
Rational Expressions and Equations
Radical Expressions and Equations
Quadratic Equations
Roots and Powers
Answers to Selected Exercises
Table of Contents provided by Publisher. All Rights Reserved.
|
{"url":"http://www.ecampus.com/elementary-algebra-4th-tussyalan-s/bk/9780495389606","timestamp":"2014-04-21T02:43:42Z","content_type":null,"content_length":"59758","record_id":"<urn:uuid:3128a447-19df-41dd-a331-c9c3d0153233>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from July 2010 on The Unapologetic Mathematician
I’ve been thinking about yesterday’s post about the product of measurable spaces, and I’m not really satisfied. I’d like to present a concrete example of what can go wrong when one or the other
factor isn’t a total measurable space — that is, when the underlying set of that factor is not a measurable subset of itself.
Let $X=\mathbb{R}$ be the usual real line. However, instead of the Borel or Lebesgue measurable sets, let $\mathcal{S}$ be the collection of all countable subsets of $X$. We check that this is a $\
sigma$-ring: it’s clearly closed under unions and differences, making it a ring, and the union of a countable collection of countable sets is still countable, so it’s monotone, and thus a $\sigma$
-ring. And every point $x\in X$ is in the measurable set $\{x\}\in\mathcal{S}$, so $(X,\mathcal{S})$ is a measurable space. We'll let $(Y,\mathcal{T})=(Z,\mathcal{U})=(X,\mathcal{S})$ be two more
copies of the same measurable space.
Now we define the “product” space $(X\times Y,\mathcal{S}\times\mathcal{T})$. A subset of $X\times Y$ is in $\mathcal{S}\times\mathcal{T}$ if it can be written as the union of a countable number of
measurable rectangles $A\times B$ with $A\in\mathcal{S}$ and $B\in\mathcal{T}$. But if $A$ and $B$ are both countable, then $A\times B$ is also countable, and thus any measurable subset of $X\times
Y$ is countable. On the other hand, if a subset of $X\times Y$ is countable, it’s the countable union of all of its singletons $\{(a,b)\}=\{a\}\times\{b\}$, each of which is a measurable rectangle.
That is, if a subset of $X\times Y$ is countable, then it is measurable. That is, $\mathcal{S}\times\mathcal{T}$ is the collection of all the countable subsets of $X\times Y$.
Next we define the function $f:Z\to X\times Y$ by setting $f(z)=(z,0)$ for any real number $z$. We check that it’s measurable: given a measurable set $C\subseteq X\times Y$ we calculate
$\displaystyle f^{-1}(C)=\{z\in Z\vert f(z)\in C\}=\{z\in Z\vert (z,0)\in C\}$
but if $C$ is measurable, then it’s countable, and it can contain only countably many points of the form $(z,0)$. The preimage $f^{-1}(C)$ must then be countable, and thus measurable, and thus $f$ is
a measurable function.
That said, the component function $f_2=\pi_2\circ f$ is not measurable. Indeed, we have $f_2(z)=0$ for all real numbers $z$. The set $\{0\}\subseteq Y$ is countable — and thus measurable — but its
preimage $f_2^{-1}(\{0\})=Z$ is the entire real line, which is uncountable and is not measurable. This is exactly the counterintuitive result we were worried about.
Now we return to the category of measurable spaces and measurable functions, and we discuss product spaces. Given two spaces $(X,\mathcal{S})$ and $(Y,\mathcal{T})$, we want to define a $\sigma$-ring
of measurable sets on the product $X\times Y$.
In fact, we’ve seen that given rings $\mathcal{S}$ and $\mathcal{T}$ we can define the product $\mathcal{S}\times\mathcal{T}$ as the collection of finite disjoint unions of sets $A\times B$, where $A
\in\mathcal{S}$ and $B\in\mathcal{T}$. If $\mathcal{S}$ and $\mathcal{T}$ are $\sigma$-rings, then we define $\mathcal{S}\times\mathcal{T}$ to be the smallest monotone class containing this
collection, which will then be a $\sigma$-ring. If $\mathcal{S}$ and $\mathcal{T}$ are $\sigma$-algebras — that is, if $X\in\mathcal{S}$ and $Y\in\mathcal{T}$ — then clearly $X\times Y\in\mathcal{S}\
times\mathcal{T}$, which is thus another $\sigma$-algebra.
Indeed, $(X\times Y,\mathcal{S}\times\mathcal{T})$ is a measurable space. The collection $\mathcal{S}\times\mathcal{T}$ is a $\sigma$-algebra, and every point is in some one of these sets. If $(x,y)\
in X\times Y$, then $x\in A\in\mathcal{S}$ and $y\in B\in\mathcal{T}$, so $(x,y)\in A\times B\in\mathcal{S}\times\mathcal{T}$. But is this really the $\sigma$-algebra we want?
Most approaches to measure theory simply define this to be the product of two measurable spaces, but we have a broader perspective here. Indeed, we should be asking if this is a product object in the
category of measurable spaces! That is, the underlying space $X\times Y$ comes equipped with projection functions $\pi_1:X\times Y\to X$ and $\pi_2:X\times Y\to Y$. We must ask if these projections
are measurable for our choice of $\sigma$-algebra, and also that they satisfy the universal property of a product object.
Checking measurability is pretty straightforward. It’s the same for either projector, so we’ll consider $\pi_1:X\times Y\to X$ explicitly. Given a measurable set $A\in\mathcal{S}$, the preimage $\
pi_1^{-1}(A)=A\times Y$ must be measurable as well. And here we run into a snag: we know that $A$ is measurable, and so for any measurable $B\in\mathcal{T}$ the subset $A\times B\subseteq A\times Y$
is measurable. In particular, if $\mathcal{T}$ is a $\sigma$-algebra then $A\times Y$ is measurable right off. However, if $Y$ is not itself measurable then $Y$ is not the limit of any increasing
countable sequence of measurable sets either (since $\sigma$-rings are monotonic classes). Thus $A\times Y$ is not the limit of any increasing sequence of measurable products $A\times B$, and so $A\
times Y$ can’t be in the smallest monotonic class generated by such products, and thus can’t be measurable!
So in order for $(X\times Y,\mathcal{S}\times\mathcal{T})$ to be a product object, it’s necessary that $\mathcal{T}$ be a $\sigma$-algebra. Similarly, $\mathcal{S}$ must be a $\sigma$-algebra as
well. What would happen if this condition fails? Consider a measurable function $f:Z\to X\times Y$ defined by $f(z)=(f_1(z),f_2(z))$. We can write $f_1=\pi_1\circ f$, but since $\pi_1$ is not
measurable we have no guarantee that $f_1$ will be measurable!
On the other hand, all is not lost; if $f_1:Z\to X$ and $f_2:Z\to Y$ are both measurable, and if $A\in\mathcal{S}$ and $B\in\mathcal{T}$, then we calculate the preimage
$\displaystyle\begin{aligned}f^{-1}(A\times B)&=\left\{z\in Z\vert f(z)\in A\times B\right\}\\&=\left\{z\in Z\vert (f_1(z),f_2(z))\in A\times B\right\}\\&=\left\{z\in Z\vert f_1(z)\in A\textrm{ and }f_2(z)\in B\right\}\\&=\left\{z\in Z\vert f_1(z)\in A\right\}\cap\left\{z\in Z\vert f_2(z)\in B\right\}\\&=f_1^{-1}(A)\cap f_2^{-1}(B)\end{aligned}$
which is thus measurable. Monotone limits of finite disjoint unions of such sets are easily handled. Thus if both components of $f$ are measurable, then $f$ is measurable.
Okay, so back to the case where $\mathcal{S}$ and $\mathcal{T}$ are both $\sigma$-algebras, and so $\pi_1$ and $\pi_2$ are both measurable. We still need to show that the universal property holds.
That is, given two measurable functions $f_1:Z\to X$ and $f_2:Z\to Y$, we must show that there exists a unique measurable function $f:Z\to X\times Y$ so that $f_1=\pi_1\circ f$ and $f_2=\pi_2\circ f$
. There’s obviously a unique function on the underlying set: $z\mapsto(f_1(z),f_2(z))$. And the previous paragraph shows that this function must be measurable!
So, the uniqueness property always holds, but with the one caveat that the projectors may not themselves be measurable. That is, the full subcategory of total measurable spaces has product objects,
but the category of measurable spaces overall does not. However, we’ll still talk about the “product” space $X\times Y$ with the $\sigma$-ring $\mathcal{S}\times\mathcal{T}$ understood.
A couple of notes are in order. First of all, this is the first time that we’ve actually used the requirement that every point in a measurable space be a member of some measurable set or another. It
will become more important as we go on. Secondly, we define a “measurable rectangle” in $X\times Y$ to be a set $A\times B$ so that $A\in\mathcal{S}$ and $B\in\mathcal{T}$ — that is, one for which
both “sides” are measurable. The class of all measurable sets $\mathcal{S}\times\mathcal{T}$ is the $\sigma$-ring generated by all the measurable rectangles.
As we've said before, singularity and absolute continuity are diametrically opposed. And so it's not entirely surprising that if we have two totally $\sigma$-finite signed measures $\mu$ and $\nu$, then we can break $\nu$ into two uniquely-defined pieces, $\nu_c$ and $\nu_s$, so that $\nu_c\ll\mu$, $\nu_s\perp\mu$, and $\nu=\nu_c+\nu_s$. We call such a pair the "Lebesgue decomposition" of $\nu$ with respect to $\mu$.

Since a signed measure is absolutely continuous or singular with respect to $\mu$ if and only if it's absolutely continuous or singular with respect to $\lvert\mu\rvert$, we may as well assume that $\mu$ is a measure. And since in both cases $\nu$ is absolutely continuous or singular with respect to $\mu$ if and only if $\nu^+$ and $\nu^-$ both are, we may as well assume that $\nu$ is also a measure. And, as usual, we can break our measurable space $X$ down into the disjoint union of countably many subspaces on which both $\mu$ and $\nu$ are totally finite. We can assemble the Lebesgue decompositions on a collection of such subspaces into the Lebesgue decomposition on the whole, and so we can assume that $\mu$ and $\nu$ are totally finite.

Now we can move on to the proof itself; the core is based on our very first result about absolute continuity: $\nu\ll\mu+\nu$. Thus the Radon-Nikodym theorem tells us that there exists a function $f$ so that

$\displaystyle\nu(E)=\int\limits_Ef\,d(\mu+\nu)$

for every measurable set $E$. Since $0\leq\nu(E)\leq\mu(E)+\nu(E)$, we must have $0\leq f\leq1$ $(\mu+\nu)$-a.e., and thus $0\leq f\leq1$ $\nu$-a.e. as well. Since the integral splits as

$\displaystyle\nu(E)=\int\limits_Ef\,d\mu+\int\limits_Ef\,d\nu$

we can work with each piece separately. Let us define $A=\{x\in X\vert f(x)=1\}$ and $B=\{x\in X\vert0\leq f(x)<1\}$. Then we calculate

$\displaystyle\nu(A)=\int\limits_Af\,d(\mu+\nu)=\mu(A)+\nu(A)$

and thus (by the finiteness of $\nu$), $\mu(A)=0$. Defining $\nu_s(E)=\nu(E\cap A)$ and $\nu_c(E)=\nu(E\cap B)$, it's clear that $\nu_s\perp\mu$. We still must prove that $\nu_c\ll\mu$.

If $\mu(E)=0$, then we calculate

$\displaystyle\int\limits_{E\cap B}\,d\nu=\nu(E\cap B)=\int\limits_{E\cap B}f\,d\mu+\int\limits_{E\cap B}f\,d\nu=\int\limits_{E\cap B}f\,d\nu$

and, therefore

$\displaystyle\int\limits_{E\cap B}(1-f)\,d\nu=0$

But $1-f>0$ on $B$, which means that we must have $\nu_c(E)=\nu(E\cap B)=0$, and thus $\nu_c\ll\mu$.

Now, suppose $\nu=\nu_s+\nu_c$ and $\nu=\bar{\nu}_s+\bar{\nu}_c$ are two Lebesgue decompositions of $\nu$ with respect to $\mu$. Then $\nu_s-\bar{\nu}_s=\bar{\nu}_c-\nu_c$. We know that both singularity and absolute continuity pass to sums, so $\nu_s-\bar{\nu}_s$ is singular with respect to $\mu$, while $\bar{\nu}_c-\nu_c$ is absolutely continuous with respect to $\mu$. But the only way for this to happen is for them both to be zero, and thus $\nu_s=\bar{\nu}_s$ and $\nu_c=\bar{\nu}_c$.
Today we’ll look at a couple corollaries of the Radon-Nikodym chain rule.
First up, we have an analogue of the change of variables formula, which was closely tied to the chain rule in the first place. If $\lambda$ and $\mu$ are totally $\sigma$-finite signed measures with
$\mu\ll\lambda$, and if $f$ is a finite-valued $\mu$-integrable function, then
$\displaystyle\int f\,d\mu=\int f\frac{d\mu}{d\lambda}\,d\lambda$
which further justifies the substitution of one "differential measure" for another.
So, define a signed measure $\nu$ as the indefinite integral of $f$. Immediately we know that $\nu$ is totally $\sigma$-finite and that $\nu\ll\mu$. And, obviously, $f$ is the Radon-Nikodym derivative of $\nu$ with respect to $\mu$. Thus we can invoke the above chain rule to conclude that $\lambda$-a.e. we have

$\displaystyle\frac{d\nu}{d\lambda}=\frac{d\nu}{d\mu}\frac{d\mu}{d\lambda}=f\frac{d\mu}{d\lambda}$

We then know that for every measurable $E$

$\displaystyle\int\limits_Ef\,d\mu=\nu(E)=\int\limits_Ef\frac{d\mu}{d\lambda}\,d\lambda$

and the substitution formula follows by putting $X$ in for $E$.

Secondly, if $\mu$ and $\nu$ are totally $\sigma$-finite signed measures so that $\mu\equiv\nu$ — that is, $\mu\ll\nu$ and $\nu\ll\mu$ — then $\mu$-a.e. we have

$\displaystyle\frac{d\mu}{d\nu}=\left(\frac{d\nu}{d\mu}\right)^{-1}$

Indeed, $\mu\ll\mu$, and by definition we have

$\displaystyle\mu(E)=\int\limits_E1\,d\mu$

so $1$ serves as the Radon-Nikodym derivative of $\mu$ with respect to itself. Putting this into the chain rule immediately gives us the desired result.
Today we take the Radon-Nikodym derivative and prove that it satisfies an analogue of the chain rule.
If $\lambda$, $\mu$, and $\nu$ are totally $\sigma$-finite signed measures so that $\nu\ll\mu$ and $\mu\ll\lambda$, then $\lambda$-a.e. we have

$\displaystyle\frac{d\nu}{d\lambda}=\frac{d\nu}{d\mu}\frac{d\mu}{d\lambda}$

By the linearity we showed last time, if this holds for the upper and lower variations of $\nu$ then it holds for $\nu$ itself, and so we may assume that $\nu$ is also a measure. We can further simplify by using Hahn decompositions with respect to both $\lambda$ and $\mu$, passing to subspaces on which each of our signed measures has a constant sign. We will from here on assume that $\lambda$ and $\mu$ are (positive) measures; the case where one (or the other, or both) has a constant negative sign has a similar proof.

Let's also simplify things by writing

$\displaystyle f=\frac{d\nu}{d\mu}\qquad g=\frac{d\mu}{d\lambda}$

Since $\mu$ and $\nu$ are both non-negative there is also no loss of generality in assuming that $f$ and $g$ are everywhere non-negative.

So, let $\{f_n\}$ be an increasing sequence of non-negative simple functions converging pointwise to $f$. Then monotone convergence tells us that

$\displaystyle\lim\limits_{n\to\infty}\int\limits_Ef_n\,d\mu=\int\limits_Ef\,d\mu\qquad\lim\limits_{n\to\infty}\int\limits_Ef_ng\,d\lambda=\int\limits_Efg\,d\lambda$

for every measurable $E$. For every measurable set $F$ we find that

$\displaystyle\int\limits_E\chi_F\,d\mu=\mu(E\cap F)=\int\limits_{E\cap F}\,d\mu=\int\limits_{E\cap F}g\,d\lambda=\int\limits_E\chi_Fg\,d\lambda$

and so for all the simple $f_n$ we conclude that

$\displaystyle\int\limits_Ef_n\,d\mu=\int\limits_Ef_ng\,d\lambda$

Passing to the limit, we find that

$\displaystyle\nu(E)=\int\limits_Ef\,d\mu=\int\limits_Efg\,d\lambda$

and so the product $fg$ serves as the Radon-Nikodym derivative of $\nu$ in terms of $\lambda$, and it's uniquely defined $\lambda$-almost everywhere.
Okay, so the Radon-Nikodym theorem and its analogue for signed measures tell us that if we have two $\sigma$-finite signed measures $\mu$ and $\nu$ with $\nu\ll\mu$, then there's some function $f$ so that

$\displaystyle\nu(E)=\int\limits_Ef\,d\mu$

But we also know that by definition

$\displaystyle\nu(E)=\int\limits_E\,d\nu$

If both of these integrals were taken with respect to the same measure, we would know that the equality

$\displaystyle\int\limits_Ef\,d\mu=\int\limits_Eg\,d\mu$

for all measurable $E$ implies that $f=g$ $\mu$-almost everywhere. The same thing can't quite be said here, but it motivates us to say that in some sense we have equality of "differential measures" $d\nu=f\,d\mu$. In and of itself this doesn't really make sense, but we define the symbol

$\displaystyle f=\frac{d\nu}{d\mu}$

and call it the "Radon-Nikodym derivative" of $\nu$ by $\mu$. Now we can write

$\displaystyle\nu(E)=\int\limits_Ef\,d\mu=\int\limits_E\frac{d\nu}{d\mu}\,d\mu$

The left equality is the Radon-Nikodym theorem, and the right equality is just the substitution of the new symbol for $f$. Of course, this function — and the symbol $\frac{d\nu}{d\mu}$ — is only defined uniquely $\mu$-almost everywhere.

The notation and name are obviously suggestive of differentiation, and indeed the usual laws of derivatives hold. We'll start today with the easy property of linearity.

That is, if $\nu_1$ and $\nu_2$ are both $\sigma$-finite signed measures, and if $a_1$ and $a_2$ are real constants, then $a_1\nu_1+a_2\nu_2$ is clearly another $\sigma$-finite signed measure. Further, it's not hard to see that if $\nu_i\ll\mu$ then $a_1\nu_1+a_2\nu_2\ll\mu$ as well. By the Radon-Nikodym theorem we have functions $f_1$ and $f_2$ so that

$\displaystyle\nu_i(E)=\int\limits_Ef_i\,d\mu$

for all measurable sets $E$. Then it's clear that

$\displaystyle\left[a_1\nu_1+a_2\nu_2\right](E)=\int\limits_E\left(a_1f_1+a_2f_2\right)\,d\mu$

That is, $a_1f_1+a_2f_2$ can serve as the Radon-Nikodym derivative of $a_1\nu_1+a_2\nu_2$ with respect to $\mu$. We can also write this in our suggestive notation as

$\displaystyle\frac{d(a_1\nu_1+a_2\nu_2)}{d\mu}=a_1\frac{d\nu_1}{d\mu}+a_2\frac{d\nu_2}{d\mu}$

which equation holds $\mu$-almost everywhere.
Now that we've proven the Radon-Nikodym theorem, we can extend it to the case where $\mu$ is a $\sigma$-finite signed measure.

Indeed, let $X=A\uplus B$ be a Hahn decomposition for $\mu$. We find that $\mu^+$ is a $\sigma$-finite measure on $A$, while $\mu^-$ is a $\sigma$-finite measure on $B$.

As it turns out, $\nu\ll\mu^+$ on $A$, while $\nu\ll\mu^-$ on $B$. For the first case, let $E\subseteq A$ be a set for which $\mu^+(E)=0$. Since $E\cap B=\emptyset$, we must have $\mu^-(E)=0$, and so $\lvert\mu\rvert(E)=\mu^+(E)+\mu^-(E)=0$. Then by absolute continuity, we conclude that $\nu(E)=0$, and thus $\nu\ll\mu^+$ on $A$. The proof that $\nu\ll\mu^-$ on $B$ is similar.

So now we can use the Radon-Nikodym theorem to show that there must be functions $f_A$ on $A$ and $f_B$ on $B$ so that

$\displaystyle\begin{aligned}\nu(E\cap A)=&\int\limits_{E\cap A}f_A\,d\mu^+\\\nu(E\cap B)=&\int\limits_{E\cap B}f_B\,d\mu^-=-\int\limits_{E\cap B}-f_B\,d\mu^-\end{aligned}$

We define a function $f$ on all of $X$ by $f(x)=f_A(x)$ for $x\in A$ and $f(x)=-f_B(x)$ for $x\in B$. Then we can calculate

$\displaystyle\begin{aligned}\nu(E)&=\nu((E\cap A)\uplus(E\cap B))\\&=\nu(E\cap A)+\nu(E\cap B)\\&=\int\limits_{E\cap A}f_A\,d\mu^+-\int\limits_{E\cap B}-f_B\,d\mu^-\\&=\int\limits_{E\cap A}f\,d\mu^+-\int\limits_{E\cap B}f\,d\mu^-\\&=\int\limits_Ef\,d\mu\end{aligned}$

which is exactly the conclusion of the Radon-Nikodym theorem for the signed measure $\mu$.
Today we set about the proof of the Radon-Nikodym theorem. We assumed that $(X,\mathcal{S},\mu)$ is a $\sigma$-finite measure space, and that $\nu$ is a $\sigma$-finite signed measure. Thus we can write $X$ as the countable union of subsets on which both $\mu$ and $\nu$ are finite, and so without loss of generality we may as well assume that they're finite to begin with.

Now, if we assume for the moment that we're correct and an $f$ does exist so that $\nu$ is its indefinite integral, then the fact that $\nu$ is finite means that $f$ is integrable, and then if $g$ is any other such function we can calculate

$\displaystyle\int\limits_E(f-g)\,d\mu=\nu(E)-\nu(E)=0$

for every measurable $E\in\mathcal{S}$. Now we know that this implies $f-g=0$ a.e., and thus the uniqueness condition we asserted will hold.

Back to the general case, we know that the absolute continuity $\nu\ll\mu$ is equivalent to the conjunction of $\nu^+\ll\mu$ and $\nu^-\ll\mu$, and so we can reduce to the case where $\nu$ is a finite measure, not just a finite signed measure.

Now we define the collection $\mathcal{K}$ of all nonnegative functions $f$ which are integrable with respect to $\mu$, and for which we have

$\displaystyle\int\limits_Ef\,d\mu\leq\nu(E)$

for every measurable $E$. We define

$\displaystyle\alpha=\sup\limits_{f\in\mathcal{K}}\int f\,d\mu$

Since $\alpha$ is the supremum, we can find a sequence $\{f_n\}$ of functions in $\mathcal{K}$ so that

$\displaystyle\lim\limits_{n\to\infty}\int f_n\,d\mu=\alpha$

For each $n$ we define

$\displaystyle g_n(x)=\max\limits_{1\leq i\leq n}f_i(x)$

Now if $E$ is some measurable set we can break it into the finite disjoint union of $n$ sets $E_i$ so that $g_n=f_i$ on $E_i$. Thus we have

$\displaystyle\int\limits_Eg_n\,d\mu=\sum\limits_{i=1}^n\int\limits_{E_i}f_i\,d\mu\leq\sum\limits_{i=1}^n\nu(E_i)=\nu(E)$

and so $g_n\in\mathcal{K}$.

We can write $g_n=g_{n-1}\cup f_n$, which tells us that the sequence $\{g_n\}$ is increasing. We define $f_0$ to be the limit of the $g_n$ — $f_0(x)$ is the supremum of all the $f_i(x)$ — and use the monotone convergence theorem to tell us that

$\displaystyle\int\limits_Ef_0\,d\mu=\lim\limits_{n\to\infty}\int\limits_Eg_n\,d\mu$

Since all of the integrals on the right are bounded above by $\nu(E)$, their limit is as well, and $f_0\in\mathcal{K}$. Further, we can tell that the integral of $f_0$ over all of $X$ must be $\alpha$.

Since $f_0$ is integrable, it must be equal $\mu$-a.e. to some finite-valued function $f$. What we must now show is that if we define

$\displaystyle\nu_0(E)=\nu(E)-\int\limits_Ef\,d\mu$

then $\nu_0$ is identically zero.

If it's not identically zero, then by the lemma from yesterday there is a positive number $\epsilon$ and a set $A$ so that $\mu(A)>0$ and so that

$\displaystyle\epsilon\mu(E\cap A)\leq\nu_0(E\cap A)=\nu(E\cap A)-\int\limits_{E\cap A}f\,d\mu$

for every measurable set $E$. If we define $g=f+\epsilon\chi_A$, then

$\displaystyle\int\limits_Eg\,d\mu=\int\limits_Ef\,d\mu+\epsilon\mu(E\cap A)\leq\int\limits_{E\setminus A}f\,d\mu+\nu(E\cap A)\leq\nu(E)$

for every measurable set $E$, which means that $g\in\mathcal{K}$. But

$\displaystyle\int g\,d\mu=\int f\,d\mu+\epsilon\mu(A)>\alpha$

which contradicts the maximality of the integral of $f$. Thus $\nu_0$ must be identically zero, and the proof is complete.
Before the main business, a preliminary lemma: if $\mu$ and $\nu$ are totally finite measures so that $\nu$ is absolutely continuous with respect to $\mu$, and $\nu$ is not identically zero, then there is a positive number $\epsilon$ and a measurable set $A$ so that $\mu(A)>0$ and $A$ is a positive set for the signed measure $\nu-\epsilon\mu$. That is, we can subtract off a little bit (but not zero!) of $\mu$ from $\nu$ and still find a non-$\mu$-negligible set on which what remains is completely positive.

To show this, let $X=A_n\uplus B_n$ be a Hahn decomposition with respect to the signed measure $\nu-\frac{1}{n}\mu$ for each positive integer $n$. Let $A_0$ be the union of all the $A_n$ and let $B_0$ be the intersection of all the $B_n$. Then since $B_0\subseteq B_n$ and $B_n$ is negative for $\nu-\frac{1}{n}\mu$ we find

$\displaystyle0\leq\nu(B_0)\leq\frac{1}{n}\mu(B_0)$

for every positive integer $n$. This shows that we must have $\nu(B_0)=0$. And then, since $\nu$ is not identically zero we must have $\nu(A_0)=\nu(X\setminus B_0)>0$. By absolute continuity we conclude that $\mu(A_0)>0$, which means that we must have $\mu(A_n)>0$ for at least one value of $n$. So we pick just such a value, set $A=A_n$ and $\epsilon=\frac{1}{n}$, and everything we asserted is true.

Now for the Radon-Nikodym Theorem: we let $(X,\mathcal{S},\mu)$ be a totally $\sigma$-finite measure space and let $\nu$ be a $\sigma$-finite signed measure on $\mathcal{S}$ which is absolutely continuous with respect to $\mu$. Then $\nu$ is an indefinite integral. That is, there is a finite-valued measurable function $f:X\to\mathbb{R}$ so that

$\displaystyle\nu(E)=\int\limits_Ef\,d\mu$

for every measurable set $E$. The function $f$ is unique in the sense that if any other function $g$ has $\nu$ as its indefinite integral, then $f=g$ $\mu$-almost everywhere. It should be noted that we don't assert that $f$ is integrable, which will only be true if $\nu$ is actually finite. However, either its positive or its negative integral must converge or we wouldn't use the integral sign for a divergent integral.

Let's take a moment and consider what this means. We know that if we take an integrable function $f$, or a function whose integral diverges definitely, on a $\sigma$-finite measure space and define its indefinite integral $\nu$, then $\nu$ is a $\sigma$-finite signed measure that is absolutely continuous with respect to the measure against which we integrate. What the Radon-Nikodym theorem tells us is that any such signed measure arises as the indefinite integral of some such function $f$. Further, it tells us that such a function is essentially unique, as much as any function is in measure theory land. In particular, we can tell that if we start with a function $f$ and get its indefinite integral, then any other function with the same indefinite integral must be a.e. equal to $f$.
Another relation between signed measures besides absolute continuity — indeed, in a sense the opposite of absolute continuity — is singularity. We say that two signed measures $\mu$ and $\nu$ are "mutually singular" and write $\mu\perp\nu$ if there exists a partition of $X$ into two sets $A\uplus B=X$ so that for every measurable set $E$ the intersections $A\cap E$ and $B\cap E$ are measurable, and

$\displaystyle\lvert\mu\rvert(A\cap E)=0=\lvert\nu\rvert(B\cap E)$

We sometimes just say that $\mu$ and $\nu$ are singular, or that (despite the symmetry of the definition) "$\nu$ is singular with respect to $\mu$", or vice versa.

In a manner of speaking, if $\mu$ and $\nu$ are mutually singular then all of the sets that give $\mu$ a nonzero value are contained in $B$, while all of the sets that give $\nu$ a nonzero value are contained in $A$, and the two never touch. In contradistinction to absolute continuity, not only does the vanishing of $\lvert\mu\rvert$ not imply the vanishing of $\lvert\nu\rvert$, but if we pare away portions of a set for which $\lvert\nu\rvert$ gives zero measure then what remains — essentially the only sets for which $\lvert\nu\rvert$ doesn't automatically vanish — is necessarily a set for which $\lvert\mu\rvert$ does vanish. Another way to see this is to notice that if $\mu$ and $\nu$ are signed measures with both $\nu\ll\mu$ and $\nu\perp\mu$, then we must necessarily have $\nu=0$; singularity says that $\nu$ must vanish on any set $E$ with $\lvert\mu\rvert(E)\neq0$, and absolute continuity says $\nu$ must vanish on any set $E$ with $\lvert\mu\rvert(E)=0$.

As a quick and easy example, let $\mu^+$ and $\mu^-$ be the Jordan decomposition of a signed measure $\mu$. Then a Hahn decomposition for $\mu$ gives exactly such a partition $X=A\uplus B$ showing that $\mu^+\perp\mu^-$.

One interesting thing is that singular measures can be added. That is, if $\nu_1$ and $\nu_2$ are both singular with respect to $\mu$, then $(\nu_1+\nu_2)\perp\mu$. Indeed, let $X=A_1\uplus B_1$ and $X=A_2\uplus B_2$ be decompositions showing that $\nu_1\perp\mu$ and $\nu_2\perp\mu$, respectively. That is, for any measurable set $E$ we have

$\displaystyle\begin{aligned}\nu_1(A_1\cap E)&=0\\\mu(B_1\cap E)&=0\\\nu_2(A_2\cap E)&=0\\\mu(B_2\cap E)&=0\end{aligned}$

Then we can write

$\displaystyle X=(A_1\cap A_2)\uplus\left((A_1\cap B_2)\uplus(A_2\cap B_1)\uplus(B_1\cap B_2)\right)$

It's easy to check that $\nu_1+\nu_2$ must vanish on measurable subsets of $A_1\cap A_2$, and that $\mu$ must vanish on measurable subsets of the remainder of $X$.
|
{"url":"http://unapologetic.wordpress.com/2010/07/page/2/","timestamp":"2014-04-21T07:07:57Z","content_type":null,"content_length":"164906","record_id":"<urn:uuid:c9fbaeee-d8bb-4af0-a44e-9d7ca58f33e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Woodlands, TX ACT Tutor
Find a The Woodlands, TX ACT Tutor
...Once you learn the basics, you can easily tackle more advanced areas. My approach uses real world problems and how to apply algebraic principles to solve those problems. Writing clearly is a
necessity for almost all scientific, technical and business careers.
20 Subjects: including ACT Math, writing, algebra 1, algebra 2
...Students will use a variety of tools to engage their analytical and thinking abilities, learn how language is processed in the brain and how to use scientifically-proven techniques to form new
language acquisition skills. Additionally, students study how word parts, syllables, roots and affixes ...
39 Subjects: including ACT Math, Spanish, English, chemistry
...Every student has the potential to be a successful learner. I'm here to help struggling students maximize that potential. I worked as a substitute teacher in grades K-12 for 4 years.
73 Subjects: including ACT Math, reading, chemistry, English
I have been teaching math for 43 years. During the summers I am an AP Reader for the Calculus AP test. I have taught AP Calculus AB & BC and AP Statistics, also Dual Credit Trigonometry,
Pre-Calculus, College Algebra, Calculus I & II.
6 Subjects: including ACT Math, calculus, algebra 2, trigonometry
...I also have worked with Autistic students and am currently being trained to lead our Autism program. I am currently being trained to lead the Autism program at my high school and have
extensive experience working with Autistic students. Through my experience working in Special Education, I have worked with numerous students that are afflicted with Dyslexia.
32 Subjects: including ACT Math, reading, ASVAB, finance
Related The Woodlands, TX Tutors
The Woodlands, TX Accounting Tutors
The Woodlands, TX ACT Tutors
The Woodlands, TX Algebra Tutors
The Woodlands, TX Algebra 2 Tutors
The Woodlands, TX Calculus Tutors
The Woodlands, TX Geometry Tutors
The Woodlands, TX Math Tutors
The Woodlands, TX Prealgebra Tutors
The Woodlands, TX Precalculus Tutors
The Woodlands, TX SAT Tutors
The Woodlands, TX SAT Math Tutors
The Woodlands, TX Science Tutors
The Woodlands, TX Statistics Tutors
The Woodlands, TX Trigonometry Tutors
|
{"url":"http://www.purplemath.com/The_Woodlands_TX_ACT_tutors.php","timestamp":"2014-04-16T13:26:36Z","content_type":null,"content_length":"23795","record_id":"<urn:uuid:5fe5e539-25ae-491e-9740-f1480059aec0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does this series converge or diverge?
October 29th 2009, 05:18 PM #1
Sep 2008
Does this series converge or diverge?
Does this series converge or diverge?
1 / ( (ln(n)) ^ (ln(n)) )
I'm not really sure how to approach this. It doesn't look like you can solve an indefinite integral, I can't find anything to use for the comparison test, and the ratio test doesn't really
simplify. What should I do? Thank you!
Does this series converge or diverge?
1 / ( (ln(n)) ^ (ln(n)) )
I'm not really sure how to approach this. It doesn't look like you can solve an indefinite integral, I can't find anything to use for the comparison test, and the ratio test doesn't really
simplify. What should I do? Thank you!
You're forgetting, apparently, the very nice Cauchy's Condensation Test: if a sequence $\{a_n\}$ is positive and monotonically descending to zero, the series $\sum\limits_{n=1}^\infty a_n$ converges iff the series $\sum\limits_{n=1}^\infty 2^na_{2^n}$ converges (and we can take any prime p instead of 2).
Apply this to your series and you'll find out it converges (after, perhaps, you apply the n-th root test to the result of taking $2^na_{2^n}$)
We never really learned that, so I would probably get docked points for not using a technique I already know.
That is a practice that many people don’t understand. But in many schools it is widely practiced. My guess is that your textbook & instructor want you to use the comparison test.
Here is the trick I learned thanks to Gillman's text: since $x^{\ln y}=y^{\ln x}$ for positive $x$ and $y$, we have $(\ln n)^{\ln n}=n^{\ln\ln n}$. Once $n$ is large enough that $\ln\ln n\ge 2$ (i.e. $n\ge e^{e^2}$), the terms satisfy $\frac{1}{(\ln n)^{\ln n}}=\frac{1}{n^{\ln\ln n}}\le\frac{1}{n^2}$, so the series converges by comparison with $\sum\frac{1}{n^2}$.
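A quick numerical illustration in Python (a sanity check only, not a proof — the partial sums should visibly settle down):

import math

# a_n = (ln n)^(-ln n) = n^(-ln ln n); once ln(ln n) >= 2, i.e. n >= e^(e^2) ~ 1619,
# the terms are dominated by 1/n^2, so the partial sums should level off.
partial = 0.0
for n in range(2, 100001):
    partial += math.log(n) ** (-math.log(n))
    if n in (10, 100, 1000, 10000, 100000):
        print(f"S_{n} = {partial:.6f}")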
|
{"url":"http://mathhelpforum.com/calculus/111265-does-series-converge-diverge.html","timestamp":"2014-04-18T09:43:00Z","content_type":null,"content_length":"45156","record_id":"<urn:uuid:899fdb73-c54f-4eac-b44b-046d5fdcf2d0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] help(xxx) vs print(xxx.__doc__)
David M. Cooke cookedm at physics.mcmaster.ca
Thu Feb 23 18:28:00 CST 2006
"Bill Baxter" <wbaxter at gmail.com> writes:
> Can someone explain why help(numpy.r_) doesn't contain all the information in
> print(numpy.r_.__doc__)?
> Namely you don't get the helpful example showing usage with 'help' that you get
> with '.__doc__'.
> I'd rather be able to use 'help' as the one-stop shop for built-in
> documentation. It's less typing and just looks nicer.
Huh, odd. Note that in IPython, numpy.r_? and numpy.r_.__doc__ give
the same results.
And I thought I was being clever when I rewrote numpy.r_ :-) Looks
like help() looks at the class __doc__ first, while IPython looks at
the object's __doc__ first.
I've fixed this in svn.
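[For the curious, a minimal standalone sketch of the lookup difference described above — the class here is made up for illustration, it is not numpy's actual r_ implementation:

class Concatenator:
    """Class docstring: this is what help() documents."""

obj = Concatenator()
obj.__doc__ = "Instance docstring: usage examples would go here."

print(obj.__doc__)   # prints the instance docstring, like IPython's obj?
help(obj)            # pydoc documents the class, so the example text is lost]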
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-February/018920.html","timestamp":"2014-04-20T03:36:10Z","content_type":null,"content_length":"3816","record_id":"<urn:uuid:69b91893-c646-4a29-b61b-b29de183ad6e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simple square-root function if you don't want to use cmath
01-14-2003 #16
It is possible to write a function that will find a square root in a reasonable amount of time. The catch is that you'll have to use Calculus to do it. It's called Newton's Method. Newton's Method is used to approximate the zeroes of a function. In the case of finding a square root, you'd want to find the positive zero of a quadratic function,
f(x) = x^2 - num
where num is the number that you want to find the square root of. I know there's a coded solution to this in my textbook, but you should look it up yourself. To use Newton's Method you basically have to keep guessing at the zero until you get closer and closer to it. It's pretty fast and reasonably easy to code. You can also set up the calculations to continue until a certain level of precision is reached. So, go google Newton's Method for approximating zeroes. Good luck!
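A rough sketch of that idea (in Python rather than C++, and not tuned for production use):

# Newton's method on f(x) = x^2 - num: the update x <- x - f(x)/f'(x)
# simplifies to x <- (x + num/x) / 2, which converges quadratically.
def newton_sqrt(num, tolerance=1e-12):
    if num < 0:
        raise ValueError("num must be non-negative")
    if num == 0:
        return 0.0
    guess = max(num, 1.0)            # any positive starting guess will do
    while abs(guess * guess - num) > tolerance * max(num, 1.0):
        guess = (guess + num / guess) / 2.0
    return guess

print(newton_sqrt(2.0))   # ~1.41421356...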
"The computer programmer is a creator of universes for which he alone is responsible. Universes of virtually unlimited complexity can be created in the form of computer programs." -- Joseph
"If you cannot grok the overall structure of a program while taking a shower, you are not ready to code it." -- Richard Pattis.
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/32410-simple-square-route-function-if-you-don%27t-want-use-cmath-2.html","timestamp":"2014-04-20T01:17:42Z","content_type":null,"content_length":"41806","record_id":"<urn:uuid:872e2a8c-02c7-400d-b50b-2942673befe2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graph examples
To understand how to use the interactive graphs, try some of these examples:
Basic unmodified data
To start with, here are some of the data sets in their unmodified form:
Selecting parts of the data
Sometimes we might only be interested in particular parts of the data:
Cleaning up the data
The raw data is quite 'noisy', so to see patterns we need to clean it up:
In particular, notice how the annual mean of the CO[2] completely removes the annual oscillation.
Deeper cleaning
Sometimes we need to do some deeper cleaning to find longer-term patterns:
Note the bowl curve in the detrended CO[2] shows that the rate of increase has not been constant, and has increased over the period.
Capturing the detail
Alternatively, sometimes the detail is the interesting part:
Estimating trends
The most interesting thing might be just the general trend over time:
It's usually better to plot trend lines together with the data, because the auto-scaling of the graphs means they always look the same otherwise - although the vertical scale is useful, of course.
Fourier analysis
Fourier analysis is a very powerful technique, but needs care to get right. Put simply, Fourier analysis divides a series of data into its individual waves of different frequencies. We can then study
this "frequency domain", or manipulate it, and then convert back to the real-world "time domain" by reversing the process:
In the mix
One of the most interesting things we can do is compare different datasets. By clicking "Add series" you can add multiple series on the same graph. Each one can have different processes applied to
it, but you do have to be careful that the data is still comparable afterwards. Also, the time range and values cover the maximum of any series, so get any detail you may have to ensure they cover
the same range. Here are some examples:
In the Fourier example, we normalise the signal so it fits cleanly on the same graph. We can still compare the peaks and troughs of the signals, but the relative sizes are meaningless.
Test signals
To help test the system and demonstrate the processes, you can start with some internal test signals:
|
{"url":"http://www.woodfortrees.org/examples","timestamp":"2014-04-16T13:09:53Z","content_type":null,"content_length":"8904","record_id":"<urn:uuid:66cfcfb9-fea5-4950-bef9-dc243b405814>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
List of Friend Links
1. School Asia
(Asian Learning/Teaching Technology Resources, Math/Science/English Lesson Plans)
2. Hawthorn WJEC and Further Mathematics Wiki Page
(Further Mathematics Resource Site)
3. Revision World GCSE Maths Page
Revision World A Level Maths Page
( Secondary Level Maths And Science)
(O and A Level Physics Resource Site)
6. Connect@KMTC (Mathematics Specialists)
(O Level Mathematics Solved Problems Site)
( O Level A Maths, E Maths, Biology, Physics, Chemistry, English Resource Site)
( Physical and Human Geography Resources)
9. MathsFiles
( Autograph and Excel software Implementation in Mathematics Learning)
10. Mathorama
( In-depth learning site for Trigonometry, Vectors, Pre-Calculus and Finance)
11. Physics 24/7
(Collection of solved Physics homework problems/quizzes catering to a wide range of topics)
12. Singapore A Level Geography Database
(Repository of Geography related news, examination resources and advice on further studies)
13. Mathemazier
(Collection of Maths Olympiad Problems, solutions to selected puzzles and Maths-related humor)
(Comprehensive H2 Maths tuition blog with discussions on a multitude of interesting problems)
15. HegartyMaths
(Collection of A Level and GCSE Maths video tutorials)
|
{"url":"http://www.a-levelmaths.com/listoffriendlinks.htm","timestamp":"2014-04-21T00:29:09Z","content_type":null,"content_length":"15536","record_id":"<urn:uuid:9c1ac2af-c3e4-4218-b4c3-7a801a222b61>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A. Governing equations and global flow quantities
B. Flow domain and boundary condition
C. Numerical method and grid system
A. Flow pattern modification
B. Hydrodynamic force characteristics
C. Effects of oscillation frequency, amplitude, and vertical distance
D. Effect of Reynolds number
E. Effectiveness of drag reduction
F. Wake control mechanism behind streamwise oscillating foil
|
{"url":"http://scitation.aip.org/content/aip/journal/pof2/25/5/10.1063/1.4802042","timestamp":"2014-04-24T17:36:02Z","content_type":null,"content_length":"106036","record_id":"<urn:uuid:ca0b1d0c-6b86-42b4-abac-00212ddef9fd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
REU 1992
REU 1992: Critical points of real polynomials (A. Durfee)
The REU project for 1992 directed by Alan Durfee continued the work of Durfee's 1989 REU group. The basic topic was the following: Given a real polynomial f(x,y) of two variables of degree d, with
only nondegenerate critical points, what possible combinations of saddles, local maxima and local minima can occur? A detailed description of this problem can be found at the 1989 site. The 1992
group, among other things, found a polynomial with an arbitrary number of local maxima and no other critical points. They also investigated and implemented computer algorithms for finding these
critical points, and studied their graphical representation.
Student participants
• Dana Fabbri, University of Massachusetts '93
• Thomas Feng, Yale University '93
• Ian Robertson, Oberlin College '93
• Sylvia Rolloff, Mount Holyoke College '93
Currently Thomas Feng is a graduate student in mathematics at Princeton University, email tfeng@math.princeton.edu, and Ian Robertson is a graduate student in mathematics at the University of
Chicago, email ian@math.uchicago.edu. (The picture is of all three 1992 REU groups.)
The group produced the following reports:
Dana Fabbri and Silvia Rolloff, A graphical look at polynomials of two variables up to degree three at infinity
Abstract: A study of the behavior at infinity of polynomials with many pictures. The graphs are "scrunched" using the arctan function. (Hardcopy available from the Department of Mathematics,
Mount Holyoke College, S. Hadley MA 01075)
Thomas Feng, Report
Abstract: This report discusses three topics: First, an efficient algorithm for finding simultaneous roots of two real polynomial equations in the plane; second, an improved upper bound on the
number of roots in a special case; and third, a method for combining critical points. (postscript)
Ian Robertson, A polynomial with n maxima and no other critical points
Abstract: This short paper gives an explicit real polynomial of two variables with an arbitrary number of local maxima and no other critical points. Previously known (and easier to construct) were
a polynomial with an arbitrary number of local extrema and no other critical points, a polynomial with two local maxima and no other critical points, and an analytic function with an arbitrary
number of local maxima. (postscript)
Abstract: An outline is given of an algorithm developed by Paul Pedersen for counting (without multiplicity) the number of real roots of a discrete algebraic set that lie within a region defined
by the positivity of some polynomial. The algorithm can be applied to any dimension, though the description here will be confined to dimension two. (postscript)
Ian Robertson, Report
Abstract: An overview of Robertson's activities at the REU. (postscript)
[ REU home page ] [ List of REU projects since 1988 ] [ A. Durfee home page ]
|
{"url":"http://www.mtholyoke.edu/~adurfee/reu/92/reu92.htm","timestamp":"2014-04-16T08:02:46Z","content_type":null,"content_length":"3888","record_id":"<urn:uuid:4d459b4f-4644-48f4-8c84-7e80132d53f2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MASS - Colloquia 2011
Thursday, John Roe, The Pennsylvania State University
January 20
2:30pm Morley’s theorem
MORLEY’S THEOREM is a result in “classical” plane geometry - but its first proof was given in modern times! It states that for any triangle at all, the trisectors of successive angles
ABSTRACT meet at the vertices of an equilateral triangle. I’ll explain how the great French mathematician Alain Connes was motivated to give a new proof by a lunch-time conversation - and how
Napoleon Bonaparte comes into the story as well.
Thursday, George Andrews, The Pennsylvania State University
April 21
2:30pm Ramanujan, Fibonacci numbers, and continued fractions
ABSTRACT This talk focuses on the famous Indian genius, Ramanujan. The first part of the talk will give some account of his meteoric rise and early death. Then we shall lead gently from some
simple problems involving Fibonacci numbers to a discussion of some of Ramanujan's achievements.
Thursday, Sergei Tabachnikov, The Pennsylvania State University
February 3
2:30pm Equiareal dissections
ABSTRACT If a square is dissected into triangles of equal areas then the number of triangles is necessarily even. This "innocently" looking result is surprisingly recent (about 40 years old), and
its only known proof is surprisingly non-trivial: it involves ideas from combinatorial topology and number theory. I shall outline a proof and discuss various variations on this theme.
Thursday, February 17 Anatole Katok, The Pennsylvania State University
2:30pm Billiard table as a mathematician's playground
ABSTRACT The title of this lecture may be understood in two ways. Literally, in a somewhat lighthearted way: mathematicians play by launching billiard balls on tables of various forms and observe (and also try to predict) what happens. In a more serious sense, the expression ``playground'' should be understood as ``testing grounds'': various questions, conjectures, methods of solution, etc. in the theory of dynamical systems are ``tested'' on various types of billiard problems. I will try to demonstrate using some accessible examples that at least the second interpretation deserves serious attention.
Thursday, March 3 Vladimir Dragovic, MI SANU Belgrade/ GFM, University of Lisbon
2:30pm Theorems of Poncelet and Marden -- two of the most beautiful theorems
ABSTRACT We are going to present cases of the Siebeck-Marden theorem, from the geometric theory of polynomials, and of the Poncelet theorem, one of the most important results about pencils of conics. We are also going to discuss a recently observed relationship between these statements.
Thursday, Vaughn Climenhaga, University of Maryland visiting the Pennsylvania State University
March 17
2:30pm The bigness of things
ABSTRACT It is very natural to ask how "big" something is, but answering this question properly in various settings often requires some new ideas. We will explore this question for the Cantor set,
for which I'll explain why some more familiar notions of "bigness" are unsatisfactory and how a concept of "fractional dimension" arises.
Thursday, Omri Sarig, The Pennsylvania State University
March 31
2:30pm Symbolic dynamics
ABSTRACT "Symbolic dynamics" is a technique for studying chaotic dynamical systems. The idea is to associate to every orbit a sequence of symbols and then study the combinatorial properties of
the resulting sequences. I will describe a particular example: movement on a straight line on a negatively curved surface.
|
{"url":"http://www.math.psu.edu/mass/colloquia/pmass/2011/","timestamp":"2014-04-20T16:20:01Z","content_type":null,"content_length":"9751","record_id":"<urn:uuid:48d3a7de-836f-4bcc-80ba-7deb95a94779>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sound frequency and human hearing are key to understanding equalisation. The frequency of an event can be defined as the number of times that the event occurs within a given time period.
In the context of sound the given time period is one second, and the frequency, or event, is the number of times a wave form goes through one complete cycle. The time period is known as the periodic time.
By subtracting T1 from T2 (see graph) we get the periodic time in seconds. The frequency can then be calculated by dividing one by the periodic time. It is also possible to calculate the time from the frequency by doing the reverse.
Frequency can be expressed as the number of complete cycles a wave form goes through in one second. The simplest wave form is a sine wave; a sine wave can be seen in the period graph shown above.
Here is an example for a periodic time of 13ms: f = 1/T = 1/0.013 s ≈ 76.9 Hz.
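In code the calculation is a one-liner (Python, for illustration):

def frequency_hz(period_seconds):
    return 1.0 / period_seconds   # f = 1 / T, and T = 1 / f going back

print(frequency_hz(0.013))   # ~76.9 Hz for a 13 ms period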
When talking about frequency it is also important to understand that the human hearing range is from 20 to 20,000 cycles per second. The higher a given frequency, the shorter its wavelength.
It is much harder to localise low frequency sounds than higher frequency sounds.
You might also like to read this, Setting Mix levels in the Studio
Or possibly this Balanced Audio connections
|
{"url":"http://www.recordandplay.org/frequencyexplained.html","timestamp":"2014-04-17T01:21:19Z","content_type":null,"content_length":"19482","record_id":"<urn:uuid:0f9e8df6-c83a-427a-b788-b215a8f89503>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
|
September 20th 2007, 01:33 PM
Problems 35, 36, 47, and 48 I have been having some trouble with....
35 and 36
For B...i had to find the zeros with a calculator
for part c, I had to find the zeros algebraically
47 and 48
i had to find the zeros and max and min with a calculator
for 47, i am a little confused about the +- part....
I do not know is I am doing these correctly...if someone could take a look at my work, it would be greatly appreciated
September 20th 2007, 02:29 PM
on 35 and 36 the c's look good. that's all i have time to check.
September 21st 2007, 05:22 AM
47 looks good.
Where's the second zero on 48? (Look on the +x side.) And don't understand why you can't find max's and min's on this one? (See below.)
September 22nd 2007, 04:39 PM
How do I simplify part C on 36? would it be Sqrt(2)?
September 22nd 2007, 04:43 PM
I get 2 for the second zero on 48...max: (-2.1915, 19.688)...min: (-1.938E-6, 5)
September 22nd 2007, 04:44 PM
How do you know to use +-
or just + or just minus for the zeros and extrema
September 23rd 2007, 06:30 AM
I have another question, if maybe someone can help me
September 23rd 2007, 07:28 AM
September 23rd 2007, 07:35 AM
Check that zero. I'm getting it to be less than 2. (Or you might be rounding it off too severely.)
And take a look at that graph again: I'm getting two relative maxima, not one. However, what you have listed is good. (Though you could probably round that -1.938 x 10^(-6) to 0 if you wanted.)
September 23rd 2007, 07:36 AM
September 23rd 2007, 07:36 AM
September 23rd 2007, 09:21 AM
i get 1.93 now, for that second zero on 48, i see what i did wrong
September 23rd 2007, 09:24 AM
on 48, for the other maxima, i get: (.915, 5.646)
September 23rd 2007, 09:27 AM
I also forgot to simplify part c on 35.....i got 2 + or - sqrt(3)
September 23rd 2007, 11:11 AM
These all look good to me! :)
|
{"url":"http://mathhelpforum.com/pre-calculus/19261-quadratics-print.html","timestamp":"2014-04-16T12:01:54Z","content_type":null,"content_length":"16481","record_id":"<urn:uuid:4ec69123-8773-44a4-bcb0-82a7f1bb3418>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Haken-Kelso-Bunz model
From Scholarpedia
The Haken-Kelso-Bunz (HKB) Model was originally formulated in 1985 to account for some novel experimental observations on human bimanual coordination that revealed fundamental features of
self-organization: multistability, phase transitions (switching) and hysteresis, a primitive form of memory. Self-organization refers to the spontaneous formation of patterns and pattern change in a
nonequilibrium system composed of very many components that is open to the exchange of matter, energy and information with its surroundings. HKB uses the concepts of synergetics (order parameters,
control parameters, instability, etc) and the mathematical tools of nonlinearly coupled (nonlinear) dynamical systems to account for self-organized behavior both at the cooperative, coordinative
level and at the level of the individual coordinating elements. The HKB model stands as a building block upon which numerous extensions and elaborations have been constructed. In particular, it has
been possible to derive it from a realistic model of the cortical sheet in which neural areas undergo a reorganization that is mediated by intra- and inter-cortical connections (Jirsa, Fuchs &
Kelso, 1998; see also Fuchs, Jirsa & Kelso, 2000). HKB stands as one of the cornerstones of coordination dynamics, an empirically grounded theoretical framework that seeks to understand coordinated
behavior in living things.
The behaviors of animals and people are functionally ordered spatiotemporal patterns that arise in a system of very many neural, muscular and metabolic components that operate on different time
scales. The ordering is such that we are often able to classify it, like the gaits of a horse, for example, or the limited number of basic sounds (the so-called 'phonemes') that are common to all
languages. Given the ubiquity of coordinated behavior in living things, one might have expected its lawful basis to have been uncovered many years ago. Certainly attempts were made in the classical
works of scientists like C. S. Sherrington (1906), E. von Holst (1937), R.W. Sperry (1961) and N. Bernstein (1967). One drawback to progress has been the absence of a model system that affords the
precise analysis of behavioral patterns and pattern change both in terms of experimental data and theoretical tools. The HKB-model (after Haken, Kelso and Bunz) was the outcome of an experimental
program of research that aimed to understand: 1) the formation of ordered states of coordination in human beings; 2) the multistability of these observed states; and 3) the conditions that give rise
to switching among coordinative states (for review, see Kelso, 1995). Since its publication in 1985, the HKB model has been elaborated and extended in numerous ways and at several different levels of
analysis. Indeed, HKB is probably the most extensively tested quantitative model in the field of human movement behavior (Fuchs & Jirsa, 2008). Because it was the first to establish that coordination
in a complex biological system is an emergent, self-organized process and because it was able to derive emergent patterns of coordinated behavior from nonlinear interactions among the component
subsystems, HKB stands as a basic foundation for understanding coordination in living things.
Phase transitions ('switches') in coordinated movement
The experimental window into the self-organization of behavior was a paradigm introduced by S. Kelso (1981; 1984). HKB is the theoretical model that explicitly accounted for Kelso's observations and
in turn predicted additional aspects. First the basic empirical facts are described; then these observations are mapped onto an explicit model; then the model is derived from a level below, namely
the interacting subsystems. Kelso's original experiments dealt with rhythmical finger and hand movements in human beings. Many studies in humans and monkeys up to that time studied single limb
movements. The Kelso experiments required the coordination between the index fingers of both hands. This precise coordination of the hands requires the coordination within and between the hemispheres
of the brain, later studied using high density EEG and MEG arrays to record cortical activity (such work will not be described here, but see Kelso, et al. (1992) for original MEG work; Wallenstein,
Kelso & Bressler (1995) for EEG correlates, and Jirsa, Fuchs & Kelso (1998) for cortical modelling thereof). The kinematic characteristics of bimanual movements were monitored using infrared
light-emitting diodes attached to the moving parts and were detected by an optoelectronic camera system. On occasion the electromyographic activity of the muscles was also recorded using fine-wire
platinum electrodes (e.g. Kelso & Scholz, 1985), thereby allowing a detailed examination at both kinematic and neuromuscular levels. Subjects oscillated their fingers rhythmically in the transverse
plane (i.e., abduction-adduction) in one of two patterns, in-phase or anti-phase. In the former pattern, homologous muscles contract simultaneously; in the latter, the muscles contract in an
alternating fashion. Subjects increased the speed at which they performed these movements on their own, or followed a pacing metronome whose oscillation frequency was systematically increased from 1.25 Hz to
3.50 Hz in steps of .25Hz that lasted up to 10 sec. Subjects were instructed to produce one full cycle of movement with each finger for each beat of the metronome. The following features were observed:
• when the subject begins in the anti-phase mode and speed of movement is increased, a spontaneous switch to symmetrical, in-phase movement occurs;
• this transition happens swiftly at a certain critical frequency;
• after the switch has occurred and the movement rate is now decreased the subject remains in the symmetrical mode, i.e. she does not switch back;
• no such transitions occur if the subject begins with symmetrical, in-phase movements.
Thus, while humans are able to produce two patterns at low frequency values, only one--the symmetrical, in-phase mode--remains stable as frequency is scaled beyond a critical value. Questions of
practice and learning different patterns of behavior have been studied at both behavioral (e.g., Zanone & Kelso, 1992) and brain levels (e.g., Jantzen, et al., 2002) but would require another article
and will not be addressed further here.
Theoretical modeling: mapping behavior onto dynamics
The goal is to account for all the observed patterns of behavior with as small a number of theoretical concepts as possible. In order to understand the observed patterns and pattern switching, the
following questions must be addressed:
1. Given that very many things can be experimentally measured but not all are likely to be relevant, what are the essential coordination variables or order parameters and how can their dynamics be
characterized? Order parameters are quantities that allow for a usually low-dimensional description of the dynamical behavior of a complex, high-dimensional system.
2. What are the control parameters that move the system through its coordinative states?
3. How are the subsystems and their interactions to be described?
4. Given a concise model that captures key experimental features, what new observations does it predict?
In a first step, the relative phase or phase relation \(\phi\, \) between the fingers appears to be a suitable coordination variable or order parameter. The reasons are: \(\phi\, \) characterizes the
observed patterns of behavior; \(\phi\, \) changes abruptly at the transition and is only weakly dependent on parameters outside the transition; and \(\phi\, \) has very simple dynamics in which the
behavioral patterns may be characterized as attractors. Since the frequency of oscillation is followed closely in the experiments and does not appear to be dependent on the system (e.g. it has been
demonstrated to be effective also in studies of coordination between two people, Schmidt, et al., 1990), frequency is the control parameter.
The dynamics of \(\phi\, \) can thus be determined from a few basic postulates:
1. The observed patterns of behavior at \(\phi\, \)= 0 deg. and \(\phi\, \)= \(\pm\,\) 180 deg. are modelled as fixed point attractors. The dynamics are therefore assumed to be purely relaxational.
This is a minimality strategy in which only observed attractor types appear in the model;
2. The model must produce the observed patterns of relative phasing behavior: bistability at low frequencies, monostable beyond a critical frequency;
3. Only point attractors of the relative phase dynamics should appear;
4. Due to the fact that relative phase is a cyclic variable--meaning that if a multiple of 2 \(\pi\,\) is added or subtracted the system must remain unchanged--any equation of motion has to be
written in terms of periodic functions, i.e. sines and cosines. Thus a first symmetry argument dictates that the system must be invariant under shifts in the relative phase by multiples of 2 \(\pi\,\ .\) A general equation of motion with this property reads
\[\tag{1} \dot{\phi} = a_0 + \sum_{k=1}^{\infty} \{ a_k \cos(k\phi) + b_k \sin(k\phi) \} \quad \]
A second symmetry argument comes from the left-right symmetry of the bimanual system itself. Exchanging the left with the right finger and vice-versa does not change the observed phenomena. The model
is thus symmetric under the transformation \( \phi \to -\phi \ ,\) i.e. if we replace \( \phi \to -\phi \) the equation remains the same. The power of symmetries in science cannot be overstated: here
they restrict the equation of motion to a certain class of functions and even assist in eliminating half of them (the constant \( a_0 \) and the cosines). Of course much further work shows that this
symmetry is not perfect. Nature thrives on broken symmetry and coordinated movement is no exception. (Among the factors that have been experimentally demonstrated to break the symmetry of HKB are
handedness, hemispheric asymmetry, attentional allocation, intention to stabilize a particular finger-metronome relationship and so forth. All of these may be considered perturbations of HKB and may
be included in a fine tuning of the modeling procedure). The simplest possible equation of motion --the HKB model--that captures all the observed facts is
\[ \dot{\phi} = - a \sin\phi - 2 b \sin 2\phi \]
The minus signs in front of the coefficients and the factor 2 in front of the b make life easier because the relevant regions of the parameter space may now be given by a and b positive, and the
factor 2 allows the potential \( V (\phi) \) to be defined without fractions.
\[ V(\phi)=-a \cos\phi - b \cos 2 \phi \]
The equation of motion can be simplified further using rescaling, another powerful tool of nonlinear dynamical systems. Rescaling restricts the parameter space to a single positive parameter without
changing any dynamical features
\[ \dot{\phi} = - \sin\phi - 2 k \sin 2 \phi \]
The parameter \( k \) in the model (\( b/a \) in the original formulation) corresponds to the cycle to cycle period of the finger movements, that is, the inverse of the movement rate or oscillation
frequency in the experiment. An increase in frequency thus corresponds to a decrease in \( k \ .\)
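As a quick check of this picture, here is a minimal numerical sketch (not part of the original article) of the rescaled equation. It tests the stability of the two fixed points from the sign of the derivative of the right-hand side and recovers the critical value \( k_c = 0.25 \) discussed below:
```python
# Fixed points and stability of dphi/dt = -sin(phi) - 2*k*sin(2*phi).
import numpy as np

def fprime(phi, k):
    # derivative of the right-hand side: -cos(phi) - 4*k*cos(2*phi)
    return -np.cos(phi) - 4.0 * k * np.cos(2.0 * phi)

for k in [0.5, 0.3, 0.26, 0.24, 0.1]:
    s0, spi = fprime(0.0, k), fprime(np.pi, k)
    print(f"k={k:4.2f}: phi=0 {'stable' if s0 < 0 else 'unstable'},"
          f" phi=pi {'stable' if spi < 0 else 'unstable'}")
# phi=0 is stable for every k since f'(0) = -1 - 4k < 0, while phi=pi
# has f'(pi) = 1 - 4k and so loses stability exactly at k = 0.25.
```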
In order to determine whether this equation represents a valid theoretical model of the experimental findings one has to find the fixed points and check their stability. This means solving the
equation for \( \dot{\phi}=0 \ .\) Haken, Kelso and Bunz (1985) showed that this equation captured all the observed experimental facts in the Kelso experiments.
The figure presents different ways to visualize the HKB model. Part (a) shows how the relative phase evolves in time from different initial conditions. Notice for high values of \( k \) corresponding
to slow movements, initial conditions near in-phase and anti-phase converge to their respective attractors. Parts (b), (c) and (d) show the HKB potential (b), the phase portrait (c) and the
bifurcation diagram, respectively. For \( k>0.25 \) relative phase values of \( 0 \) and \( \pm \pi \) are both stable, a condition called bistability. An increase in movement rate, starting in
anti-phase, leads to a switch to in-phase at a critical frequency. Indeed, starting with a large \( k \) and decreasing \( k \) leads to a destabilization of the fixed point at \( \pi \) which
becomes unstable at the value \( k_c=0.25 \) and the system switches spontaneously into the in-phase pattern at \( \phi=0 \ .\) For parameter values smaller than \( 0.25 \) the fixed points at \(
\pm \pi \) are unstable and the only remaining one is stable at \( \phi=0 \) corresponding to in-phase. Starting in the in-phase pattern for large \( k \) (slow movement) and decreasing \( k \ ,\)
does not lead to a transition because \( \phi=0 \) is stable for all values of \( k \ .\) Likewise, beginning in the \( \phi=0 \) pattern with a small \( k \) (fast movement) and slowing the movement
down does not cause behavior to change. Even beyond the critical value where anti-phase movement is possible, the system stays where it is. This is called hysteresis: there is no reason for the
original pattern to change because the system is already in a stable coordinative state.
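The hysteresis loop itself can be sketched numerically (the parameters below are illustrative). A small additive noise term stands in for the fluctuations of the stochastic version of the model (Schöner, Haken & Kelso, 1986); without it, a deterministic sweep would sit forever on the destabilized anti-phase fixed point:
```python
# Sweep k down (movement rate up) and back up again, tracking phi.
import numpy as np

rng = np.random.default_rng(0)
dt, noise = 0.01, 0.05

def step(phi, k):
    drift = -np.sin(phi) - 2.0 * k * np.sin(2.0 * phi)
    return phi + dt * drift + noise * np.sqrt(dt) * rng.standard_normal()

def wrap(phi):
    return float(np.angle(np.exp(1j * phi)))     # map into (-pi, pi]

phi = np.pi                                      # begin in anti-phase
for k in np.linspace(0.8, 0.05, 8000):           # k decreasing through k_c
    phi = step(phi, k)
print(f"after downward sweep: phi ~ {wrap(phi):+.2f} (switched to in-phase)")
for k in np.linspace(0.05, 0.8, 8000):           # k increasing again
    phi = step(phi, k)
print(f"after upward sweep:   phi ~ {wrap(phi):+.2f} (no switch back)")
```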
Theoretical Predictions and Experimental Confirmation
In HKB loss of stability, also called dynamic instability, causes switching to occur. One is free to inquire about the location of 'switches' inside the system but that is not the key to
understanding what is going on. Stability can be measured in several ways:
1. Critical slowing down. If a small perturbation is applied to the system that drives it away from its stationary state, the time for the system to return to its stationary state (its local
relaxation time) is a measure of its stability. The smaller the local relaxation time, the more stable the attractor. The less stable the pattern the longer it should take to return to the
established pattern. HKB--or more correctly its stochastic equivalent (Schöner, Haken & Kelso, 1986)--predicts critical slowing down. That is, if the antiphase pattern is actually losing
stability as the control parameter of frequency is increased, the local relaxation time should increase as the system approaches the critical point. Excellent agreement with theory was obtained
in careful experiments (Scholz & Kelso, 1989; Scholz, Kelso & Schöner, 1987).
2. Critical fluctuations. A signature feature of non-equilibrium phase transitions in nature is the presence of critical fluctuations. If switching patterns of behavior is due to loss of stability,
direct measures of fluctuations of the order parameter (relative phase) should be detectable as the critical point approaches. Experiments by Kelso et al (1986) showed a striking enhancement of
fluctuations (measured as the standard deviation of the continuous relative phase) for the antiphase pattern as the control parameter approached a critical value. No such increase was observed
over the same parameter range for the in-phase pattern. (Both signatures are illustrated numerically in the sketch below.)
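In the following sketch (illustrative noise level and parameters only), the linearized relaxation time at the anti-phase state is \( 1/(4k-1) \), which diverges as \( k \) approaches \( k_c = 0.25 \) from above, and the stationary fluctuations of \( \phi \) around \( \pi \) grow accordingly:
```python
# Critical slowing down and critical fluctuations near k_c = 0.25.
import numpy as np

rng = np.random.default_rng(1)
dt, noise = 0.005, 0.02

def drift(phi, k):
    return -np.sin(phi) - 2.0 * k * np.sin(2.0 * phi)

for k in [0.60, 0.40, 0.30, 0.27]:       # approach k_c from above
    tau = 1.0 / (4.0 * k - 1.0)          # linearized relaxation time at pi
    phi, samples = np.pi, []
    for i in range(40000):               # noisy dynamics near anti-phase
        phi += dt * drift(phi, k) + noise * np.sqrt(dt) * rng.standard_normal()
        if i > 4000:                     # discard a short transient
            samples.append(phi)
    print(f"k={k:.2f}: relaxation time ~ {tau:5.2f},"
          f" SD(phi) ~ {np.std(samples):.3f}")
# Both quantities grow as k -> 0.25 from above, mirroring the experiments.
```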
Deriving patterns of behavior and pattern change from subsytem interactions at a lower level
The HKB equation characterizes coordinated patterns of behavior and their pattern dynamics in terms of the order parameter or relative phase dynamics. However, it is important to recognize that the
complete HKB model also derives these dynamics from a lower level. To accomplish this step one has to consider the subsystems and how these subsystems interact to produce coordinated states. This
means it is necessary to provide a mathematical description of the fingers (or more generally the limbs) and a coupling between them. Again, it was very important to use experimental facts to guide
theoretical modeling. Kinematic features of amplitude, frequency and velocity relationships were measured by Kelso, et al (1981) and more rigorously by Kay, et al (1987). In particular, the amplitude
of individual finger oscillation was observed to decrease monotonically with frequency. Moreover, additional perturbation and phase resetting experiments by Kay, Kelso & Saltzman (1991) showed that
individual hand movements returned to their cyclical trajectories with finite relaxation times. The HKB model thus maps the stable and reproducible oscillatory performance of each finger onto a limit
cycle attractor in the \( x \) and \(\dot{x}\) phase plane. Again symmetry considerations play an important role. Finger movements consist of repetitive executions of flexion and extension in which
one half cycle of flexion is approximately the inverse of one half cycle of extension. In other words, whether the finger is flexing or extending does not essentially change the dynamics of the
movement. For the equation of motion this means that if \( x \) and all its derivatives are substituted by \( -x \) the equation must remain invariant. The equation of motion up to third order for
the oscillation of a single limb takes the form \[ \ddot{x} + \epsilon \dot{x} + \omega^2 x + \gamma x^2 \dot{x} + \delta \dot{x}^3 = 0 \]
This specific equation has been termed the "hybrid oscillator" because it consists of two types of oscillators known in the literature, i.e. the van-der-Pol oscillator for \( \delta=0 \) and the
Rayleigh oscillator for \( \gamma=0 \ .\) The reason to combine them is to get an accurate representation of the experimentally observed properties of single finger movements. Of course the main goal
is to derive the HKB equation from the level of the individual components and their interaction. A crucial issue is the coupling function. In general, the coupling of two hybrid oscillators leads to
a system of differential equations of the form
\[ \ddot{x}_1 + \epsilon \dot{x}_1 + \omega_1^2 x_1 + \gamma x_1^2 \dot{x}_1 + \delta \dot{x}_1^3 = f_1 \{ x_1, \dot{x}_1, x_2, \dot{x}_2 \} \]
\[ \ddot{x}_2 + \epsilon \dot{x}_2 + \omega_2^2 x_2 + \gamma x_2^2 \dot{x}_2 + \delta \dot{x}_2^3 = f_2 \{ x_1, \dot{x}_1, x_2, \dot{x}_2 \} \]
Notice that the same parameters \( \epsilon \ ,\) \(\gamma \ ,\) and \(\delta \) appear for both oscillators differing only in their eigenfrequencies \(\omega_i \) (see Fuchs, et al., 1996). Haken,
Kelso & Bunz (1985) considered a number of coupling structures for the observed phasing patterns and phase transitions. Linear couplings of position and its first order derivatives (velocity) are
inadequate. Quadratic coupling terms violate symmetry requirements. Also, since the amplitudes of the oscillators are almost identical, a coupling based on the difference in the variables will act
only as a small perturbation and not destroy the limit cycle structure of the oscillators. Hence, the simplest coupling that leads to the equation of motion for the relative phase is the sum of the
linear term in the velocities and the cubic term in velocities times displacement squared
\[ f_1 = \alpha (\dot{x}_1 - \dot{x}_2) + \beta (\dot{x}_1 - \dot{x}_2)(x_1-x_2)^2 = (\dot{x}_1 - \dot{x}_2) \{ \alpha +\beta (x_1-x_2)^2 \} \]
\[ f_2 = \alpha (\dot{x}_2 - \dot{x}_1) + \beta (\dot{x}_2 - \dot{x}_1)(x_2-x_1)^2 = (\dot{x}_2 - \dot{x}_1) \{ \alpha +\beta (x_2-x_1)^2 \} \]
Using the above equations, Haken, Kelso & Bunz (1985) derived the final form of the dynamics of the order parameter relative phase as
\[ \dot{\phi} = (\alpha + 2 \beta r^2) \sin\phi - \beta r^2 \sin2\phi \]
thereby establishing in a rigorous fashion the relation between the two levels of description.
The relation between parameters \( a \) and \( b \) at the collective, coordinative level, and the oscillator and coupling parameters \( r \) (amplitude), \( \alpha \) and \( \beta \)
\[ a = -(\alpha + 2 \beta r^2) \quad \mbox{and} \quad b = \frac{1}{2} \beta r^2 \]
yields the critical frequency where the transition occurs:
\[ k_c = \frac{b}{a} = \frac{1}{4} \quad \mbox{or with} \quad \frac{\frac{1}{2} \beta r^2}{-(\alpha + 2 \beta r^2)} = \frac{1}{4} \] which can be readily solved for the amplitude leading to
\[ r_c^2 = -\frac{\alpha}{4 \beta} \quad . \]
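The whole two-level story can be checked by direct simulation. The sketch below uses illustrative parameter values (not those of the 1985 paper); note that a limit cycle requires net energy input at small amplitudes, i.e. \( \epsilon < 0 \) in the sign convention used above. With \( \alpha = -0.3 \) and \( \beta = 0.3 \) the predicted critical amplitude is \( r_c = \sqrt{-\alpha/(4\beta)} = 0.5 \): at slow movement the amplitude exceeds \( r_c \) and anti-phase survives, while at fast movement the amplitude drops below \( r_c \) and the pair switches to in-phase.
```python
# Two coupled hybrid oscillators with the HKB coupling f1, f2 above.
import numpy as np

eps, gamma, delta = -0.8, 1.0, 1.0       # hybrid-oscillator parameters
alpha, beta = -0.3, 0.3                  # coupling parameters; r_c = 0.5

def rhs(s, omega):
    x1, v1, x2, v2 = s
    f1 = (v1 - v2) * (alpha + beta * (x1 - x2) ** 2)
    f2 = (v2 - v1) * (alpha + beta * (x2 - x1) ** 2)
    a1 = -eps * v1 - omega**2 * x1 - gamma * x1**2 * v1 - delta * v1**3 + f1
    a2 = -eps * v2 - omega**2 * x2 - gamma * x2**2 * v2 - delta * v2**3 + f2
    return np.array([v1, a1, v2, a2])

def relative_phase(omega, T=400.0, dt=0.002):
    s = np.array([0.8, 0.0, -0.7, 0.0])  # near, not exactly at, anti-phase
    for _ in range(int(T / dt)):         # classical RK4 integration
        k1 = rhs(s, omega)
        k2 = rhs(s + 0.5 * dt * k1, omega)
        k3 = rhs(s + 0.5 * dt * k2, omega)
        k4 = rhs(s + dt * k3, omega)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    x1, v1, x2, v2 = s
    p1 = np.arctan2(-v1 / omega, x1)     # instantaneous oscillator phases
    p2 = np.arctan2(-v2 / omega, x2)
    return abs(np.angle(np.exp(1j * (p1 - p2))))

for omega in [1.0, 3.0]:                 # slow vs fast movement
    print(f"omega={omega}: |relative phase| -> {relative_phase(omega):.2f} rad")
# omega=1.0 ends near pi (anti-phase persists); omega=3.0 ends near 0
# (spontaneous switch to in-phase), consistent with r_c^2 = -alpha/(4*beta).
```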
It is possible to provide only a hint of the various conceptual, methodological and practical developments that have arisen from the HKB model and the empirical observations that motivated it. These
developments fall into several, by no means inclusive categories: a vast amount of research has been conducted based on the experimental paradigm itself and issues connected to the paradigm,
including the roles of task context, biomechanical factors, perception, attention, cognitive demands, learning and memory (e.g. Carson, et al., 2000; Mechsner, et al., 2001; Pellecchia, Shockley &
Turvey, 2005; Temprado, et al., 2002). Much of this research is a blend of both traditional and new methods and techniques. Issues of social coordination, the recruitment and coordination of multiple
task components and the integration of movement with different sensory modalities have captured much recent interest. The latest noninvasive neuroimaging methods such as fMRI, MEG and high density
EEG arrays are increasingly being used along with behavioral recording and analysis to identify the neural circuitry and mechanisms of pattern stability and switching (e.g., Aramaki, et al., 2006;
Jantzen & Kelso, 2007; Kelso, et al., 1998; Meyer-Lindenberg, et al., 2002; Swinnen, 2002). From a modeling point of view, major steps have included symmetry breaking of the HKB system (Kelso, et
al., 1990) and its numerous conceptual consequences and paradigmatic applications, e.g. its role in the recruitment and coordination of multiple components; how it has revealed the balance of
integrative and segregative processes in the brain (metastability). Discrete as well as rhythmic behaviors of individual and coupled systems have been studied (e.g. Schaal, et al., 2004) and
accommodated in theoretical models (e.g., Jirsa & Kelso, 2005). HKB has also been extended to handle events at a neural level (Jirsa, Fuchs & Kelso, 1998). Although detailed anatomical architectures
will always depend on specific contexts, the power of the approach is that it poses constraints on allowable types of architectures (see, e.g., Daffertshofer et al., 2005; Banerjee & Jirsa, 2006).
When it comes to the brain, the need for at least a two-layer structure between functional units localized in the brain and the input and output components that are coordinated has been recognized by
several research groups (e.g. Beek, Peper & Daffertshofer, 2002; Jirsa, Fuchs & Kelso, 1998). The incorporation of time delays into explicitly neural models, e.g. of interhemispheric coordination
during bimanual and sensorimotor tasks is under active investigation, as are the behavioral, neural and modelling mechanisms underlying the different ways in which switching and elementary
decision-making occur.
Aramaki, Y., Honda, M., Okada, T., & Sadato, N. (2006) Neural correlates of the spontaneous phase transition during bimanual coordination. Cerebral Cortex, 16, 1338-1348.
Banerjee, A., Jirsa, V.K. (2006) How do neural connectivity and time delays influence bimanual coordination? Biological Cybernetics, in press.
Beek, P.J., Peper, C.E., & Daffertshofer, A. (2002) Modelling rhythmic interlimb coordination: beyond the Haken-Kelso-Bunz model. Brain & Cognition, 1, 149-165.
Bernstein, N. A. (1967) The coordination and regulation of movements. London, Pergammon.
Carson, RG, Riek, S, Smethurst, CJ, Lison-Parraga, JF & Byblow, WD. (2000) Neuromuscular-skeletal constraints upon the dynamics of unimanual and bimanual coordination. Experimental Brain Research,
131 (2), 196-214.
Daffertshofer, A., Peper, C. E., & Beek, P. J. (2005) Stabilization of bimanual coordination due to active interhemispheric inhibition: a dynamical account. Biological Cybernetics, 92, 101-109.
Fuchs, A., & Jirsa, V.K. (Eds.) (2008) Coordination: Neural, Behavioral and Social Dynamics. Heidelberg: Springer.
Fuchs, A., Jirsa, V.K., & Kelso, J.A.S. (2000). Theory of the relation between human brain activity (MEG) and hand movements. NeuroImage, 11, 359-369.
Fuchs, A., Jirsa, V. K., Haken, H., & Kelso, J. A. S. (1996). Extending the HKB-Model of coordinated movement to oscillators with different eigenfrequencies. Biological Cybernetics 74, 21-30.
Haken, H., Kelso, J.A.S., & Bunz, H. (1985). A theoretical model of phase transitions in human hand movements. Biological Cybernetics, 51, 347-356.
Jantzen, K.J., & Kelso, J.A.S. (2007) Neural coordination dynamics of human sensorimotor behavior: A Review. In V.K Jirsa & R. MacIntosh (Eds.) Handbook of Brain Connectivity. Heidelberg: Springer.
Jantzen, K.J., Steinberg, F.L., & Kelso, J.A.S. (2002). Practice-dependent modulation of neural activity during human sensorimotor coordination: A Functional Magnetic Resonance Imaging study.
Neuroscience Letters, 332, 205-209.
Jirsa, V.K. & Kelso, J.A.S. (2005) The excitator as a minimal model for the coordination dynamics of discrete and rhythmic movements. Journal of Motor Behavior, 37, 35-51.
Jirsa, V. K., Fuchs, A., & Kelso, J.A.S. (1998) Connecting cortical and behavioral dynamics: Bimanual coordination. Neural Computation, 10, 2019-2045.
Kay, B.A., Kelso, J.A.S., Saltzman, E.L., & Schöner, G. (1987). The space time behavior of single and bimanual rhythmical movements: Data and a limit cycle model. Journal of Experimental Psychology:
Human Perception and Performance, 13, 178-192.
Kay, B.A., Saltzman, E.L. & Kelso, J.A.S. (1991). Steady state and perturbed rhythmical movements: Dynamical modeling using a variety of analytic tools. Journal of Experimental Psychology: Human
Perception and Performance, 17, 183-197.
Kelso, J.A.S. (1981). On the oscillatory basis of movement. Bulletin of the Psychonomic Society, 18, 63.
Kelso, J.A.S. (1984). Phase transitions and critical behavior in human bimanual coordination. American Journal of Physiology: Regulatory, Integrative and Comparative, 15, R1000-R1004.
Kelso, J.A.S. (1995). Dynamic Patterns: The Self Organization of Brain and Behavior. Cambridge: MIT Press. [Paperback edition, 1997].
Kelso, J.A.S., & Scholz, J.P. (1985). Cooperative phenomena in biological motion. In H. Haken (Ed.), Complex Systems: Operational approaches in neurobiology, physics and computers. Springer Verlag:
Kelso, J.A.S. & Schöner, G. (1987) Toward a physical (synergetic) theory of biological coordination. Springer Proceedings in Physics, 19, 224-237.
Kelso, J.A.S., Scholz, J.P. & Schöner, G. (1986). Nonequilibrium phase transitions in coordinated biological motion: Critical fluctuations. Physics Letters A, 118, 279-284.
Kelso, J.A.S., DelColle, J. & Schöner, G. (1990). Action-Perception as a pattern formation process. In M. Jeannerod (Ed.), Attention and Performance XIII, Hillsdale, NJ: Erlbaum, pp. 139-169.
Kelso, J.A.S., Holt, K.G., Rubin, P. & Kugler, P.N. (1981). Patterns of human interlimb coordination emerge from the properties of nonlinear oscillatory processes: Theory and data. Journal of Motor
Behavior, 13, 226-261.
Kelso, J.A.S., Bressler, S.L., Buchanan, S., DeGuzman, G.C., Ding, M., Fuchs, A. & Holroyd, T. (1992). A phase transition in human brain and behavior. Physics Letters A, 169, 134-144.
Kelso JAS, Fuchs A, Lancaster R, Holroyd T, Cheyne D, Weinberg H (1998) Dynamic cortical activity in the human brain reveals motor equivalence. Nature 392: 814-818
Mechsner, F., Kerzel, D., Knoblich, G., & Prinz, W. (2001). Perceptual basis of bimanual coordination. Nature, 414, 69-73.
Meyer-Lindenberg A, Ziemann U, Hajak G, Cohen L, Berman KF (2002) Transitions between dynamical states of differing stability in the human brain. Proceedings of the National Academy of Sciences (USA)
99: 10948-10953
Pellecchia, G., Shockley, K., & Turvey, M. T. (2005). Concurrent cognitive task modulates coordination dynamics. Cognitive Science, 29, 531-557
Schaal S., Sternad D., Osu R. & Kawato M. (2004). Rhythmic arm movements are not discrete. Nature Neuroscience 7, 1136-1143.
Schmidt, R.C., Carello, C. & Turvey, M.T. (1990). Phase transitions and critical fluctuations in the visual coordination of rhythmic movement between people. Journal of Experimental Psychology: Human
Perception and Performance, 16, 227-247.
Scholz, J.P. & Kelso, J.A.S. (1989) A quantitative approach to understanding the formation and change of coordinated movement patterns. Journal of Motor Behavior, 21, 122-144.
Scholz, J.P., Kelso, J.A.S. & Schöner, G. (1987). Nonequilibrium phase transitions in coordinated biological motion: Critical slowing down and switching time. Physics Letters A, 8, 390-394.
Schöner, G. & Kelso, J.A.S. (1988) Dynamic pattern generation in behavioral and neural systems. Science, 239, 1513-1520. Reprinted in K. L. Kelner & D. E. Koshland, Jr. (Eds.), Molecules to Models:
Advances in Neuroscience, pp 311-325.
Schöner, G., Haken, H., & Kelso, J.A.S. (1986). A stochastic theory of phase transitions in human hand movement. Biological Cybernetics, 53, 247-257.
Sherrington, C. S. (1906) The integrative action of the nervous system. London, Constable.
Sperry, R. W. (1961) Cerebral organization and behavior. Science, 133, 1749-1757.
Swinnen SP (2002) Intermanual coordination: From behavioural principles to neural-network interactions. Nature Reviews Neuroscience 3: 350-361.
Temprado JJ, Monno A, Zanone PG, Kelso JAS (2002) Attentional demands reflect learning-induced alterations of bimanual coordination dynamics. European Journal of Neuroscience 16: 1390-1394
von Holst, E. (1939/1973). The behavioral physiology of man and animals. Coral Gables, FL: University of Miami Press.
Wallenstein, G.V., Kelso, J.A.S. & Bressler, S.L. (1995). Phase transitions in spatiotemporal patterns of brain activity and behavior. Physica D, 84, 626-634.
Zanone, P.G. & Kelso, J.A.S. (1992). The evolution of behavioral attractors with learning: Nonequilibrium phase transitions. Journal of Experimental Psychology: Human Perception and Performance, 18(2), 403-421.
Internal references
• John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.
• John Guckenheimer (2007) Bifurcation. Scholarpedia, 2(6):1517.
• Valentino Braitenberg (2007) Brain. Scholarpedia, 2(11):2918.
• Giovanni Gallavotti (2008) Fluctuations. Scholarpedia, 3(6):5893.
• Mark Aronoff (2007) Language. Scholarpedia, 2(5):3175.
• Howard Eichenbaum (2008) Memory. Scholarpedia, 3(3):1747.
• Rodolfo Llinas (2008) Neuron. Scholarpedia, 3(8):1490.
• Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
• Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
• David H. Terman and Eugene M. Izhikevich (2008) State space. Scholarpedia, 3(3):1924.
• Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.
• Hermann Haken (2007) Synergetics. Scholarpedia, 2(1):1400.
• J. A. Scott Kelso (2008) Synergies. Scholarpedia, 3(10):1611.
See also
Coordination dynamics, Self-organization, Synchronization, Synergies
Parallel Dense Linear Algebra Libraries
Saturday, March 15
4:00 PM-6:00 PM
Greenway C-E
This minisymposium will focus on the art of high-performance parallel dense linear algebra libraries. Such libraries are used, for example, in applications involving boundary element formulations,
such as electromagnetics and acoustics, as well as eigenproblems arising in computational chemistry. In addition, dense subproblems also occur in sparse linear systems. Many researchers have resorted
to the development of custom implementations for individual routines. In this minisymposium, the speakers describe recent developments in the area of parallel dense linear algebra libraries. They will
discuss the development of general purpose parallel dense linear algebra libraries: ScaLAPACK (developed by the LAPACK project at the University of Tennessee, Oak Ridge National Laboratory, and the
University of California, Berkeley) and PLAPACK (developed at the University of Texas at Austin). They will also discuss projects that target more specialized problem domains: the PRISM library and
the PEigS library, both of which primarily target dense linear eigenproblems.
Organizer: Robert A. van de Geijn
University of Texas, Austin
4:00 PLAPACK: Parallel Linear Algebra Package
Robert van de Geijn, Organizer; Philip Alpatov, Greg Baker, Carter Edwards, John Gunnels, Greg Morrow, and James Overfelt, University of Texas, Austin
4:30 ScaLAPACK: A Linear Algebra Library for Message-Passing Computers
Jack Dongarra, University of Tennessee, Knoxville and Oak Ridge National Laboratory; L. S. Blackford, University of Tennessee, Knoxville; J. Choi, Soongsil University, Korea; A. Cleary,
University of Tennessee, Knoxville; E. D'Azevedo, Oak Ridge National Laboratory; J. Demmel and I. Dhillon, University of California, Berkeley; S. Hammarling, The Numerical Algorithms Group, Ltd.;
G. Henry, Intel; A. Petitet, University of Tennessee, Knoxville; K. Stanley, University of California, Berkeley; D. Walker, University of Wales, Cardiff; and R. C. Whaley, University of
Tennessee, Knoxville
5:00 Parallel SBR: A PLAPACK Based PRISM Kernel
Yuan-Jye J. Wu and Christian H. Bischof, Argonne National Laboratory
5:30 The Performance of a New Algorithm in Eigensystem Problems For Computational Chemistry
George Fann, Pacific Northwest National Laboratory, and Inderjit Dhillon and Beresford Parlett, University of California, Berkeley
How does this simplification work? Can someone show me the intermediate steps?
find a quadratic function f(x)=ax^2+bx+c whose graph has a maximum value at 25 and x-intercepts -3 and 2
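A worked sketch of one standard approach (a suggested solution, not posted by the asker): since the x-intercepts are -3 and 2, write f(x) = a(x+3)(x-2). The vertex sits midway between the intercepts, at x = (-3+2)/2 = -1/2. Requiring the maximum value to be 25 gives a(-1/2+3)(-1/2-2) = a(5/2)(-5/2) = -25a/4 = 25, so a = -4. Hence f(x) = -4(x+3)(x-2) = -4x^2 - 4x + 24. Check: f(-1/2) = -1 + 2 + 24 = 25, and a = -4 < 0 confirms the parabola opens downward, so 25 really is the maximum value.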
Prove this family of random variables is independent
Let $\Omega=[0,1]$ with its Borel subsets and $P$ the Lebesgue measure on $[0,1]$.
Let $A_{n}=\bigcup_k]\frac{2(k-1)}{2^n},\frac{2k-1}{2^n}]$, for $k$ from 1 to $2^{n-1}$.
I have shown that the family $(A_{n})$ is mutually independent.
Let $X_{n}=I_{A_{n}}=\sum_k I_{]\frac{2(k-1)}{2^n},\frac{2k-1}{2^n}]}$, $n \in N$, mapping $[0,1]$ to $\{0,1\}$ ($k$ from 1 to $2^{n-1}$).
I have to show that the random variables are mutually independent too, but I can't get there...
Thank you for helping me
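A hint plus a sanity check (not a full write-up): each $X_n$ is an indicator, so every event $\{X_n = e\}$ is either $A_n$ or $A_n^c$, and $\sigma(X_n)=\{\emptyset, A_n, A_n^c, \Omega\}$. Mutual independence of a family of events is preserved when any of them are replaced by their complements (write $P(A^c \cap B) = P(B) - P(A \cap B)$ and factor), so the product formula $P(X_{n_1}=e_1,\dots,X_{n_m}=e_m)=\prod_i P(X_{n_i}=e_i)$ follows directly from the independence of the $(A_n)$ already proved. A quick Monte Carlo illustration of the claim (a numerical check, not a proof):
```python
# X_n = 1 exactly when floor(w * 2^n) is even (up to a measure-zero
# boundary), i.e. the n-th binary digit of w is 0, so P(X_n = 1) = 1/2
# and the X_n should behave like independent fair coin flips.
import itertools
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(200000)               # samples from Lebesgue measure on [0,1]

def X(n, w):
    return (np.floor(w * 2**n).astype(int) % 2 == 0).astype(int)

x1, x2, x3 = X(1, w), X(2, w), X(3, w)
for e in itertools.product([0, 1], repeat=3):
    joint = np.mean((x1 == e[0]) & (x2 == e[1]) & (x3 == e[2]))
    print(e, f"empirical {joint:.4f} vs product {0.5**3:.4f}")
```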
FOM: Mathematics as governing intuition by formal methods
Vladimir Sazonov sazonov at logic.botik.ru
Thu Jan 8 16:10:11 EST 1998
In the previous posting I formulated the following "definition":
Mathematics (in a wide sense) deals with governing our intuitions
(abstractions, idealizations, imaginations, illusions, fantasies,
abilities to foresee and anticipate, etc.) on the base of appropriate
formal/deductive/axiomatic methods/systems/theories/rules.
(This seems to be in some agreement with the definitions of Moshe'
Machover and Solomon Feferman also cited there.)
I also wrote that this definition suggests that we consciously *manipulate*
our intuitions or even illusions instead of canonizing them. Let me
illustrate this by showing how intuition of natural numbers may be
changed by a critique of the Induction Axiom (and some related things).
Solomon Feferman wrote:
> I. The positive integers are conceived within the structure of objects
> obtained from an initial object by unlimited iteration of its adjunction,
> e.g. 1, 11, 111, 1111, .... , under the operation of successor.
I have questions. What does "unlimited" mean, explicitly? I.e., what
are these dots "...."? I think it may be understood in various ways.
Also, why "unlimited"? Is this a necessary component of any intuition
on the positive integers? Further, is the Induction Axiom, which is usually
attributed to natural numbers, inevitable? Is it true that we do not
lose anything essential by choosing once and for all the traditional
approach to the natural numbers via Peano Arithmetic? How to formulate
this "traditional approach" and corresponding intuition in a clear way
which is not reduced simply to writing down, say, the axioms of PA (or
PA + ?)?
Actually, I myself have and present below some answers to some of these
questions. Not all of this is probably very original. But usually some
essential points are not taken into account at all at the beginnings of
arithmetic. So let me fix them. I am also interested in the opinions on
these considerations of other participants of the FOM list.
First, everybody knows that there are resource bounds. And these bounds
may prevent us from choosing or using non-critically the term "unlimited"
above. Thus, we could in principle admit existence of the biggest
natural number which I call Box (and denote as \Box in LaTeX or as [])
to emphasize the idea of something "bounded" like a room bounded by the
walls. We may consider corresponding version PA_[] (or
"Box-arithmetic") of Peano Arithmetic (PA) *relativized* (together with
Induction Axiom) to this indefinitely finite row 0,1,2,...,[]-1,[] of
natural numbers. Moreover, it is reasonable to consider that this
"uniformly, or globally bounded" arithmetic PA_[] involves symbols for
*all* recursive functions relativized to []. (This is a quite
meaningful notion whose precise definition may be known and clear to
those acquainted with descriptive complexity theory or with
corresponding approaches in finite model theory.) Then this
Box-arithmetic proves to be just an *arithmetic of polynomial time
computability*. (There exists a corresponding theorem about this.)
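To make the picture concrete, here is a toy illustration (my own, and only a
caricature of the formalism): arithmetic on the finite row 0, 1, ..., [] where
every operation that would overflow is truncated to []. Relativizing all
recursive functions to such a row is the idea behind PA_[] sketched above.
```python
# Saturating ("Box") arithmetic: nothing ever leaves the row 0..BOX.
BOX = 1000   # an indefinitely finite bound, fixed here for illustration

def box(n):
    return min(max(n, 0), BOX)

def add(a, b):
    return box(a + b)

def mul(a, b):
    return box(a * b)

print(add(700, 700))   # 1000: the sum saturates at Box
print(mul(2, BOX))     # 1000: Box is the largest number in this world
# Iterating mul never produces anything beyond Box, so "very big
# numbers" like 2^1000 do not exist as distinct elements of this row.
```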
Note that the interrelation of the first-order version of this arithmetic
with the second-order one (both in terms of definability and
proof-theory) is essentially connected with the problem "P=NP?". (Here
a well-known definability result of Ronald Fagin (1974) is implicitly involved.)
Let me formulate some problems: Is induction axiom over second-order
formulas provable in the (first-order) PA_[]? Is second-order version
of PA_[] conservative over PA_[]? What are the corresponding approaches to
predicativity notions, analogous to those for ordinary arithmetic? What about
constructive (second-order) versions of PA_[]? A bit *informally*, are
all (finite!) functions over the row 0,1,2,...,[]-1,[] of natural
numbers recursive in the above sense? What about *fixing* [] = 1000, or
= 2^1000, or = any "nonstandard" number?
By the way, it seems to me that this approach is a sufficiently serious
reason to consider the problem "P=NP?" as a foundational one, in contrast
to the opposed opinion of Professor Solomon Feferman in his recent
posting on "P=NP". Note that even with a *fixed* value of [] an
analogue of this problem remains interesting and non-trivial. Moreover,
it loses its traditional asymptotic character. (Maybe some reasonable
reformulation of this problem will be fruitful for finding its solution?)
Actually, we could say that it is a problem on the *nature* of finite
objects. What are they? Are they *all* ("all"?) recursive, or
constructive, constructible? (Of course, this is reminiscent of Kolmogorov complexity.)
This is one possibility of formalizing natural numbers,
alternative to PA.
Another possibility:
Let us postulate that there exists no last natural number (as in the
ordinary PA), but restrict Induction Axiom (and probably some other
formal rules) in some way. Why? Because it is unclear that this axiom
is necessarily "true". Consider, e.g. its equivalent reformulation as
the Minimum Principle
\exists x A(x) => \exists x (A(x) & \forall y < x ~A(y)).
This principle is true for *short* initial segments of natural numbers
like 0,1,2,...,10 and for *simple* properties A(x). But, if we are
dealing with numbers more *abstractly*, we have also another,
"negative" experience. E.g., what about existence of "the least natural
number which cannot be denoted by a written phrase containing no more
than one thousand symbols"? I also doubt that the Minimum Principle
should inevitably "hold" for any *pure* arithmetical formula with
quantifiers over "all" natural numbers. ("All" is something extremely vague here.)
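As a side remark: for *decidable* properties over the usual natural numbers
the Minimum Principle is entirely unproblematic and can even be certified
formally. A minimal Lean 4 sketch (assuming Mathlib's Nat.find API) is given
below; the doubts above concern formulas with unbounded quantifiers and
informal "properties" like denotability, which are not decidable predicates
in this sense.
```lean
-- Minimum Principle for decidable predicates on ℕ (a sketch, assuming
-- Mathlib): if A holds somewhere, it has a least witness.
theorem minimum_principle (A : ℕ → Prop) [DecidablePred A]
    (h : ∃ x, A x) : ∃ x, A x ∧ ∀ y < x, ¬ A y :=
  ⟨Nat.find h, Nat.find_spec h, fun _ hy => Nat.find_min h hy⟩
```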
Yes, it seems reasonable to extrapolate the minimum principle to "all"
natural numbers, whatever they are, and to all arithmetical formulas
A(x), especially because this extrapolation in the form of PA does not
seem to lead to a contradiction. But why *must* we do that? Tradition?
Intuition? Some kind of Mathematical Religion? What about other
possibilities and other intuitions (and even "religions")? Who can say
without sufficient trials that these possibilities do not exist or are
fruitless? It seems that such (and maybe some other) alternatives
should be at least formulated explicitly when the nature of natural
numbers is discussed.
Note that intuitionists and constructivists have explicitly
articulated doubts on the meaningfulness of quantification over "all"
natural numbers. But they drew a somewhat different conclusion, rejecting
the law of excluded middle instead of the Induction Axiom.
There exists a somewhat different way of approaching the Induction Axiom
(Rule). If A(n) => A(n+1) is provable then we can prove by using the
cut rule (or transitivity of implication) that A(0) => A(n*) for any
fixed numeral n*. Then we *extrapolate* this metatheorem to the
conclusion A(0) => A(n) where n is a (universally quantified) variable
ranging over "all" natural numbers. This extrapolation is based on
identifying abstract, semantical objects (numbers denoted by the
variable n) with concrete, syntactical ones (numerals and therefore -
via Goedel numbering - with arbitrary terms, formulas, proofs, etc.).
Such identification of entities of very different nature (semantical
and syntactical, abstract and concrete) does not seem in general
sufficiently convincing. It makes the (infinite!) row of natural
numbers much "longer" than it is "intended to be". (By the way, it
follows that the sentence Consis_PA does not have precisely the
intended meaning.)
With the induction axiom (and also with the cut rule and with other
tools which also could be criticized; cf. my previous postings to FOM
starting from 5 NOV 1997) we can prove existence of very big numbers
(like 2^1000) whose meaning and existence may be considered as doubtful
or questionable.
I conclude that in principle there are alternatives to PA and
corresponding traditional arithmetical intuition. One alternative
discussed above is PA_[]. Another example is Feasibly Constructive
Arithmetic of Stephen Cook, and the closely related Bounded Arithmetic, a
version of which was considered by Rohit Parikh in 1971. (My
own interest in BA (and in Bounded Set Theory) started exactly with
PA_[] whose second-order version is essentially equivalent to BA.) Much
more unusual, even strange (and, I would like to hope, promising too)
is also Feasible Arithmetic (feasible - in an essentially stronger
sense of this word than that of Cook) discussed in my previous postings
in November 1997. The first mathematically rigorous approach to
(almost consistent) Feasible Arithmetic was started also by R.Parikh in
1971. Let me also mention corresponding ideas of Alternative Set Theory
of Petr Vopenka.
All of this is related to paying primary attention to the resources needed
in computations on the level of mathematical foundations rather than on
the level of the traditional complexity theory.
Besides numerous literature on Bounded Arithmetic, cf. also my papers
related to BA, PA_[], Bounded Set Theory (BST) and Feasible Numbers:
[1] "Polynomial computability and recursivity in finite domains".
Elektronische Informationsverarbeitung und Kybernetik, Vol.16,
N7, 1980, p.319--323.
[2] "A logical approach to the problem P=?NP", LNCS 88, 1980.
(See corrections in [3,4])
[3] "On existence of complete predicate calculus in
metamathematics without exponentiation", MFCS'81, LNCS, N118,
Springer, New York, 1981, P.483--490.
[4] Also a later paper essentially subsuming [2]:
"On equivalence between polynomial constructivity of Markov's
principle and the equality P=NP", 1988 (In Russian). Cf. English
translation in Siberian Advances in Math., Allerton Press Inc.,
1991, Vol. 1, N4, 92-121.
[5] "On bounded set theory" in Logic and Scientific Methods, 1997,
Kluwer Acad. Publ., 85-103. (Accessible also by my WWW-page)
[6] "On feasible numbers", in: Leivant D., ed., Logic and
Computational Complexity, LNCS Vol. 960, Springer, 1995,
pp.30--51. (Accessible also by my WWW-page)
Vladimir Sazonov
Program Systems Institute, | Tel. +7-08535-98945 (Inst.),
Russian Acad. of Sci. | Fax. +7-08535-20566
Pereslavl-Zalessky, | e-mail: sazonov at logic.botik.ru
152140, RUSSIA | http://www.botik.ru/~logic/SAZONOV/
Variables and Scales of Measurement
EDUR 7130
Educational Research On-Line
Assigned Readings
See supplemental readings below.
A variable is simply anything that varies, anything that assumes different values or categories. For example, sex varies because there is more than one category or classification: female and male.
Race is a variable because there is more than one category: Asian, Black, Hispanic, etc. Age is a variable because there is more than one category: 1 year, 2 years, 3 years, etc.
Conversely, a constant is anything that does not vary or take different values or categories. For example, everyone participating in this course is a student, so that is not a variable since it does
not vary since it has only one category. As another example, consider a group of white females. With this group, neither race nor sex varies, so race and sex are constants for these people.
Exercise: Identifying Variables
In the following statements, identify the variables.
1. What is the relation between intelligence and achievement?
2. Do students learn more from a supportive teacher or a non-supportive teacher?
3. Are students aged 55 and older more likely to drop out of college than students of ages between 30 and 40?
4. What is the relationship between grade point average and dropping out of high school?
5. How do three counseling techniques (rational-emotive, gestalt, and no-counseling) differ in their effectiveness in decreasing test anxiety in high school juniors?
6. What is the relationship among leadership skills, intelligence, and achievement motivation of high school seniors?
1. Intelligence and achievement.
2. Level of support (with two categories: support/non-support) and student learning.
3. Age (with two categories: 55 and over, 30 to 40) and dropping out of college (also with two categories: in or out).
4. Grade point average and dropping out of high school (two categories: in or out).
5. Counseling techniques (with three categories: rational-emotive, gestalt, and no-counseling) and test anxiety.
6. Leadership skills, intelligence, and achievement motivation.
Scales of Measurement
Measurement is the process of assigning labels to categories of variables. Categories of variables carry different properties, which are identified below. If one can only identify categories, then
that variable is referred to as a nominal variable.
If the categories of a variable can be ranked, such as from highest to lowest or from most to least or from best to worst, then that variable is said to be ordinal.
If the categories can be ranked, and if they also represent equal intervals, then the variable is said to be interval. Equal interval means that the difference between two successive categories is
the same. For example, temperature measured with Fahrenheit has equal intervals; that is, the difference between temperatures of 30 and 31 degrees is 1 degree, and the difference between 100 and 101
degrees is 1 degree. No matter where on the scale that 1 degree is located, that 1 degree represents the same amount of heat. Similarly, when using a ruler to measure the length of something, the
difference between 2 and 3 inches is 1 inch, and the difference between 10 and 11 inches is 1 inch -- no matter where on the ruler that 1 inch lies, it still represents the same amount of distance,
so this indicates equal intervals. As another example, time in the abstract sense never ends or begins. Since time is measured precisely with equal intervals, such as one second, one minute, etc., it
can be viewed as an interval measure in the abstract.
The last scale is ratio. This is just like interval, except that a variable on the ratio scale has a true zero point--a beginning or ending point. While time in the abstract (no ending or beginning)
sense is interval, in practice time is a ratio scale of measurement since time is usually measured in lengths or spans which means time does have a starting or ending point. For example, when timing
someone on a task, the length of time required to complete the task is a ratio measure since there was a starting (and ending) point in the measurement. One way to identify ratio variables is to
determine whether one can appropriately make ratios from two measurements. For example, if I measure the time it takes me to read a passage, and I measure the length of time it takes you to read the
same passage, we can construct a ratio of these two measures. If it took me 30 seconds and took you 60 seconds, it took you (60/30 = 2) twice as long to read it. One cannot form such mathematical
comparisons with nominal, ordinal, or interval data. Note that the same can be done with counting variables. If I have 15 items in my pockets, and you have 5, I have three times as many items as you
(15/5 = 3).
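A small illustration (not from the original text) of why ratios are meaningful on a ratio scale but not on an interval scale:
```python
# Task times have a true zero, so their ratio is meaningful.
t_me, t_you = 30.0, 60.0
print(t_you / t_me)            # 2.0 -- "twice as long" makes sense

# Fahrenheit has an arbitrary zero, so "ratios" depend on the unit chosen.
f1, f2 = 40.0, 80.0
c1, c2 = (f1 - 32) * 5 / 9, (f2 - 32) * 5 / 9   # same temperatures in Celsius
print(f2 / f1, c2 / c1)        # 2.0 vs 6.0 -- 80F is not "twice as hot" as 40F
```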
For most purposes, especially in education, the distinction between interval and ratio is not important. In fact, it is difficult to find examples of interval or ratio variables in education.
Below is a table that specifies the criteria that distinguishes the four scales of measurement, and the following table provides examples for each scale.
│ Scales │ Criteria │
│ Nominal │ categories │
│ Ordinal │ categories, rank │
│ Interval │ categories, rank, equal, interval │
│ Ratio │ categories, rank, equal, interval, true zero point │
│ Scales │ Examples │
│ Nominal │ types of flowers, sex, dropout/stay-in, vote/abstain │
│ Ordinal │ socioeconomic status (S.E.S.), Likert scales responses, class rank │
│ Interval │ time in abstract (see discussion above), temperature │
│ Ratio │ age, weight, height, time to complete a task │
Classification of Variables
In research it is often important to distinguish variables by the supposed or theoretical function they play. For example, if one states that a child's intelligence level influences the child's
academic achievement in school, then the variable intelligence is thought to have some impact, some effect, on academic performance in school. In this example, intelligence is called the independent
variable and academic achievement is the dependent variable. The logic here holds that achievement depends, to some degree, upon intelligence, hence it is called a dependent variable. Since
intelligence does not depend upon achievement, intelligence in this example is referred to as the independent variable.
Here are two methods for identifying independent variables (IV) and dependent variables (DV). First, think in terms of chronological sequence--in terms of the time order. Which variable comes first,
one's sex or one's achievement in school? Most would answer that one is born with a given sex (female or male), so it naturally precedes achievement in school. The variable that comes first in the
time order is the IV and the variable that comes afterwards is the DV.
A second method for identifying the IVs and DVs is to ask yourself about the notion of causality. That is, if one does this with variable A, then what happens to variable B? For example, if one
could increase intelligence, then achievement in school may result. But, if one increased achievement school, would this have any logical impact on one's intelligence? In this example, intelligence
is the IV because it can affect achievement in school, and achievement is the DV because it is unlikely to affect intelligence.
Alternative labels for IV are cause and predictor, and other labels for the DV are effect and criterion.
Often it can be difficult to properly identify whether a variable is nominal, ordinal, interval, or ratio. A simpler approach is to identify variables as either qualitative (or categorical) or
quantitative (or continuous). A qualitative/categorical variable is one that has categories that are not ranked--i.e., a nominal variable. All other variables have categories that can be ranked,
therefore the categories differ by degree. These variables are quantitative or continuous, and are represented by the ordinal, interval, and ratio scales.
For simplicity, variables that have only two categories, even if they can be ranked, will be referred to as qualitative variables since this will be important later when determining which statistical
tests may be used for analysis.
Practice Exercise
Here is a practice exercise to help you distinguish between IV and DVs. Using the same practice exercise for IVs and DVs, also determine whether each IV and DV is qualitative or quantitative. To make
these determinations, sometimes there will not be enough information about the measurement process--how the variables were actually measured. In these cases, it is important to consider the variable
carefully to determine if the variable logically has ranked categories or not. If it appears to have ranked categories, then classify the variable as quantitative. See illustrated examples in the
practice exercise for further clarification of this issue.
Supplemental Reading
Dr. Jacqueline McLaughlin, Assistant Professor of Biology, and Jane S. Noel, Instructional Development Specialist, of Penn State University provide useful information on variables.
Wikipedia entry on scales of measurement (note IQ is identified as interval here; this entry is questionable).
Ronald Mayer of San Fransico State University also discusses measurement.
Wadsworth provides a nice multi-page discussion of scales of measurement.
Copyright 2000, Bryan W. Griffin
Class template static_rational
boost::units::static_rational — Compile time rational number.
// In header: <boost/units/static_rational.hpp>
template<integer_type N, integer_type D = 1>
class static_rational {
// types
typedef unspecified tag;
typedef static_rational< Numerator, Denominator > type; // static_rational<N,D> reduced by GCD
// construct/copy/destruct
static_rational();
// public static functions
static integer_type numerator() ;
static integer_type denominator() ;
static const integer_type Numerator;
static const integer_type Denominator;
};
This is an implementation of a compile time rational number, where static_rational<N,D> represents a rational number with numerator N and denominator D. Because of the potential for ambiguity arising
from multiple equivalent values of static_rational (e.g. static_rational<6,2>==static_rational<3>), static rationals should always be accessed through static_rational<N,D>::type. Template
specialization prevents instantiation of zero denominators (i.e. static_rational<N,0>). The following compile-time arithmetic operators are provided for static_rational variables only (no operators
are defined between long and static_rational):
● mpl::negate
● mpl::plus
● mpl::minus
● mpl::times
● mpl::divides
Neither static_power nor static_root is defined for static_rational. This is because template types may not be floating point values, while powers and roots of rational numbers can produce floating
point values.
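A minimal usage sketch (untested against any particular Boost release; exact headers may vary) showing compile-time addition through mpl::plus and access through ::type, as described above:
```cpp
#include <iostream>
#include <boost/mpl/plus.hpp>
#include <boost/units/static_rational.hpp>

int main() {
    using boost::units::static_rational;

    // 1/2 + 1/3, evaluated entirely at compile time; the result is
    // accessed through ::type, which is reduced by GCD as noted above.
    typedef boost::mpl::plus<
        static_rational<1, 2>,
        static_rational<1, 3>
    >::type sum;  // expected to be static_rational<5, 6>

    std::cout << sum::numerator() << "/" << sum::denominator() << "\n";
    return 0;
}
```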
relativity problems
1. Two ships make a trip from the planet Melmack to the planet Ork. The red ship makes the trip at a speed of 0.35c relative to the planets. The blue ship makes the trip at a speed of 0.48c relative
to the planets, and while traveling, its science officer measures the distance between the planets to be 17 light-years.
a) What is the proper length of the trip between these planets?
b) Find the length (in light-years) the red ship's science officer measures the trip to be, as
they travel.
2. Two events occur. Juan measures a time interval between the events of Δt=3.5 yrs and a distance between the events Δx=2.2 light-years.
a) Find the space-time interval Δs².
b) Is this interval space-like, time-like, or light-like?
c) Will any observer observe the events as simultaneous? If so, how does that observer
move relative to Juan? If not, prove it.
3. A ruby laser at rest would emit light of wavelength 694 nm. You observe a moving ruby laser to emit light of wavelength 702 nm.
a) Is the laser moving toward you or away from you?
b) How fast?
4. A Borg spaceship is on its way to Earth. A Federation spaceship is sent from Earth at a speed of 0.9c to intercept it. The speed of the Borg ship relative to the Federation ship is 0.95c.
a) Draw a diagram, and label a positive direction.
b) What is the speed and direction of the Borg ship relative to the Earth?
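For reference, the standard special-relativity relations these problems draw on (not part of the original post; sign and direction conventions vary by textbook) are: length contraction, $L = L_0\sqrt{1 - v^2/c^2}$; the invariant interval, $\Delta s^2 = (c\,\Delta t)^2 - (\Delta x)^2$ in one common sign convention; the relativistic Doppler shift for a receding source, $\lambda_{obs} = \lambda_{src}\sqrt{(1+\beta)/(1-\beta)}$ with $\beta = v/c$; and relativistic velocity addition, $u' = (u - v)/(1 - uv/c^2)$.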
Does anyone have any idea about these four problems? One of these is going to be on my exam tomorrow. I'll be really, and in fact a million times, thankful to anyone who helps me with these problems soon. Very soon, please, because I'm not good at physics. I really need to do well on this exam. Please, somebody help!!!
|
{"url":"http://www.physicsforums.com/showthread.php?t=163060","timestamp":"2014-04-20T23:36:36Z","content_type":null,"content_length":"26052","record_id":"<urn:uuid:5c00c50c-cbdf-4bb7-8f5e-34222267ba18>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Surface Area and Volume of Cones ( Read ) | Geometry
What if you wanted to use your mathematical prowess to figure out exactly how much waffle cone your friend Jeff is eating? This happens to be your friend Jeff’s favorite part of his ice cream
dessert. A typical waffle cone is 6 inches tall and has a diameter of 2 inches. What is the surface area of the waffle cone? (You may assume that the cone is straight across at the top). Jeff decides
he wants a “king size” cone, which is 8 inches tall and has a diameter of 4 inches. What is the surface area of this cone? After completing this Concept, you'll be able to answer questions like these.
Watch This
CK-12 Foundation: Chapter11ConesA
Learn more about the surface area of cones by watching the video at this link.
Watch this video to learn about the volume of cones.
A cone is a solid with a circular base and sides that taper up towards a common vertex.
It is said that a cone is generated from rotating a right triangle around one leg in a circle. Notice that a cone has a slant height, just like a pyramid.
Surface Area
We know that the base is a circle, but we need to find the formula for the curved side that tapers up from the base. Unfolding a cone, we have the net:
From this, we can see that the lateral face is a sector of a circle with radius $l$ (the slant height) whose arc length equals the circumference of the base, $2 \pi r$. Comparing the full circle of radius $l$ with the sector:

$$\frac{\text{Area of circle}}{\text{Area of sector}} = \frac{\text{Circumference}}{\text{Arc length}} \quad\Longrightarrow\quad \frac{\pi l^2}{\text{Area of sector}} = \frac{2\pi l}{2 \pi r} = \frac{l}{r}$$

Cross multiply: $l \cdot (\text{Area of sector}) = \pi r l^2$, so $\text{Area of sector} = \pi r l$.

Surface Area of a Right Cone: The surface area of a right cone with slant height $l$ and base radius $r$ is $SA = \pi r^2 + \pi r l$.
If the bases of a cone and a cylinder are the same, then the volume of a cone will be one-third the volume of the cylinder.
Volume of a Cone: If $r$ is the radius of the base and $h$ is the height of a cone, then its volume is $V=\frac{1}{3} \pi r^2 h$.
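As a quick worked check of both formulas (numbers chosen here for illustration): a cone with radius $r=3$ and height $h=4$ has slant height $l=\sqrt{3^2+4^2}=5$, so $SA = \pi (3)^2 + \pi (3)(5) = 24\pi$ and $V = \frac{1}{3}\pi (3^2)(4) = 12\pi$.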
Example A
What is the surface area of the cone?
In order to find the surface area, we need to find the slant height. Recall from pyramids that the slant height forms a right triangle with the height and the radius. Use the Pythagorean Theorem:

$$l^2 = 9^2 + 21^2 = 81 + 441 = 522, \qquad l = \sqrt{522} \approx 22.85$$
The surface area would be $SA = \pi (9)^2 + \pi (9)(22.85) \approx 900.54 \ units^2$.
Example B
The surface area of a cone is $36 \pi$ and the slant height is 5. What is the radius?
Plug in what you know into the formula for the surface area of a cone and solve for $r$
$$36 \pi = \pi r^2 + \pi r(5)$$
Because every term has $\pi$, we can cancel it out: $36 = r^2 + 5r$. Setting one side equal to zero turns this into a factoring problem:
$$r^2 + 5r - 36 = 0 \quad\Longrightarrow\quad (r-4)(r+9) = 0$$
The possible answers for $r$ are $4$ and $-9$. The radius must be positive, so our answer is $4$.
Example C
Find the volume of the cone.
To find the volume, we need the height, so we have to use the Pythagorean Theorem. With a radius of 5 and a slant height of 15, $h = \sqrt{15^2 - 5^2} = \sqrt{200} = 10\sqrt{2}$.
Now, we can find the volume:
$$V = \frac{1}{3}\pi (5^2)\left( 10 \sqrt{2} \right) \approx 370.24 \ units^3$$
Watch this video for help with the Examples above.
CK-12 Foundation: Chapter11ConesB
Concept Problem Revisited
The standard cone has a surface area of $\pi + 6 \pi = 7 \pi \approx 21.99 \ in^2$, while the “king size” cone has a surface area of $4 \pi + 16 \pi = 20 \pi \approx 62.83 \ in^2$, almost three times as much.
A cone is a solid with a circular base and sides that taper up towards a vertex. A cone has a slant height.
Surface area is a two-dimensional measurement that is the total area of all surfaces that bound a solid. Volume is a three-dimensional measurement that is a measure of how much three-dimensional
space a solid occupies.
Guided Practice
1. Find the volume of the cone.
2. Find the volume of the cone.
3. The volume of a cone is $484 \pi \ cm^3$ and its height is 12 cm. What is the radius?
1. To find the volume, we need the height, so we have to use the Pythagorean Theorem. With a radius of 5 and a slant height of 15, $h = \sqrt{15^2 - 5^2} = 10\sqrt{2}$.
Now, we can find the volume:
$$V = \frac{1}{3}\pi (5^2)\left( 10 \sqrt{2} \right) \approx 370.24 \ units^3$$
2. Use the radius in the formula.
$V=\frac{1}{3} \pi (3^2)(6)=18 \pi \approx 56.55$
3. Plug in what you know to the volume formula.
$$484 \pi = \frac{1}{3} \pi r^2 (12) \quad\Longrightarrow\quad 121 = r^2 \quad\Longrightarrow\quad r = 11$$
Find the surface area and volume of the right cones. Leave your answers in terms of $\pi$.
Challenge: Find the surface area of the traffic cone with the given information. The cone is cut off at the top (4 inch cone) and the base is a square with sides of length 24 inches. Round answers to the nearest hundredth.
4. Find the area of the entire square. Then, subtract the area of the base of the cone.
5. Find the lateral area of the cone portion (include the 4 inch cut off top of the cone).
6. Now, subtract the cut-off top of the cone, to only have the lateral area of the cone portion of the traffic cone.
7. Combine your answers from #4 and #6 to find the entire surface area of the traffic cone.
For questions 8-11, consider the sector of a circle with radius 25 cm and arc length $14 \pi$ cm.
8. What is the central angle of this sector?
9. If this sector is rolled into a cone, what are the radius and area of the base of the cone?
10. What is the height of this cone?
11. What is the total surface area of the cone?
Find the volume of the following cones. Leave your answers in terms of $\pi$.
15. If the volume of a cone is $30\pi \ cm^3$
16. If the volume of a cone is $105\pi \ cm^3$
17. A teepee is to be built such that there is a minimal cylindrical shaped central living space contained within the cone shape of diameter 6 ft and height 6 ft. If the radius of the entire teepee
is 5 ft, find the total height of the teepee.
|
{"url":"http://www.ck12.org/geometry/Surface-Area-and-Volume-of-Cones/lesson/Cones---Intermediate/","timestamp":"2014-04-17T20:00:57Z","content_type":null,"content_length":"122685","record_id":"<urn:uuid:20109baa-db22-4217-a191-bc79350c4656>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Physics
Physlet® Quantum Physics 2E
[1] D. Styer, "Quantum Mechanics: See it Now," AAPT Kissimmee, FL January, 2000 and http://www.oberlin.edu/physics/dstyer/TeachQM/see.html.
[2] D. Styer, "Common Misconceptions Regarding Quantum Mechanics", Am. J. Phys. 64, 31-34 (1996).
[3] R. W. Robinett, Quantum Mechanics: Classical Results, Modern Systems, and Visualized Examples, Oxford, New York, 1997.
[4] E. Cataloglu and R. Robinett, "Testing the Development of Student Conceptual and Visualization Understanding in Quantum Mechanics through the Undergraduate Career," Am. J. Phys. 70, 238-251 (2002).
[5] D. Zollman, et al., "Research on Teaching and Learning of Quantum Mechanics", Papers Presented at the National Association for Research in Science Teaching (1999).
[6] C. Singh, "Student Understanding of Quantum Mechanics", Am. J. Phys. 69, 885-895 (2001).
[7] R. Muller and H. Wiesner, "Teaching Quantum Mechanics on the Introductory Level", Am. J. Phys. 70, 200-209 (2002).
[8] L. Bao and E. Redish, "Understanding Probabilistic Interpretations of Physical Systems: A Prerequisite to Learning Quantum Physics", Am. J. Phys. 70, 210-217 (2002).
[9] D. Zollman, N. S. Rebello, and K. Hogg, "Quantum Mechanics for Everyone: Hands-on Activities Integrated with Technology", Am. J. Phys. 70, 252-259 (2002).
[10] S. Brandt and H. Dahmen, The Picture Book of Quantum Mechanics, Springer-Verlag, New York, 2001.
[11] J. Hiller, I. Johnston, D. Styer, Quantum Mechanics Simulations, Consortium for Undergraduate Physics Software, John Wiley and Sons, New York, 1995.
[12] B. Thaller, Visual Quantum Mechanics, Springer-Verlag, New York, 2000.
[13] M. Joffre, Quantum Mechanics CD-ROM in J. Basdevant and J. Dalibard, Quantum Mechanics, Springer-Verlag, Berlin, 2002.
[14] A. Goldberg, H. M. Schey, and J. L. Schwartz, "Computer-generated Motion Pictures of One-dimensional Quantum-mechanical Transmission and Reflection Phenomena," Am. J. Phys. 35, 177-186 (1967).
[15] M. Andrews, "Wave Packets Bouncing Off of Walls," Am. J. Phys. 66 252-254 (1998).
[16] M. A. Doncheski and R. W. Robinett, "Anatomy of a Quantum 'Bounce,' " Eur. J. Phys. 20, 29-37 (1999).
[17] M. Belloni, M. A. Doncheski, and R. W. Robinett, "Exact Results for 'Bouncing' Gaussian Wave Packets," Phys. Scr. 71, 136-140 (2005).
[18] J. J. Sakurai, Advanced Quantum Mechanics, Addison-Wesley (1967).
[19] R. E. Scherr, P. S. Shaffer, and S. Vokos, "The Challenge of Changing Deeply Held Student Beliefs about the Relativity of Simultaneity," Am. J. Phys. 70, 1238 (2002).
[20] R. E. Scherr, P. S. Shaffer, and S. Vokos, "Student Understanding of Time in Special Relativity: Simultaneity and Reference Frames," Phys. Educ. Res., Am. J. Phys. Suppl. 69, S24 (2001).
[21] K. Krane, Modern Physics, 2nd edition, John Wiley and Sons (1996).
[22] P. A. Tipler and R. A. Llewellyn, Modern Physics, W. H. Freeman and Company (1999).
[23] J. R. Taylor, C. H. Zafiratos, and M. A. Dubson, Modern Physics for Scientists and Engineers, Prentice Hall (2004).
[24] S. Thornton and A. Rex, Modern Physics for Scientists and Engineers, 2nd ed, Brooks/Cole (2002).
[25] W. E. Lamb, Jr. and M. O. Scully, "The Photoelectric Effect without Photons," in Polarisation, Matierer et Rayonnement, Presses University de France (1969).
[26] G. Greenstein and A. G. Zajonc, The Quantum Challenge, Jones and Bartlett (1997).
[27] J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck, "Observing the Quantum Behavior of Light in an Undergraduate Laboratory," Am. J. Phys. 72 1210-1219 (2004).
[28] D. F. Styer, et al., "Nine Formulations of Quantum Mechanics," Am. J. Phys. 70, 288-297 (2002).
[29] M. Belloni, M. A. Doncheski, and R. W. Robinett, "Zero-curvature solutions of the one-dimensional Schrödinger equation," to appear in Phys. Scr. 2005.
[30] L. P. Gilbert, M. Belloni, M. A. Doncheski, and R. W. Robinett, "More on the Asymmetric Infinite Square Well: Energy Eigenstates with Zero Curvature," to appear in Eur. J. Phys. 2005.
[31] L. P. Gilbert, M. Belloni, M. A. Doncheski, and R. W. Robinett, "Piecewise Zero-curvature Solutions of the One-Dimensional Schrödinger Equation," in preparation.
[32] R. W. Robinett, "Quantum Wave Packet Revivals," talk given at the 128th AAPT National Meeting, Miami Beach, FL, Jan. 24-28 (2004).
[33] R. Shankar, Principles of Quantum Mechanics, Plenum Press (1994).
[34] M. Morrison, Understanding Quantum Physics: A Users Manual, Prentice Hall, Upper Saddle River, NJ, 1990.
[35] M. Bowen and J. Coster, "Infinite Square Well: A Common Mistake," Am. J. Phys. 49, 80-81 (1980)
[36] R. C. Sapp, "Ground State of the Particle in a Box," Am. J. Phys. 50, 1152-1153 (1982)
[37] L. Yinji and H. Xianhuai, "A Particle Ground State in the Infinite Square Well," Am. J. Phys. 54, 738 (1986).
[38] C. Dean, "Simple Schrödinger Wave Functions Which Simulate Classical Radiating Systems," Am. J. Phys. 27, 161-163 (1959).
[39] R. W. Robinett, "Quantum Wave Packet Revivals," Phys. Rep. 392, 1-119 (2004).
[40] R. Bluhm, V. A. Kostelecky, and J. Porter, "The Evolution and Revival Structure of Localized Quantum Wave Packets," Am. J. Phys. 64, 944-953 (1996).
[41] I. Sh. Averbukh and N. F. Perelman, "Fractional Revivals: Universality in the Long-term Evolution of Quantum Wave Packets Beyond the Correspondence Principle Dynamics," Phys. Lett. A139, 449-453 (1989).
[42] D. L. Aronstein and C. R. Stroud, Jr., "Fractional Wave-function Revivals in the Infinite Square Well," Phys. Rev. A 55, 4526-4537 (1997).
[43] R. Liboff, Introductory Quantum Mechanics, Addison Wesley (2003).
[44] F. Bloch, Z. Physik, 52 (1928).
[45] M. A. Doncheski and R. W. Robinett, "Comparing classical and quantum probability distributions for an asymmetric well", Eur. J. Phys. 21, 217-228 (2000).
[46] A. Bonvalet, J. Nagle, V. Berger, A. Migus, J.-L. Martin, and M. Joffre, "Femtosecond Infrared Emission Resulting from Coherent Charge Oscillations in Quantum Wells," Phys. Rev. Lett. 76,
4392-4395 (1996).
[47] C. Kittel and H. Kroemer, Thermal Physics, 2nd ed, W. H. Freeman, 1980.
[48] R. Eisberg and R. Resnick, Quantum Physics, Wiley, 1974.
|
{"url":"http://www.compadre.org/PQP/preface/bibliography.cfm","timestamp":"2014-04-19T02:20:31Z","content_type":null,"content_length":"16794","record_id":"<urn:uuid:33f2bcf0-d7a6-41ca-b1aa-d9d4cfdc0db6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ifpack Package Browser (Single Doxygen Collection)
Ifpack Ifpack: a function class to define Ifpack preconditioners
Ifpack_Container Ifpack_Container: a pure virtual class for creating and solving local linear problems
Ifpack_CrsIct Ifpack_CrsIct: A class for constructing and using an incomplete Cholesky factorization of a given Epetra_CrsMatrix
Ifpack_CrsIlut Ifpack_CrsIlut: ILUT preconditioner of a given Epetra_RowMatrix
Ifpack_CrsRick Ifpack_CrsRick: A class for constructing and using an incomplete lower/upper (ILU) factorization of a given Epetra_CrsMatrix
Ifpack_CrsRiluk Ifpack_CrsRiluk: A class for constructing and using an incomplete lower/upper (ILU) factorization of a given Epetra_RowMatrix
Ifpack_DenseContainer Ifpack_DenseContainer: a class to define containers for dense matrices
Ifpack_DiagonalFilter Ifpack_DiagonalFilter: Filter to modify the diagonal entries of a given Epetra_RowMatrix
Ifpack_DiagPreconditioner Ifpack_DiagPreconditioner: a class for diagonal preconditioning
Ifpack_DropFilter Ifpack_DropFilter: Filter based on matrix entries
Ifpack_Graph Ifpack_Graph: a pure virtual class that defines graphs for IFPACK
Ifpack_Graph_Epetra_CrsGraph Ifpack_Graph_Epetra_CrsGraph: a class to define Ifpack_Graph as a light-weight conversion of Epetra_CrsGraph's
Ifpack_Graph_Epetra_RowMatrix Ifpack_Graph_Epetra_RowMatrix: a class to define Ifpack_Graph as a light-weight conversion of Epetra_RowMatrix's
Ifpack_IlukGraph Ifpack_IlukGraph: A class for constructing level filled graphs for use with ILU(k) class preconditioners
Ifpack_LocalFilter Ifpack_LocalFilter: a class for light-weight extraction of the submatrix corresponding to local rows and columns
Ifpack_OverlapFactorObject Ifpack_OverlapFactorObject: Supports functionality common to Ifpack overlap factorization classes
Ifpack_OverlapGraph Ifpack_OverlapGraph: Constructs a graph for use with Ifpack preconditioners
Ifpack_OverlapSolveObject Ifpack_OverlapSolveObject: Provides Overlapped Forward/back solve services for Ifpack
Ifpack_Preconditioner Ifpack_Preconditioner: basic class for preconditioning in Ifpack
Ifpack_ReorderFilter Ifpack_ReorderFilter: a class for light-weight reorder of local rows and columns of an Epetra_RowMatrix
Ifpack_Reordering Ifpack_Reordering: basic class for reordering for a Ifpack_Graph object
Ifpack_SingletonFilter Ifpack_SingletonFilter: Filter based on matrix entries
Ifpack_SparsityFilter Ifpack_SparsityFilter: a class to drop based on sparsity
Thyra::IfpackPreconditionerFactory Concrete preconditioner factory subclass based on Ifpack
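A minimal sketch of the typical factory-based workflow (illustrative only; it assumes an existing Epetra_RowMatrix A, and the parameter name shown is an assumption to be checked against the Ifpack user guide):

#include "Ifpack.h"
#include "Teuchos_ParameterList.hpp"

// Create an ILU preconditioner with overlap level 0 via the Ifpack factory.
Ifpack Factory;
Ifpack_Preconditioner* Prec = Factory.Create("ILU", &A, 0);

// Configure, then perform symbolic and numeric setup.
Teuchos::ParameterList List;
List.set("fact: level-of-fill", 1); // assumed option name
Prec->SetParameters(List);
Prec->Initialize(); // symbolic setup
Prec->Compute();    // numeric factorization
// Prec can now be handed to an iterative solver (e.g., AztecOO) as a preconditioner.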
|
{"url":"http://trilinos.sandia.gov/packages/docs/r7.0/packages/ifpack/browser/doc/html/annotated.html","timestamp":"2014-04-20T11:46:20Z","content_type":null,"content_length":"13088","record_id":"<urn:uuid:b99163d1-ab39-448a-bc90-a98b17a0f09d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: September 2000 [00089]
Re:what mathematical formula can generate a mobius strip?
• To: mathgroup at smc.vnet.net
• Subject: [mg25059] Re:[mg25044] what mathematical formula can generate a mobius strip?
• From: "Ingolf Dahl" <f9aid at fy.chalmers.se>
• Date: Thu, 7 Sep 2000 22:28:02 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
VIKTORA6 at aol.com asked the following question:
"what mathematical formula can generate a mobius strip?"
If you only want to plot it, use
<< Graphics`Shapes`
Show[Graphics3D[MoebiusStrip[2, 1, 80]]]
If you want to play more, and have control over the parameters, use
r1 = 5; r2 = 2.5; theta0 = 6*Pi/2.;
ParametricPlot3D[{(r1 - r2*u*Sin[(theta - theta0)/2])*Cos[theta],
(r1 - r2*u*Sin[(theta - theta0)/2])*Sin[theta],
r2*u*Cos[(theta - theta0)/2]}, {theta, 0, 2*Pi}, {u, -1, 1}]
With the given values of r1, r2 and theta0, you get almost the same curve as
the curve from MoebiusStrip, but you can also experiment with other values.
If you only want the outer edge, use
r1 = 5; r2 = 2.5; theta0 = 6*Pi/2.;
ParametricPlot3D[{(r1 - r2*Sin[(theta - theta0)/2])*Cos[theta],
(r1 - r2*Sin[(theta - theta0)/2])*Sin[theta],
r2*Cos[(theta - theta0)/2]}, {theta, 0, 4*Pi}]
Ingolf Dahl
Chalmers University
f9aid at fy.chalmers.se
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Sep/msg00089.html","timestamp":"2014-04-19T19:58:24Z","content_type":null,"content_length":"35285","record_id":"<urn:uuid:4b490e87-ed39-46af-ab98-8641bd810160>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What assumptions and methodology do metaproofs of logic theorems use and employ?
In logic modules, theorems like soundness and completeness of first-order logic are proved. Later, Gödel's incompleteness theorem is proved. May I ask what is assumed at the metalevel to prove such statements? It seems to me that whatever is assumed at the metalevel should not be more than whatever is being formulated at the symbolic level.
I'm also asking about methodology. At the metalevel, it seems like classical logic is used. So if proving statements about other kinds of logic, like paraconsistent logic, then isn't there a discrepancy between what methodology is being formulated and what methodology is being used to prove the statement?
lo.logic model-theory formal-languages metamathematics
Cf. mathoverflow.net/questions/11699/… – Charles Stewart Jan 14 '10 at 8:20
3 Answers
It depends on what you're trying to prove, and for what purpose you are proving these metatheorems.
So, the notion of "more" you're appealing to in asking about the metalevel is not completely well-defined. One common definition of the strength of a logic is "proof-theoretic strength",
which basically amounts to the strongest induction principle the logic can justify (in set-theoretic terms, this is the same as identifying the largest set the logic's consistency proves
well-ordered). The theory of ordinals classifies logics by this idea. This is natural for two reasons, both arising from Gödel's incompleteness theorem. The incompleteness theorem tells us that we cannot prove the consistency of a stronger theory using a weaker one, so to prove consistency you always need to assume a stronger logic in the metatheory than the logic whose consistency you are proving. More abstractly, this fact gives rise to a natural partial order on formal logical systems.
However, consistency proofs are not the only thing people are interested in!
Another purpose for giving semantics of logic is to help understand what people who use a different language than you do mean, in your own terms. For example, giving a classical semantics
to intuitionistic logic gives traditional mathematicians a way of understanding what an intuitionist means, in their own terms. Likewise, semantics of classical logic in intuitionistic terms explains to intuitionists what classical logics mean. For this purpose, it doesn't matter how much mathematical machinery you bring to bear, as long as it brings insight.
This is the kind of thing that ends up having big mathematical payoffs. It can illuminate all sorts of surprising facts. For example, Brouwer didn't just have strong opinions about
logic, he also made assertions about geometry -- for instance, that the continuum was indivisible -- that are flat-out false, in naive classical terms. A priori, it's not clear what this
has to do with the excluded middle. But it turns out that he wasn't just a crazy Dutchman; the logic of smooth analysis is intuitionistic, and using intuitionistic logic exactly manages
piles of side-conditions that you'd have to manage by hand if you worked explicitly in its model.
Conversely, studying classical logic in intuitionistic terms gives you a way of exploring the computational content of classical logic. Often, non-classical arguments (such as the use of
double-negation elimination) amount to an appeal to the existence of a kind of backtracking search procedure, and sometimes you can show that an apparently-classical proof is constructive
because this search is actually decidable. Martin Escardo's work on "exhaustible sets" is a delightful example of this, like a magician's trick. He shows that exhaustive search over some
kinds of infinite sets is decidable (it's related to the fact that e.g. the Cantor space is compact).
But (it seems to me) you haven't really gotten to the heart of Colin's question, which would ask in response "In which proof system do we "prove" Godel's incompleteness theorem, and
how do we know our results in that system are correct?" For example, perhaps an intuitionist would not accept the proof of the incompleteness theorem - then in what sense have we
really "proved" something about "all logics"? – Zev Chonoles Dec 14 '09 at 17:53
I have wondered this myself many times, as it seems like a glaring omission in what most people say is the statement of the incompleteness theorem. Indeed, for any theorem, in logic or
1 otherwise, the correct statement is "Under the assumption of logic X, the result ____ is true." The problem with some results in logic is that, at least as it seems to me and Colin, we
often appear to be using X = classical logic, and proving statements about other logics, in particular, logics which are inconsistent or stronger than classical logic - how does this
make sense? – Zev Chonoles Dec 14 '09 at 17:58
Zev, I think you share some of these concerns, and I think it would be a good idea if you would post them explicitly in a separate question for people to answer. – Colin Tan Dec 15 '09 at 13:28
Here is some more information for the first question. I think that to prove the metatheorems in mathematics (in particular soundness, completeness, and incompleteness), heuristic logic does not exhaust the metaprinciples being used. Some metaprinciples relating to heuristic set theory or heuristic category theory must be used as well. That is because when we talk about a model and statements true in a model, we need to have some notion of set or something equivalent.
It is difficult to understand these metaprinciples, so we need to cast them into the language of formal logic and formal set theory. I choose Zermelo-Fraenkel set theory here. The formal counterpart of your question can be considered the question of which axioms of set theory are necessary for the proof of the formalized version of the theorems. The choice of Zermelo-Fraenkel set theory is in a sense general enough. You might decide that category theory is the genuine language of mathematics, but to phrase and prove soundness and completeness you need to use something of equal strength. The expression may change with different choices of mathematical language, but the mathematical phenomenon remains invariant.
The following is not a very precise answer (I have not checked the material carefully). I think this might be an approximation to the answer that you want:
Soundness for First Order Logic: We need classical logic and ZF \ {Powerset, Replacement, Infinity}. The checking of soundness is basically mechanical, so we don't need much.
Completeness for First Order Logic: We need all these things and some weak version of choice, maybe König's lemma. The proof of completeness basically cooks up a model of the theory based on the language. We need to add in many constants to make the theory have the Henkin property, and this is an iterated process, so we need choice somewhere. I don't think we need power set.
First incompleteness: We need the axiom of foundation and the axiom of power set in the universe of sets, plus the identification of the theory of the natural numbers with the theory of $\omega$ with the respectively defined $+$, $\times$, $<$. I don't think we need choice here.
Second incompleteness: We still need the axiom of foundation (PA still must be enumerated by $\omega$ in this situation). I don't think we need power set anymore.
To get a precise answer, we can always look closely at the steps of the proof or search the existing material.
Certainly, classical logic is used in metalogic. I can't think, offhand, of any cases where I think its use is necessary. The methodology of reverse mathematics seems to offer a suitable,
constructivist framework for discussing the kind of result that Tran Chieu Minh speaks of: our weak metalogic tells us that, e.g., we need something at least as strong as König's lemma to
prove completeness, and as it happens, the converse implication is also true.
I agree with the questioner that "whatever is assumed at the metalevel should not be more than whatever is being formulated at the symbolic level." The danger is that one's metalogical
assumptions might be leakier than one thinks.
If you accept this, then it follows that some things some people take to be the task of the metalogic are not: in particular, it is not the purpose of the metalogic to justify the system
being studied; indeed, if one can, that tells one that the metalogic may not be well fitted to the task. And, furthermore, the strength of, say, constructivist logics as metalogics provides
no kind of case that mathematics should be constructivist.
One example of a place where constructivity is necessary: normalization-by-evaluation relies on the constructive content of a pair of soundness and completeness theorems. – Neel
Krishnaswami Jan 13 '10 at 13:39
I don't see where you are going. Maybe light is cast if I point out that PRA is enough to show the equivalence of SN for System F and consistency of second-order arithmetic? Here, one can
claim the object theory is constructive, but the metatheory is not strong enough to show that proofs in System F have normal forms. Where is there any metatheoretic reliance on the
constructive content of the object theory? PRA is as agnostic about the constructive content of System F as it is about the consistency of Z2. – Charles Stewart Jan 13 '10 at 14:43
In normalization by evaluation, you (basically) start with a logic with cut. Then, you give the universal/syntactic model of the logic (concretely the Kripke model of contexts ordered by
inclusion), and the composition of (constructive) soundness and completeness is the NBE algorithm! Here's some Agda code illustrating this idea: cs.nott.ac.uk/~dwm/nbe/html/NBE.html –
Neel Krishnaswami Jan 13 '10 at 17:30
|
{"url":"http://mathoverflow.net/questions/8853/what-assumptions-and-methodology-do-metaproofs-of-logic-theorems-use-and-employ","timestamp":"2014-04-17T13:08:47Z","content_type":null,"content_length":"72846","record_id":"<urn:uuid:57b105c4-17a9-4202-a977-80fe0f32823e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Description and simulation of an active imaging technique utilizing two speckle fields: root reconstructors
Quasi-monochromatic light will form laser speckle upon reflection from a rough object. This laser speckle provides information about the shape of the illuminated object. Further information can be
obtained if two colors of coherent light are used, provided that the colors are sufficiently close in wavelength that the interference is also measurable. It is shown that no more than two
intensities of two speckle patterns and their interference are required to produce an unambiguous band-limited image of an object, to within an overall spatial translation of the image, in the
absence of measurement errors and in the case where all roots of both fields and their complex conjugates are distinct. This result is proven with a root-matching technique, which treats the electric
fields as polynomials in the pupil plane, the coefficients of which form the desired complex object. Several root-matching algorithms are developed and tested. These algorithms are generally slow and
sensitive to noise. So motivated, several other techniques are applied to the problem, including phase retrieval, expectation maximization, and probability maximization in a sequel paper [J. Opt.
Soc. Am. A 19, 458 (2002)]. The phase-retrieval and expectation-maximization techniques proved to be most effective for reconstructions of complex objects larger than 10 pixels across.
© 2002 Optical Society of America
OCIS Codes
(030.6140) Coherence and statistical optics : Speckle
(100.3010) Image processing : Image reconstruction techniques
(100.5070) Image processing : Phase retrieval
(120.6150) Instrumentation, measurement, and metrology : Speckle imaging
R. B. Holmes, K. Hughes, P. Fairchild, B. Spivey, and A. Smith, "Description and simulation of an active imaging technique utilizing two speckle fields: root reconstructors," J. Opt. Soc. Am. A 19,
444-457 (2002)
1. V. I. Tatarskii, Wave Propagation in a Turbulent Medium, translated by R. S. Silverman (McGraw-Hill, New York, 1961).
2. K. T. Knox and B. J. Thompson, “Recovery of images from astronomically degraded short-exposure photographs,” Astrophys. J. Lett. 193, L45–L48 (1974).
3. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. 22, 4028–4037 (1983).
4. A. C. S. Readhead, T. S. Nakajima, T. J. Pearson, G. Neugebauer, J. B. Oke, and W. L. W. Sargent, “Diffraction-limited imaging with ground-based optical telescopes,” Astron. J. 95, 1278–1296
5. J. Hardy, J. Lefebvre, and C. Koliopoulis, “Real time atmospheric compensation,” J. Opt. Soc. Am. 67, 360–369 (1977).
6. J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena, J. C. Dainty, ed. (Springer-Verlag, New York, 1975), pp. 9–68.
7. Paul S. Idell, J. R. Fienup, and Ron S. Goodman, “Image synthesis from nonimaged laser-speckle patterns,” Opt. Lett. 12, 858–860 (1987).
8. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge U. Press, Cambridge, UK, 1999), p. 356.
9. R. A. Hutchin, “Sheared coherent interferometric photography: a technique for lensless imaging,” in Digital Image Recovery and Synthesis II, P. S. Idell, ed., Proc. SPIE 2029, 161–168 (1993).
10. S. M. Stahl, R. M. Kremer, P. W. Fairchild, K. Hughes, B. A. Spivey, and R. Stagat, “Sheared-beam coherent image reconstruction,” in Applications of Digital Image Processing XIX, A. G. Tescher,
ed., Proc. SPIE 2847, 150–158 (1996).
11. M. Born and E. Wolf, Principles of Optics, 7th ed., pp. 572–577 (Cambridge U. Press, Cambridge, UK, 1999).
12. J. W. Goodman, Statistical Optics (Wiley, New York, 1985), Chap. 5.
13. Yu. M. Bruck and L. G. Sodin, “On the ambiguity of the image reconstruction problem,” Opt. Commun. 30, 304–308 (1979).
14. H. B. Deighton, M. S. Scivier, and M. A. Fiddy, “Solution of the two-dimensional phase-retrieval problem,” Opt. Lett. 10, 250–251 (1985).
15. R. G. Lane, W. R. Fright, and R. H. T. Bates, “Direct phase retrieval,” IEEE Trans. Acoust. Speech Signal Process. ASSP-35, 520–525 (1987).
16. D. Israelevitz and J. S. Lim, “A new direct algorithm for image reconstruction from Fourier transform magnitude,” IEEE Trans. Acoust. Speech Signal Process. ASSP-35, 511–519 (1987).
17. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987).
18. R. G. Lane and R. H. T. Bates, “Automatic multidimensional deconvolution,” J. Opt. Soc. Am. A 4, 180–188 (1987).
19. J. R. Fienup and C. C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3, 1897–1907 (1986).
20. C. C. Wackerman and A. E. Yagle, “Use of Fourier domain real-plane zeros to overcome a phase retrieval stagnation,” J. Opt. Soc. Am. A 8, 1898–1904 (1991).
21. C. C. Wackerman and A. E. Yagle, “Phase retrieval and estimation with use of real-plane zeros,” J. Opt. Soc. Am. A 11, 2016–2026 (1994).
22. P. J. Bones, C. R. Parker, B. L. Satherley, and R. W. Watson, “Deconvolution and phase retrieval with use of zero sheets,” J. Opt. Soc. Am. A 12, 1842–1857 (1995).
23. T. J. Schulz, “Multiframe blind deconvolution of astronomical images,” J. Opt. Soc. Am. A 10, 1064–1073 (1993).
24. R. H. T. Bates, B. K. Quek, and C. R. Parker, “Some implications of zero sheets for blind deconvolution and phase retrieval,” J. Opt. Soc. Am. A 7, 468–479 (1990).
25. P. Chen, M. A. Fiddy, A. H. Greenaway, and Y. Wang, “Zero estimation for blind deconvolution from noisy sampled data,” in Digital Image Recovery and Synthesis II, P. S. Idell, ed., Proc. SPIE
2029, 14–22 (1993).
26. D. C. Ghiglia, L. A. Romero, and G. A. Mastin, “Systematic approach to two-dimensional blind deconvolution by zero-sheet separation,” J. Opt. Soc. Am. A 10, 1024–1036 (1993).
27. B. R. Hunt, T. L. Overman, and P. Gough, “Image reconstruction from pairs of Fourier transform magnitude,” Opt. Lett. 23, 1123–1125 (1998).
28. B. Ya Zeldovich, Principles of Phase Conjugation (Springer-Verlag, New York, 1985), Chap. 3.
29. M. S. Scivier and M. A. Fiddy, “Phase ambiguities and the zeros of multidimensional band-limited functions,” J. Opt. Soc. Am. A 2, 693–697 (1985).
30. E. P. Wallner, “Optimal wave-front correction using slope measurements,” J. Opt. Soc. Am. 73, 1771–1776 (1983).
31. J. D. Downie and J. W. Goodman, “Optimal wave-front correction with segmented mirrors,” Appl. Opt. 28, 5326–5332 (1989).
32. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992).
33. R. Holmes, K. Hughes, P. Fairchild, B. Spivey, and A. Smith, “Description and simulation of an active imaging technique utilizing two speckle fields: iterative reconstructors,” J. Opt. Soc. Am. A
19, 458–471 (2002).
34. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plan pictures,” Optik 35, 237–246 (1972).
35. V. S. R. Gudimetla and J. F. Holmes, “Probability density function of the intensity for a laser-generated speckle field after propagation through the turbulent atmosphere,” J. Opt. Soc. Am. 72,
1213–1218 (1982), and references therein.
36. M. H. Lee, J. F. Holmes, and J. R. Kerr, “Statistics of speckle propagation through the turbulent atmosphere,” J. Opt. Soc. Am. 66, 1164–1172 (1976).
37. G. Parry, “Speckle patterns in partially coherent light,” in Laser Speckle and Related Phenomena, J. C. Dainty, ed. (Springer-Verlag, New York, 1975), Eq. 3.19.
38. P. S. Idell and A. Webster, “Resolution limits for coherent optical imaging: signal-to-noise analysis in the spatial frequency domain,” J. Opt. Soc. Am. A 9, 43–56 (1992).
39. R. B. Holmes, B. Spivey, and A. Smith, “Recovery of images from two-color, pupil-plane speckle data using object-plane root-matching and pupil-plane error minimiziation,” in Digital Image
Reconstruction and Synthesis IV, P. S. Idell and T. J. Schulz, eds., Proc. SPIE 3815, 70–89 (1999).
|
{"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-19-3-444","timestamp":"2014-04-19T09:12:12Z","content_type":null,"content_length":"178505","record_id":"<urn:uuid:255b8fa7-07e8-4043-962f-9cea8858745f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Regression Methods for Ophthalmic Glucose Sensing Using Metamaterials
Journal of Electrical and Computer Engineering
Volume 2011 (2011), Article ID 953064, 12 pages
Research Article
Regression Methods for Ophthalmic Glucose Sensing Using Metamaterials
^1Institute for System Dynamics, University of Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
^24th Physics Institute, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany
Received 31 May 2011; Accepted 5 August 2011
Academic Editor: David Hamilton
Copyright © 2011 Philipp Rapp et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We present a novel concept for in vivo sensing of glucose using metamaterials in combination with automatic learning systems. In detail, we use the plasmonic analogue of electromagnetically induced
transparency (EIT) as sensor and evaluate the acquired data with support vector machines. The metamaterial can be integrated into a contact lens. This sensor changes its optical properties such as
reflectivity upon the ambient glucose concentration, which allows for in situ measurements in the eye. We demonstrate that estimation errors below 2% at physiological concentrations are possible
using simulations of the optical properties of the metamaterial in combination with an appropriate electrical circuitry and signal processing scheme. In the future, functionalization of our sensor
with hydrogel will allow for a glucose-specific detection which is insensitive to other tear liquid substances providing both excellent selectivity and sensitivity.
1. Introduction
Diabetes is the direct cause of over 1.1 million deaths in 2005, and the diabetes death rate is estimated to double by 2030. The World Health Organization (WHO) indicates in [1] that nowadays more
than 220 million people have to live with diabetes. In order to allow the patients to maintain a healthy life avoiding coronary artery, peripheral arterial and cerebral vascular disease, or heart
failure, early diagnosis and continuous management are crucial. Current practice for diabetes management relies on intensive insulin therapy involving frequent blood glucose measurements. Using
invasive glucose sensors means that patients have to prick their finger for a drop of blood multiple times a day, about 1800 times per year, which also involves higher risk of infection. For these
reasons, in the last decades new techniques have been employed to develop noninvasive devices for blood glucose monitoring.
The technologies under consideration include infrared (IR) spectroscopy [2], fluorescence spectroscopy, Raman spectroscopy, optical polarization rotation measurements, photoacoustic probes, and
surface plasmon resonances. However, none of these devices has been made commercially available or was approved to substitute direct invasive glucose measurement. In order to overcome these
shortcomings, alternative approaches have been developed to measure glucose concentration in an accessible body fluid, including urine, saliva, and tear fluid.
The undeniable advantage of estimating blood glucose levels through tear fluid lies in the facts that tears are more simply and noninvasively accessible than other body fluids, more continuously
obtainable, and less susceptible to dilution than urine. Tear fluid provides a unique opportunity to develop a noninvasive interface between a sensor and the body that could be used to monitor
several physiological and metabolic indicators, most notably glucose. The noninvasive feature would be the main advantage of this sensing scheme.
1.1. Ophthalmic Glucose Sensing
Tear fluid is the aqueous layer on the ocular surface and has many functions as part of the optical system, that is, lubrication and nourishing. Tear fluid consists of over 20 components, including
salt water, proteins, lactate, urea, pyruvate, ascorbate, glucose, as well as some small metallic ions. Its average rate of production lies in the range of 0.52–2.2μL/min; about 0.72–3.2mL of tears
are secreted per day.
The idea of using tear fluid as a medium for glucose monitoring has been discussed since the 1930s involving human and animal models to estimate correlation between tear glucose and blood glucose.
The current technique is to collect tear fluid samples in capillary tubes and then assay the samples for glucose ex situ using standard laboratory instrumentation. Using this technique, there are
many reports demonstrating that tear glucose is higher in diabetic subjects than in healthy ones and that there effectively exists correlation of tear glucose and blood glucose. It should be noted
that the discrepancy of the correlation coefficient between blood glucose and tear glucose can be attributed to the different tear collection methods, for example, filter paper or microcapillary
methods. In [3] a profound review of several studies resumes the most important findings.
However even after 70 years of research, there are no clinical studies that have satisfactorily resolved the relationship between tear and blood glucose concentrations. Disagreements between reports
may not invalidate the correlation between tear and blood glucose because, regardless of the exact mechanism of glucose transport into tear fluid, the individual accuracy holds true for each set of
experimental conditions.
An alternative approach developed recently uses an in vivo glucose sensing method that can be placed in the tear canal and that therefore reduces variability due to probe extraction technique [4]. It
allows measurements to be carried out in situ. This amperometric sensor is comprised of three electrodes that are screen-printed on a flexible polyamide substrate which allows the sensor to be wound
into a tight roll that fits in the tear canal for in situ monitoring.
Definitely, integrating a glucose sensor into a contact lens would provide a way to continuously and reliably sense metabolites and especially glucose in tear fluid. Different ideas to implement such
a sensor have been proposed and are at present in different stages of development. They rely on placing a photonic sensor in a contact lens and envision a handheld readout unit for measuring the
signal. Thus far, holographic hydrogels and fluorescent indicators have been explored as glucose-responsive elements. In [5] a polarimetric glucose sensor for monitoring ocular glucose is developed.
There it is indicated that the time lag between blood glucose and anterior aqueous humor glucose concentrations was on average about five minutes. Another approach is based on a contact-lens-based
sensor [6, 7].
It is likely that contact-lens-based glucose sensors have great potential to realize continuous and noninvasive diabetes control, that is, contact lenses have applications beyond vision correction.
Luminescent/fluorescent contact-lens-based sensors represent a feasible technique because they require no electrodes or electric circuits. Further efforts are needed to improve the resolution and
sensitivity of the new device and to determine a physiologically relevant and baseline tear glucose concentration [8, 9].
Existing methods of fluorescent glucose sensing apply Fluorescence Resonance Energy Transfer (FRET) [10]. This method is based on the dual measure, that is, the FRET and fluorescence intensity
measurements. FRET is an inexpensive and very sensitive method to apply to molecule imaging. However, barriers to secure a feasible contact lens sensor include the photobleaching of fluorescence
molecules, low concentration of tear samples, low fluorescence intensity, and vision influence. In addition, one safety concern is that some harming substances may be released from the lens into the
1.2. Our Concept: Metamaterial-Based Biosensing
In the present contribution a revolutionary concept for tear glucose measurement is developed. This sensing is based on the use of metamaterials, that is, artificial materials with special
electromagnetic properties that do not occur naturally. In [11] a method how to manufacture such metamaterials is reported for the first time: a periodic structure design with unit cells much smaller
than the wavelength of the incident radiation leads to a specific electromagnetic response on a wide spectral range. Also based on this work, the concepts of perfect lens as well as cloaking are
developed in [12, 13]. Tailoring of optical properties using the plasmonic analogue of EIT offers the possibility to obtain sharp resonances in the transmittance profile of a material leading to
enhanced spectral features that can eventually be pushed to the limit of detecting single molecules [14]. Other designs such as plasmonic oligomers are also possible [15, 16]. They rely on the
formation of suitable sharp spectral Fano resonances [17].
Metamaterials are able to detect even minute changes in the dielectric properties of their environment, hence selectivity to a particular type of molecule has to be added. This is achieved by
covering the metamaterial with a glucose-sensitive hydrogel [18]. When using inverse opal photonic crystals, the optical diffraction changes upon glucose exposure [19]. In Figure 1 a schematic of our
proposed design is shown: a contact lens material supports a few nanometers of a gold-based metamaterial which is functionalized with glucose-sensitive hydrogel. This design is transparent in the
visible and near-infrared range and thus can be designed as contact lens to be inserted into the patient's eye. The readout is carried out by an external light-emitting diode (LED) in the infrared
(eye-safe range at wavelengths longer than 1.4μm) which is used as light source, and the reflected light is captured by a photodiode whose intensity response is evaluated. Signal postprocessing
stages based on regression methods allow the reliable estimation of the tear glucose content.
This new method has the potential to be extremely successful for noninvasive glucose sensing for several reasons.
(1) Glucose selectivity: this sensor does not rely on the rather poor optical differences between the glucose molecule and other substances contained in the surrounding fluid (blood stream, tear fluid, etc.), but rather on the ability of glucose to selectively change the refractive index of a specific material, that is, the hydrogel in the vicinity of the metamaterial.
(2) Sensitivity: because the metamaterial is sensitive to even minute changes in the refractive index (molecular changes in the hydrogel), the measurement can be performed in the range of physiological glucose concentrations in the tear fluid.
(3) Biocompatibility: the metamaterial is made of a several-nanometer-thick gold structure, transparent for the human eye and absolutely biocompatible due to the properties of noble metals. The hydrogel is commonly used for contact lenses and therefore well characterized. The optical readout is based on an eye-safe LED.
(4) Nondegrading: during the lifetime of the sensor (up to 24 hours), both the metamaterial and the hydrogel maintain their optical properties even when immersed in body fluid.
2. Methods
2.1. Metamaterials
The metamaterial structures are fabricated by electron beam lithography. For laboratory experiments, a 30–40 nm layer of gold is deposited on a quartz substrate (with an area on the order of square millimeters) using electron-beam evaporation.
Next, a negative photo resist is spin-coated on top of the substrate, allowing the desired structures to be defined by electron-beam lithography. After development of the resist, directed argon ion
beam etching is carried out to transfer the structure into the gold layer.
Multilayer designs can be achieved by combining this process with a stacking technique [20]. In this case, one starts with the evaporation of several gold alignment marks with a thickness of about
250nm using positive resist with subsequent gold evaporation and lift-off. The first layer can then be manufactured following exactly the procedure given for a single layer. Afterwards, a spacer
layer is applied by spin coating. The spacer currently consists of a hardenable photopolymer and can vary in height from ten to several hundreds of nanometers. Additional layers may be added by
repetition of those steps while accurate alignment between the layers is assured using the gold marks during the electron beam exposure.
2.2. Biosensing
In general, broadband electromagnetic radiation in the optical domain is used to investigate the respective properties of nanostructures in sensing applications. One possibility is the recording of
transmittance or reflectance spectra which exhibit characteristic dips and peaks. Due to the localized electric field in and around the metallic pattern, the resonance positions are highly sensitive
to changes of the electric permittivity or the refractive index, respectively, in the nearest vicinity of the plasmonic nanostructures. Exploiting this fact allows to monitor, for example, the
concentration of pure solutions on top of the structure by evaluating the shift of a distinct spectral feature [19].
However, such gold structures are not able to detect specific substances in an unfunctionalized fashion. To realize a chemically selective sensor, we have to assure that the changes in the refractive
index are exclusively caused by the desired analyte. For biological sensing, the existence of molecule pairs with strong affinity can be beneficial. Ranking among the strongest noncovalent
interactions known in nature, the biotin-streptavidin complex, for example, is a commonly used system for proof of concept experiments (see Figure 2). The vitamin biotin can be functionalized with a
thiol group by utilising polyethylene glycol as a spacer. This allows the whole molecule to bind to the gold nanostructures. If the structure is now rinsed with an analyte containing streptavidin,
the molecules will attach to the biotin and due to their presence affect the dielectric environment of the gold structure. This effect, and therefore the detectable change in the optical spectrum,
will remain even after washing away other substances that may have an impact on the measurement [21].
From a conceptual point of view, the method of embedding the functionalization into a hydrogel is similar. Hydrogels are polymer networks that, due to their hydrophilic properties, absorb a
considerable amount of water which causes substantial swelling. Lee et al. have shown that replacing several sites in the polymer chains with a molecule which will form a charged complex with a
glucose molecule establishes a relation between the swelling of the hydrogel and the glucose concentration in the surrounding water [18]. As those changes in volume also imply a varying refractive
index, they again are subject to detection by the metamaterial structure.
The resulting spectra in both cases can be analyzed in different ways. An important value is the so-called sensitivity
$$S = \frac{\Delta\lambda}{\Delta n},$$
which describes the shift in nm or eV of the resonance per refractive index unit (RIU). According to Sherry et al., the linewidth of the resonance also plays an important role [22]. Therefore one can define a figure of merit
$$\mathrm{FOM} = \frac{S}{\mathrm{FWHM}},$$
where FWHM denotes the full width at half maximum of the resonance. These values have in common that a spectrometer is needed to determine both. This rather complex and cost-intensive method is only applicable in scientific research. In commercial products it is more likely that intensity changes at a specific wavelength are evaluated. This leads to the intensity-dependent sensitivity and the related figure of merit
$$\mathrm{FOM}^{*} = \frac{\Delta I / I}{\Delta n},$$
describing the relative intensity change per refractive index unit.
2.3. Simulation Model
Scattering matrix theory was used to simulate the spectra of the metamaterial structures. This method which uses a Fourier modal decomposition of the electric and magnetic fields has been introduced
by Whittaker and Culshaw [23] and has later been extended and improved by Tikhodeev et al. [24] as well as recently by Weiss et al. [25].
The dielectric functions for the materials used to define the periodically repeated unit cell can be retrieved from a database or entered as parameters for the Drude model.
In the definition of the structure as well as in the calculations, the design is separated into single layers, beginning at the superstrate, down to the substrate, each homogeneous along the z-axis.
The first step is to solve Maxwell's equations for every layer.
The structured slab couples the incident light of frequency $\omega$ and in-plane wave vector $\mathbf{k}$ to all Bragg orders retrieved from Maxwell's equations with the same frequency, that is, to the wave vectors $\mathbf{k} + \mathbf{G}$, where $\mathbf{G}$ is a reciprocal lattice vector of magnitude $2\pi/d$ and $d$ is the lattice constant.
Hence, the S-matrix method is able to calculate the outbound harmonics from the system. The method is exact in the limit of infinitely many harmonics. In reality, only a limited number $N_G$ of lattice vectors is used for the calculation. Because the calculation time increases with $N_G$, computing power is the limiting factor that sets the number of harmonics used in practice.
The method can be accelerated and improved in accuracy by using adaptive spatial resolution and the customisation of the coordinate system, depending on the individual structure.
In the next step, the amplitudes of the waves in the single layers have to be concatenated. Therefore, the respective solutions of Maxwell's equations are separated into a set of eigenmodes propagating parallel to the z-axis. The amplitudes of the plane waves can then be written as vectors
$$\mathbf{a}(z) = \begin{pmatrix}\mathbf{a}^{+}(z)\\ \mathbf{a}^{-}(z)\end{pmatrix},$$
where all components heading in the positive (negative) z-direction are labelled with + (−). With the aid of a so-called transfer matrix $T$, those vectors are linked at different positions (z-values) within one layer:
$$\mathbf{a}(z_2) = T(z_2, z_1)\,\mathbf{a}(z_1).$$
The transition from one layer ($n$) to another ($n+1$) can be described similarly. In general, it would be possible to calculate the propagation of light in layered structures using the transfer matrix formalism alone. However, in case of evolving evanescent waves this method may fail numerically, which is the reason for using the scattering matrix algorithm. All amplitudes of waves incident on the sample, as well as the outbound waves, are combined into one vector each:
$$\mathbf{a}_{\mathrm{in}} = \begin{pmatrix}\mathbf{a}_{v}^{+}\\ \mathbf{a}_{s}^{-}\end{pmatrix},\qquad \mathbf{a}_{\mathrm{out}} = \begin{pmatrix}\mathbf{a}_{s}^{+}\\ \mathbf{a}_{v}^{-}\end{pmatrix}.$$
Here, the index $v$ means “vacuum” (above the sample), and $s$ means “substrate” (below the sample). The scattering matrix $S$ concatenates both vectors:
$$\mathbf{a}_{\mathrm{out}} = S\,\mathbf{a}_{\mathrm{in}}.$$
The whole S-matrix can be obtained by iteration, beginning with the unit matrix for zero layers and subsequently calculating the matrix for $n+1$ layers from that for $n$ layers with the aid of the (inverse) transfer matrix.
Using scattering matrix theory, it is possible to calculate reflectance, transmittance, extinction, and absorption spectra of metallic structures. Additionally, information about the electric and
magnetic field distribution can be obtained.
2.4. Regression Methods
The aim in regression is to find a functional connection between some input $x$ in input space $\mathcal{X}$ and some output $y$, that is, $y = f(x)$. Once this connection is established using some training data,
it is validated by applying the regression model on data that was not used during the training process. The validated model is then employed on unknown input data for the task of output prediction.
In this paper, the method of support vector machine regression (SVR) [26] is used.
Support vector machines emerged from the field of learning theory. They are constructed using training data. Compared to other learning methods, overfitting is avoided by implementing the paradigm of
structural risk minimization (SRM) [27].
The first step consists of defining Vapnik's ε-insensitive loss function L_ε(y_i, f(x_i)) = max(0, |y_i − f(x_i)| − ε), where (x_i, y_i) is the i-th pair of training data. This can be thought of as a punishment for the deviation of the estimated value f(x_i) from the given value y_i. The affine ansatz f(x) = ⟨w, x⟩ + b is made, where ⟨·,·⟩ denotes the scalar product. Implementing the SRM requires minimization of the weighted sum of the capacity of the machine, ‖w‖², and the training error, thus leading to a constrained optimization problem. Slack variables (ξ_i and ξ_i*) are introduced to account for outliers [26]. This leads to the Lagrangian function in primal space, which has to be minimized with respect to the primal variables w, b, ξ_i, ξ_i* and maximized with respect to the dual variables α_i and α_i*, which are the Lagrange multipliers.
Plugging in the necessary conditions for a saddle point yields the Lagrangian function in dual space, which has to be maximized with respect to the dual variables subject to the corresponding constraints. In order to achieve nonlinear regression, a mapping Φ from input space to feature space is introduced; usually the feature space has a much higher dimension than the input space. The nonlinear regression in input space corresponds to a linear regression in feature space. Instead of actually performing the mapping, which might be computationally expensive, the so-called kernel trick is applied. It relies on the fact that the training data only occurs in the form of scalar products, and that scalar products in feature space can be calculated in input space using the kernel according to k(x, x′) = ⟨Φ(x), Φ(x′)⟩.
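As a small illustration of the ingredients above (the ε-insensitive tube, the C-weighted trade-off, and the RBF kernel), here is a minimal ε-SVR fit in Python with scikit-learn; the data is synthetic, not from this study:

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80).reshape(-1, 1)             # input space
y = np.sin(2 * np.pi * x).ravel() + 0.1 * rng.standard_normal(80)

# epsilon sets the width of the insensitive tube; C weights the training
# error against the capacity term in the SRM objective described above.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=2.0)
model.fit(x, y)

print("support vectors:", len(model.support_))   # points outside the tube
print("f(0.25) =", model.predict([[0.25]])[0])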
3. Results
3.1. Simulated Reflectance Spectra
For a first overview, we simulated spectra for a broad concentration range of aqueous glucose solutions on top of different metamaterials, namely, a simple plasmonic dipole and a stacked EIT-type metamaterial. Starting with pure water, we added increasing weight percentages of glucose, corresponding to concentrations from about 40 mg/dL up to 22 g/dL.
Our EIT metamaterial uses a 60 nm displacement of the dipole bar from the central symmetry axis. The length of the dipole bar is 340 nm, whereas the quadrupole bar is 345 nm long. Their width is 80 nm, the gold thickness is 40 nm, and the spacer thickness is 70 nm.
The simple dipole structure shows one distinct peak, whereas the coupled dipole and quadrupole antenna develops an additional reflectance dip in the center of the broad peak (Figures 3(a)–3(d)).
Highly confined electric fields are responsible for sensitivity and the possibility of extremely small sensing volumes (Figure 3(e)). The resulting sensitivities are and . The is 6.0, and the is 9.5.
3.2. Sensitivity Analysis
This section deals with the identification of those parameters and noise contributions that may have influence on the expected measurement results. To this end, the metamaterial simulation tool is
extended by a model of the signal processing units containing noise sources and nonstationary parameter sets. The block diagram used for the simulation is depicted in Figure 4. The source consists of
a modulated steering signal that drives the laser diode, from which both the output power and the actual wavelength are measured. Laser diodes show some deviation from their nominal wavelength due to
the manufacturing processes, and their wavelength also varies significantly with temperature.
The contact lens block contains the embedded metamaterial spectra, which functionally attenuate the output power to the received reflected power, depending on the actual wavelength.
The reflected laser ray is detected by the photodiode. Figure 5 shows a scheme of the circuitry which is used to amplify the current of the photodiode [28]. A characteristic feature for this kind of
feedback amplifier is the virtual ground at the node of the inverting input terminal which enables a higher bandwidth [29]. The bias resistor reduces the effect of the bias current, which is nonzero
for any real operational amplifier. In the simulation model, the current of the photodiode is converted to an output voltage using unit amplification.
Finally, demodulation and filtering are performed. In order to avoid higher frequency noise contributions and to detect the steady state value, low-pass filtering is performed and its output signal
is then evaluated.
The selected wavelengths represent the points of steepest slope in the respective spectra: one fixed wavelength is used for the dipole metamaterial and another for the EIT metamaterial.
3.2.1. Noise Sources
Considering this simulation model, the noise sources are analyzed qualitatively in order to find out which of them are relevant.
According to [28, 30], the total noise in a photodiode is the sum of its thermal noise (Johnson-Nyquist noise), shot noise, 1/f noise, and generation-recombination noise.
The thermal noise and the shot noise are calculated according to ⟨i_th²⟩ = 4 k_B T Δf / R and ⟨i_shot²⟩ = 2 q I Δf, with R the real part of the impedance, Boltzmann's constant k_B, the temperature T in kelvin, the electron charge q, the diode current I, and the noise bandwidth Δf. Both thermal and shot noise are modelled as white noise. They form the 0 dB line of the noise filter (Figure 6), which takes into account 1/f noise and generation-recombination noise. In the presented simulations, a fixed sampling frequency f_s is used; the Nyquist frequency is therefore f_s/2.
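For orientation, the two white-noise terms can be evaluated directly from these formulas. The component values below are assumptions for illustration, not the values of the actual circuit:

import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19    # electron charge, C

T = 300.0    # temperature, K
R = 1e6      # real part of the impedance (e.g. feedback resistor), ohm
I = 10e-6    # photodiode current, A
B = 1e3      # noise bandwidth, Hz

i_thermal = np.sqrt(4 * k_B * T * B / R)   # Johnson-Nyquist noise current
i_shot = np.sqrt(2 * q * I * B)            # shot noise current

print(f"thermal: {i_thermal:.3e} A rms, shot: {i_shot:.3e} A rms")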
According to the modelled noise sources, the noise power of the photodiode decreases as the frequency increases. Exactly for this reason, the laser signal is modulated at a carrier frequency f_mod chosen for the presented simulations, thus taking advantage of the noise reduction at higher frequencies. This fact becomes apparent when analyzing the power spectral density (PSD) of the photodiode current: almost the entire signal power is contained within the band around f_mod (see Figure 7), where the noise can be disregarded. Thus we conclude that the influence of the photodiode noise can be neglected with regard to the glucose measurement results.
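The benefit of moving the signal up to f_mod can be reproduced with a few lines of synchronous demodulation. All frequencies and amplitudes below are placeholder choices, not the simulation's parameters:

import numpy as np

fs, f_mod, dur = 100e3, 10e3, 0.1        # sample rate, carrier, duration (s)
t = np.arange(0.0, dur, 1.0 / fs)
amplitude = 0.7                          # stands in for the reflected power

rng = np.random.default_rng(1)
carrier = np.sin(2 * np.pi * f_mod * t)
noise = 0.5 * np.sin(2 * np.pi * 13 * t) + 0.1 * rng.standard_normal(t.size)

# Multiply by the reference and average (low-pass): the product's DC term
# is amplitude/2, while noise away from f_mod averages out.
mixed = (amplitude * carrier + noise) * carrier
estimate = 2 * mixed.mean()

print(f"recovered amplitude: {estimate:.3f} (true value {amplitude})")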
3.2.2. Parameter Variation
Significant parameter variation is expected with regard to temperature and laser wavelength.
(i) Temperature
A change in temperature modifies the wavelength of the laser and thus is considered to be a crucial parameter.
The influence of the temperature is investigated for a constant concentration and for both metamaterial structures (dipole and EIT). The results are depicted in Figure 8: the normalized photodiode current shows a clear sensitivity to temperature deviations for both the dipole and the EIT metamaterial. Therefore, the temperature drift must be taken into account when evaluating measurements.
(ii) Wavelength
The deviation of the laser wavelength from its nominal value is investigated in relation to the photodiode current. Figure 9 shows the photodiode current over the wavelength deviation for both the dipole and the EIT metamaterial at a constant concentration, and the resulting sensitivities to wavelength deviations are determined for the two structures.
Thus, the laser diode wavelength drift is an even more significant parameter than temperature.
3.2.3. Glucose Concentration Sensitivity
Finally, an evaluation of the measurement sensitivity is performed. In Figure 10, the photodiode current is shown as a function of the glucose concentration for both the dipole metamaterial and the EIT metamaterial. One can observe that the dynamic range of the current is larger for the EIT metamaterial due to the steeper slope; the measurement sensitivity is correspondingly higher for the EIT metamaterial than for the dipole metamaterial.
3.3. Estimation Using Support Vector Regression
The analysis of measurement errors in glucose monitoring systems presents a particularly troublesome problem, because the importance (that is, the clinical consequence) of any particular error
depends on the absolute value of both the reference and measured values and not just on the percentage of deviation. Moreover, this dependence is not easily described by any simple mathematical
relationship. Although Error Grid Analysis (EGA) was introduced in the mid-1980s [31], an evaluation based on standardized signal processing and statistical tools is more meaningful for a preclinical
analysis, which suits our purpose.
In the presented systems the glucose level concentration corresponds to the concentration used in the spectrum simulation. In order to obtain a predicted concentration, support vector regression is
employed. The training data is the simulated photodiode current as independent variable and the associated glucose concentration as dependent variable. The results are validated using k-fold cross-validation.
Support vector regression will also be employed in the actual measurement device. In that case, the training data consists of the measured photodiode current as independent variable as well as the
associated glucose concentration as dependent variable. Given a measured photodiode current, the SVR is used to predict the corresponding glucose level.
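A sketch of this training and k-fold validation step, in either setting, is given below. The current/concentration pairs are synthetic stand-ins; in practice they would come from the simulated or measured photodiode currents:

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)
conc = np.linspace(40.0, 600.0, 60)                     # glucose, mg/dL (assumed range)
current = 1.0 / (1.0 + np.exp(-(conc - 300.0) / 80.0))  # made-up monotone response
current += 0.005 * rng.standard_normal(conc.size)

X = current.reshape(-1, 1)   # independent variable: photodiode current
y = conc                     # dependent variable: glucose concentration

svr = SVR(kernel="rbf", C=100.0, epsilon=1.0, gamma="scale")
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(svr, X, y, cv=cv, scoring="neg_mean_absolute_error")
print("5-fold mean absolute error (mg/dL):", -scores.mean())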
Two different kernels are employed for the SVR [26]: (1) the Gaussian radial basis function kernel, and (2) the complete polynomial kernel of degree d.
The first simulations are carried out at the nominal temperature and the nominal wavelength of the respective laser; see Figure 11. The relative error of the estimated glucose concentration, given in percent, is depicted over the respective glucose concentration. The polynomial kernel of degree 1 corresponds to linear regression. It turns out that the error is very large in this case. That motivates the use of nonlinear support vector regression. The Gaussian radial basis function kernel yields estimation errors below 2% for physiological concentrations.
Next, the influence of temperature variation is investigated; see Figure 12. The Gaussian radial basis function kernel is used in each simulation run, with the temperature varied starting from 25°C.
Finally, simulations are carried out varying the wavelength deviation (Figure 13). Once again, the Gaussian radial basis function kernel is employed.
4. Discussion
4.1. Metamaterial Shape
The metamaterial shape (dipole or EIT) plays a key role for the available maximum slope in the spectrum. A steeper slope in turn leads to a broader dynamic range of the photodiode current. Therefore,
the use of the EIT shape is preferable.
When comparing simple dipole plasmonic structures with plasmonic EIT sensors, we find that in terms of concentration sensitivity, the EIT concept is superior by at least a factor of two over the
simple dipole. As a drawback, the EIT concept, due to its steeper resonances, is also more prone to wavelength shifts caused by temperature variations. However, this problem can be circumvented by using a temperature stabilization scheme for the laser diode.
On top of that, the EIT shape offers four specific wavelengths with a large slope, compared to the dipole shape with only two points. This increases the flexibility for choosing a specific
wavelength, as not every wavelength is available in commercial lasers.
4.2. Statistical Evaluation
Special care has to be taken when the temperature or the wavelength differ from their nominal values. As the SVR does not contain those parameters, the prediction deteriorates. The relative errors
for the determination of the glucose concentration are presented in Section 3.3. They become very large even for small variations of those parameters.
In order to overcome this issue, temperature will be included as independent variable in the SVR in future work. On top of that, the system will be calibrated in order to correct the wavelength
deviation of a specific laser.
5. Conclusion and Further Work
5.1. Proof of Concept
The present paper demonstrates a novel concept for in vivo sensing of glucose using metamaterials in combination with automatic learning systems.
The novelty of the approach lies in the fact that this sensor does not rely on the rather weak optical differences between the glucose molecule and other substances contained in the surrounding fluid (blood stream, tear fluid, etc.), but rather on the ability of glucose to selectively change the refractive index environment of a specific metamaterial.
High sensitivity of our detection scheme is ensured because metamaterials are able to detect even minute changes in the dielectric properties of their environment. The basic concept relies on a
contact lens material that supports a few nanometers of a gold-based metamaterial which is functionalized with glucose-sensitive hydrogel. This design is transparent in the visible and near-infrared
range and thus can be designed as contact lens to be inserted into the patient's eye. The readout is carried out by an external LED, and the reflected light is captured by a photodiode whose
intensity response is evaluated. Signal postprocessing stages based on regression methods allow the reliable estimation of the tear glucose content.
A complex simulation environment is built to evaluate the main signal contributions together with the most important noise sources as well as the most relevant parameter uncertainties. The simulation
results have shown that estimation errors below 2% at physiological concentrations are possible.
5.2. Functionalization with Glucose-Sensitive Hydrogel for Contact Lens Implementation
The plasmonic sensor concept has proven to be suitable for glucose detection at physiological concentrations.
In the future, we are going to implement a glucose selective layer on the plasmonic structure. This includes a functionalization layer with a glucose-specific hydrogel [18, 32].
The hydrogel allows only glucose to penetrate the functionalization layer and not other chemical agents that are present in the tear fluid.
Furthermore, the hydrogel is quite biocompatible, in particular for the human eye environment. In fact, soft contact lenses already use those kinds of hydrogels as surface layers.
The authors would like to thank the Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg as well as BMBF and DFG (Open Access Publishing Fonds) for granting financial support.
1. World Health Organization, Diabetes. Fact Sheet N 312, WHO, Geneva, Switzerland, 2010, http://www.who.int/mediacentre/factsheets/fs312/en/.
2. Y. C. Shen, A. Davies, E. Linfield, T. Elsey, P. Taday, and D. Arnone, “The use of fourier-transform infrared spectroscopy for the quantitative determination of glucose concentration in whole blood,” Physics in Medicine and Biology, vol. 48, no. 13, pp. 2023–2032, 2003.
3. J. Zhang, W. Hodge, C. Hutnick, and X. Wang, “Noninvasive diagnostic devices for diabetes through measuring tear glucose,” Journal of Diabetes Science and Technology, vol. 5, no. 1, pp. 166–172.
4. J. Wang, “In vivo glucose monitoring: towards 'Sense and Act' feedback-loop individualized medical systems,” Talanta, vol. 75, no. 3, pp. 636–641, 2008.
5. B. H. Malik and G. L. Coté, “Modeling the corneal birefringence of the eye toward the development of a polarimetric glucose sensor,” Journal of Biomedical Optics, vol. 15, no. 3, pp. 037012–037018, 2010.
6. C. O'Donnell, N. Efron, and A. J. M. Boulton, “A prospective study of contact lens wear in diabetes mellitus,” Ophthalmic and Physiological Optics, vol. 21, no. 3, pp. 127–138, 2001.
7. W. March, B. Long, W. Hofmann, D. Keys, and C. McKenney, “Safety of contact lenses in patients with diabetes,” Diabetes Technology and Therapeutics, vol. 6, no. 1, pp. 49–52, 2004.
8. V. L. Alexeev, S. Das, D. N. Finegold, and S. A. Asher, “Photonic crystal glucose-sensing material for noninvasive monitoring of glucose in tear fluid,” Clinical Chemistry, vol. 50, no. 12, pp. 2353–2360, 2004.
9. R. Badugu, J. R. Lakowicz, and C. D. Geddes, “Wavelength-ratiometric probes for the selective detection of fluoride based on the 6-aminoquinolinium nucleus and boronic acid moiety,” Journal of Fluorescence, vol. 14, no. 6, pp. 693–703, 2004.
10. M. R. G. A. Ballerstadt, C. Evans, R. McNichols, and A. Gowda, “Concanavalin a for in vivo glucose sensing: a biotoxicity review,” Biosensors and Bioelectronics, vol. 22, no. 2, pp. 275–284, 2006.
11. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, “Magnetism from conductors and enhanced nonlinear phenomena,” IEEE Transactions on MTT, vol. 47, no. 11, pp. 2075–2084, 1999.
12. J. B. Pendry, “Negative refraction makes a perfect lens,” Physical Review Letters, vol. 85, no. 18, pp. 3966–3969, 2000.
13. J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling electromagnetic fields,” Science, vol. 312, no. 5781, pp. 1780–1782, 2006.
14. N. Liu, T. Weiss, J. Kästel, M. Fleischhauer, T. Pfau, and H. Giessen, “Plasmonic analogue of electromagnetically induced transparency at the Drude damping limit,” Nature Materials, vol. 8, no. 9, pp. 758–762, 2009.
15. M. Hentschel, M. Saliba, R. Vogelgesang, H. Giessen, A. P. Alivisatos, and N. Liu, “Transition from isolated to collective modes in plasmonic oligomers,” Nano Letters, vol. 10, no. 7, pp. 2721–2726, 2010.
16. M. Hentschel, D. Dregely, R. Vogelgesang, H. Giessen, and N. Liu, “Plasmonic oligomers: the role of individual particles in collective behavior,” ACS Nano, vol. 5, no. 3, pp. 2042–2050, 2011.
17. B. Lukyanchuk, N. I. Zheludev, S. A. Maier et al., “The Fano resonance in plasmonic nanostructures and metamaterials,” Nature Materials, vol. 9, no. 9, pp. 707–715, 2010.
18. Y.-J. Lee, S. A. Pruzinsky, and P. V. Braun, “Glucose-sensitive inverse opal hydrogels: analysis of optical diffraction response,” Langmuir, vol. 20, no. 8, pp. 3096–3106, 2004.
19. N. Liu, T. Weiss, M. Mesch et al., “Planar metamaterial analogue of electromagnetically induced transparency for plasmonic sensing,” Nano Letters, vol. 10, no. 4, pp. 1103–1107, 2010.
20. N. Liu, H. Guo, L. Fu, S. Kaiser, H. Schweizer, and H. Giessen, “Realization of three-dimensional photonic metamaterials at optical frequencies,” Nature Materials, vol. 7, no. 1, pp. 31–37, 2008.
21. G. Raschke, S. Kowarik, T. Franzl et al., “Biomolecular recognition based on single gold nanoparticle light scattering,” Nano Letters, vol. 3, no. 7, pp. 935–938, 2003.
22. L. J. Sherry, R. Jin, C. A. Mirkin, G. C. Schatz, and R. P. van Duyne, “Localized surface plasmon resonance spectroscopy of single silver triangular nanoprisms,” Nano Letters, vol. 6, no. 9, pp. 2060–2065, 2006.
23. D. M. Whittaker and I. S. Culshaw, “Scattering-matrix treatment of patterned multilayer photonic structures,” Physical Review B, vol. 60, no. 4, pp. 2610–2618, 1999.
24. S. G. Tikhodeev, A. L. Yablonskii, E. A. Muljarov, N. A. Gippius, and T. Ishihara, “Quasiguided modes and optical properties of photonic crystal slabs,” Physical Review B, vol. 66, no. 4, Article ID 045102, 2002.
25. T. Weiss, G. Granet, N. A. Gippius, S. G. Tikhodeev, and H. Giessen, “Matched coordinates and adaptive spatial resolution in the Fourier modal method,” Optics Express, vol. 17, no. 10, pp. 8051–8061, 2009.
26. B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, UK, 2002.
27. L. Wang, Support Vector Machines: Theory and Applications, Springer, New York, NY, USA, 2005.
28. C. D. Motchenbacher and J. A. Connelly, Low-Noise Electronic System Design, John Wiley & Sons, New York, NY, USA, 1993.
29. U. Tietze and C. Schenk, Halbleiterschaltungstechnik, Springer, New York, NY, USA, 2002.
30. F. N. Hooge, “1/f noise sources,” IEEE Transactions on Electron Devices, vol. 41, no. 11, pp. 1926–1935, 1994.
31. W. L. Clarke, D. Cox, L. A. Gonder-Frederick, W. Carter, and S. L. Pohl, “Evaluating clinical accuracy of systems for self-monitoring of blood glucose,” Diabetes Care, vol. 10, no. 5, pp. 622–628, 1987.
32. S. A. Asher, V. L. Alexeev, A. V. Goponenko et al., “Photonic crystal carbohydrate sensors: low ionic strength sugar sensing,” Journal of the American Chemical Society, vol. 125, no. 11, pp. 3322–3329, 2003.
Blocks of Defect Zero and Products of Elements of Order p
Murray, John C. (1999) Blocks of Defect Zero and Products of Elements of Order p. Journal of Algebra, 214 (2). pp. 385-399. ISSN 0021-8693
Suppose that G is a finite group and that F is a field of characteristic p > 0 which is a splitting field for all subgroups of G. Let e₀ be the sum of the block idempotents of defect zero in FG, and let Ω be the set of solutions to g^p = 1 in G. We show that e₀ ∈ (Ω⁺)² when p is odd, and e₀ ∈ (Ω⁺)³ when p = 2. In the latter case (Ω⁺)² = R⁺, where R is the set of real elements of 2-defect zero, so e₀ ∈ Ω⁺R⁺ = (R⁺)². We also show that e₀ ∈ Ω⁺Ω₄⁺ = (Ω₄⁺)² when p = 2, where Ω₄ is the set of solutions to g⁴ = 1. These results give us various criteria for the existence of p-blocks of defect zero.
Philadelphia Ndc, PA
Find a Philadelphia Ndc, PA Precalculus Tutor
...I enjoy helping students identify the easiest path to a solution and the steps they should employ to get there. My first teaching job was in a school that specifically serviced special needs
students. Each teacher received special training on how to aide students with a variety of differences, including ADD and ADHD.
58 Subjects: including precalculus, reading, chemistry, calculus
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including precalculus, calculus, physics, ACT Math
...I have obtained a bachelor's degree in mathematics from Rutgers University. One of the classes I took there was an upper level geometry class, which dealt with the subject on a level much more
advanced than one finds in high school (I had to write a paper for that class, that I think was about 1...
16 Subjects: including precalculus, English, calculus, physics
...With a physics and engineering background, I have the knowledge of physics fundamentals, but as a tutor I can walk the student through a concept, show them the steps to solve a problem, and
help them master the material needed to get through their class. As a tutor with a primary focus in math a...
9 Subjects: including precalculus, calculus, physics, geometry
...I have prepared high school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of the GRE, and have helped many other students with math skills
ranging from basic arithmetic all the way up to Calculus 3 and basic linear algebra. In my free time, I en...
22 Subjects: including precalculus, calculus, geometry, statistics
Turn and Face the Strange Changes: Does Throwing Changeups Help Pitchers Sustain Lower BABIP?
Earlier today (or yesterday, depending on which timezone you're in and when you're reading this), woodman663 posted a really interesting article demonstrating that changeup specialists may have a
predilection for sustaining low babip. Many of the examples of pitchers that he looked at (for example, Ted Lilly) were extreme flyball pitchers. Since flyballs are more likely to turn into outs than
grounders, it forces us to disentangle these two factors from one another.
Woodman and I have talked back and forth on the piece a bit and I suggested that we run some statistical analyses so that we could tease out whether the changeup effect was truly meaningful or if it
was just an artifact of the flyball effect. I said much of what is in this post in the comments section, but here it is full-blown and with the output (which is important, in case I'm making mistakes
here -- please let me know if you notice any).
I included all starting pitchers with 300+ innings since 2009 and used R v2.12.1 to fit a linear model for babip to fixed effects of flyball-rate, strikeout-rate, changeup frequency, total value by
linear weights of all changeups, and value by linear weights per changeup. At Woodman's suggestion (and as justified in the body of the post), I included splitters as changeups.
Keep in mind that the p-values refer to whether the evidence suggests that a factor is significant (the lower the p-value, the more confident we can be that the effect is real) and the R-squared
values refer to how well the model describes the variance (the higher the R-squared value, the better the description).
The model accounted for about one quarter of the variance in pitcher babip. After testing the significance of effects, I also used the Lindeman, Merenda and Gold (lmg) method to determine the
relative importances of contributions from each factor. Here is the output:
lm(formula = babip$BABIP ~ babip$fly + babip$K + babip$chfreq +
babip$chtot + babip$chperc, data = babip)
Min 1Q Median 3Q Max
-0.040995 -0.007808 -0.000659 0.008768 0.032989
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.322e-01 8.910e-03 37.283 < 2e-16 ***
babip$fly -1.213e-01 2.342e-02 -5.179 9.2e-07 ***
babip$K 1.609e-02 3.379e-02 0.476 0.6348
babip$chfreq 9.532e-03 2.125e-02 0.448 0.6546
babip$chtot -5.992e-05 1.570e-04 -0.382 0.7034
babip$chperc -2.969e-03 1.544e-03 -1.924 0.0568 .
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.01358 on 119 degrees of freedom
(1 observation deleted due to missingness)
Multiple R-squared: 0.2564, Adjusted R-squared: 0.2251
F-statistic: 8.206 on 5 and 119 DF, p-value: 1.100e-06
The "1 observation deleted due to missingness" (what a great word, by the way), was Tommy Hanson, who has zero changeups and splitters on record. Anyway, what we find is that the effects of
flyball-rate are highly significant (p < 2 × 10**-16). The effects of value per changeup are moderately significant (p = 0.0568). None of the other effects (including K%!) were significant. A model
including only those two factors actually fit the data slightly better than the initial model, which also included k-rate, changeup frequency and total changeup value. Here is the output for that
> summary(fit2)
lm(formula = babip$BABIP ~ babip$fly + babip$chperc, data = babip)
Min 1Q Median 3Q Max
-0.040973 -0.008040 -0.001156 0.008629 0.032851
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.3340417 0.0078308 42.657 < 2e-16 ***
babip$fly -0.1159427 0.0213364 -5.434 2.87e-07 ***
babip$chperc -0.0031551 0.0008792 -3.589 0.00048 ***
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.01344 on 122 degrees of freedom
(1 observation deleted due to missingness)
Multiple R-squared: 0.254, Adjusted R-squared: 0.2418
F-statistic: 20.77 on 2 and 122 DF, p-value: 1.726e-08
Next, I used the Lindeman, Merenda and Gold (lmg) method to describe the relative importances of each factor. Here is the output for the first model:
> calc.relimp(fit1,type=c("lmg","last","first","pratt"), rela=TRUE)
Response variable: babip$BABIP
Total response variance: 0.0002381301
Analysis based on 125 observations
5 Regressors:
babip$fly babip$K babip$chfreq babip$chtot babip$chperc
Proportion of variance explained by model: 25.64%
Metrics are normalized to sum to 100% (rela=TRUE).
Relative importance metrics:
lmg last first pratt
babip$fly 0.66176163 0.862561629 0.48939293 0.72622533
babip$K 0.02597355 0.007290960 0.04968005 -0.02154839
babip$chfreq 0.05245325 0.006468146 0.10981283 -0.03433426
babip$chtot 0.08818158 0.004685193 0.14603247 0.05044396
babip$chperc 0.17162998 0.118994071 0.20508171 0.27921336
Average coefficients for different model sizes:
1X 2Xs 3Xs 4Xs
babip$fly -0.1141989299 -0.1126788278 -0.1150599466 -1.190098e-01
babip$K -0.0518129769 -0.0316319422 -0.0157599046 1.072460e-03
babip$chfreq -0.0425860063 -0.0276718376 -0.0142778771 -1.016283e-03
babip$chtot -0.0002423128 -0.0001693865 -0.0001124105 -7.509733e-05
babip$chperc -0.0030463088 -0.0028538654 -0.0028884462 -3.005977e-03
babip$fly -0.1213197877
babip$K 0.0160889358
babip$chfreq 0.0095322944
babip$chtot -0.0000599228
babip$chperc -0.0029691976
In terms of relative importance, flyball-rate was most important but the changeup inputs made important contributions to the model as well. K-rate made the least important contribution (just 2%
relative importance). We can either combine the relative contributions of the changeups here or use this method to calculate relative importances for our second model. Here is the output for the
second model:
> calc.relimp(fit2,type=c("lmg","last","first","pratt"), rela=TRUE)
Response variable: babip$BABIP
Total response variance: 0.0002381301
Analysis based on 125 observations
2 Regressors:
babip$fly babip$chperc
Proportion of variance explained by model: 25.4%
Metrics are normalized to sum to 100% (rela=TRUE).
Relative importance metrics:
lmg last first pratt
babip$fly 0.7004244 0.6963282 0.7046952 0.7005284
babip$chperc 0.2995756 0.3036718 0.2953048 0.2994716
Average coefficients for different model sizes:
1X 2Xs
babip$fly -0.114198930 -0.115942720
babip$chperc -0.003046309 -0.003155121
Basically, this tells us that flyball-rate accounts for about 70% of the usefulness of the model and changeups account for about 30% its usefulness.
On the overall, according to the methods and models described above, flyball-rate accounts for about 17.8% of pitcher babip variability. The total contributions of per pitch changeup value, total
changeup value, and changeup frequency account for about 7.6% of pitcher babip variability.
Of course, this method has a critical flaw. The problem with using linear weights pitch value data is that those linear weights values are affected by BABIP, so they aren't independent of one
another. However, changeup frequency should be independent of babip, so we can use a simple model that looks only at flyball-rate and changeup frequency. Here is the output:
> summary(fit2)
lm(formula = babip$BABIP ~ babip$fly + babip$chfreq, data = babip)
Min 1Q Median 3Q Max
-0.039630 -0.009275 -0.000403 0.009383 0.032987
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.333697 0.008176 40.813 < 2e-16 ***
babip$fly -0.107534 0.022798 -4.717 6.44e-06 ***
babip$chfreq -0.024325 0.017948 -1.355 0.178
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.01402 on 122 degrees of freedom
(1 observation deleted due to missingness)
Multiple R-squared: 0.1875, Adjusted R-squared: 0.1742
F-statistic: 14.08 on 2 and 122 DF, p-value: 3.158e-06
As we can see, taking the linear weight values out of the model and using only changeup-frequency weakens the model quite a bit and does not demonstrate as clear a relationship between changeups and
babip. We can also look at the relative importance of each factor in the model using the lmg method described earlier:
> calc.relimp(fit2,type=c("lmg","last","first","pratt"), rela=TRUE)
Response variable: babip$BABIP
Total response variance: 0.0002381301
Analysis based on 125 observations

2 Regressors:
babip$fly babip$chfreq
Proportion of variance explained by model: 18.75%
Metrics are normalized to sum to 100% (rela=TRUE).

Relative importance metrics:
                    lmg      last     first     pratt
babip$fly    0.92373356 0.8167360 0.8801962
babip$chfreq 0.07626644 0.1832640 0.1198038

Average coefficients for different model sizes:
                      1X         2Xs
babip$fly    -0.11419893 -0.10753360
babip$chfreq -0.04258601 -0.02432455
These values merely confirm that the influence of flyball-rate is still relatively much more important than the influence of changeup frequency (which may still be a somewhat important factor).
So this more conservative approach, which excludes linear weights, does not sufficiently demonstrate a significant relationship. The non-significant relationship demonstrated by this conservative
approach suggests that changeup frequency may account for about 2.5% of pitcher babip variance.
Overall, K-rate is extremely unlikely to be a significant factor and, even if it were, it would be an extremely unimportant one, accounting for only about 0.5% of pitcher babip variance. As a
side-note, this also serves as further evidence that there are serious flaws in the calculation of SIERA. I propose that SIERA should be reconstructed so as to include the effects of flyball-rate,
NOT K-rate, on babip. Essentially, the only reason it works slightly better than xFIP or FIP is because it uses K-rate as a proxy for flyball-rate. Since flyball-rate is easily measured and batted
ball data are readily available, there's no reason to proxy flyball-rate.
So what do you all think? What are some other factors we can test for effects on babip?
Thanks to David Bowie and Woodman663!
inkscape 2D - blender 3d
Working on a 3D letter design, as mentioned before in another
As simple as designing a serif letter can get, it became complicated instead.
That is, because I figured out to use a 3D shape as a pen tool to draw the letter as a calligraphy.
Here are two renders of the current version of the letter:
Since then, posted a thread on blenderartists
if they could give some advices on some modelling problems that came up.
As for creating a clean model with a nice topology, this method seems to work:
Drawing iso-lines -horizontal cutlines- of the 3D pen shape,
using each cut line to draw separate letters on the same strokes,
and putting them together in blender to have a 3D model.
Here is a render of that 3D pen shape:
Here are some dimensions on it to make it clearer:
Here is how the strokes should be constructed:
I could construct some cut lines in inkscape described in that previous topic, and asked if anyone had an idea on using each as a pen shape.
Now that it hadn't been answered, here is what I came up with recently:
Using one of the cut-lines, tried to draw a stroke.
This first one is drawn with the motion extension.
As a path curve would be broke down to small straight segments, the overall stroke would look exactly like this.
A straight segment, then a small part of the edge of the pen shape, then another straight segment, and so on.
I would need a circle arc for the axis of the stroke,
so even steps in the rotation, even segment length seemed reasonable.
Now that the overall shape was done, I was thinking of you could always chose a smaller segment length,
the part of the pen shape would be less and less.
Resulting in a curved line, and in an exact point, where a pen shape would be tangent to the edge of the overall stroke.
This wild guess is depicted here:
These are two svg-s, so that you can open it in inkscape for a closer look.
-On a side note, how awesome is that 3D rod effect?
Well, if gradient meshes will be available... will it be implemented?-
So back to the point: now for that discover in the stroke structure, on the tangents,
the cut lines should have all their nodes preferably with exact tangents.
Not that it would be too hard to construct from the cutlines I already have, but
impossible to make them clean:
by rephrasing the path to nodes with the right tangents, they appeared to be in a bit too random a position relative to each other.
I would need to construct clean cutlines of the pen shape, which was put together from toruses.
Modelling, manual constructing couldn't help, maybe mathematics -and scripting?- could help.
As it appears, the cut lines are not ordinary curves; in fact they are famous enough to have a name of their own.
Namely, Cassini ovals.
The Cassini ovals are a family of quartic curves, also called Cassini ellipses, described by a point such that the product of its distances from two fixed points a distance 2a apart is a constant.
Haven't found a parametric way to describe them yet, but there are some sweet formulas described on that site.
So basically, it is now a job of connecting the two toruses' data to the function of a quartic plane curve.
Then, construct the points with the exact tangents.
After that, place every pen shape on the stroke's axes, and connect them for a hull.
This would be enough to start a good 3D model.
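For reference, sampling clean nodes (and unit tangents) of a Cassini oval is straightforward from its polar form, r² = a²·cos(2θ) + sqrt(b⁴ − a⁴·sin²(2θ)). Here is a quick Python/numpy sketch I would start from; the a and b values are placeholders that would come from the torus cut:

import numpy as np

a, b = 1.0, 1.2   # half focal distance and product constant; b > a: one loop
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

# polar form of the Cassini oval, outer (+) branch
r = np.sqrt(a**2 * np.cos(2 * theta)
            + np.sqrt(b**4 - a**4 * np.sin(2 * theta)**2))
x, y = r * np.cos(theta), r * np.sin(theta)

# unit tangents from a centered finite difference of (x, y) over theta
tangent = np.stack([np.gradient(x, theta), np.gradient(y, theta)], axis=1)
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)

for i in range(0, 360, 90):   # nodes plus tangents, ready for a clean path
    print(f"node ({x[i]:+.3f}, {y[i]:+.3f})  "
          f"tangent ({tangent[i, 0]:+.3f}, {tangent[i, 1]:+.3f})")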
Any tips?
-Hope there will be such tool in inkscape as in gimp, that you can put a shape along a path and use it as a pen. Suggestion on the coding?
Last edited by Lazur URH on Tue May 07, 2013 3:05 am, edited 1 time in total.
Re: inkscape 2D - blender 3d
Not so sure if it would be really needed as the curved stroke is the only part that I should use this kind of cut-line method,
where the small errors from the recent iso-lines wouldn't show up that much.
I understand the limitation that these are not CAD programs,
but come on, how is it that you can draw more accurate things by hand when zoomed in than what the program calculates?
Re: inkscape 2D - blender 3d
These things are so parametric, might make a mock-up for that suggestion out of it: http://www.inkscapeforum.com/viewtopic.php?f=32&t=14016&hilit=+node+editor#p54843
Parameters are:
height data of a cross section's starting point
distance data to span
height of the curve
arches rotation ratio between the small and the large circle
radius of the toruses
arch length data on a tangent circle around the toruses on a side view
thus creating
height data of desired iso-line cuts
functions for the right Cassini ovals
booleaning the two toruses' Cassini ovals
derivative function of the Cassini oval's function,
which, described by another parameter, would produce points of the Cassini-ovals with right tangents.
Another extension for aligning the this way created Cassini-ovals on a path,
rephrasing the path into nodes with set tangent data, and finally connecting each defined point to create iso-lines on a top-view.
With a "shader modifier", all 3D specular light would be produced in gradient meshes that could be set, and actual shades drawn by paths.
That would make all this 3D calligraphy a productive inkscape thing.
But for now I'm searching for how to create and work with latex files, to create a good presentation on the mathematical background on the "ever be scripts".
Re: inkscape 2D - blender 3d
Did a similar parametric node-based editor panel schema for a possible 3D method in the blenderartists forum.
Starting a new topic on the programming.
2D draw logic [Archive] - OpenGL Discussion and Help Forums
I have a window and I want to draw a line for each side of the window in OGL.
I do this by setting the projection and modelview matrix to the identity matrix
then I call
glVertex3d(-1, 1,0);
for left side of the window for example.
The code works fine; the line appears on the left side of the window (line width is 3).
My problem is: by what principle does OGL convert these 3D vertices into 2D points on my window? How come the vertices are not clipped (any value greater than 1 for the coords of the line is clipped)?
I know that by setting the identity matrices, the vertices remain the same, but what happens next -> 3D to 2D, clipping?
Berichte der Arbeitsgruppe Technomathematik (AGTM Report)
A Numerical Method for Kinetic Semiconductor Equations in the Drift Diffusion limit (1997)
Axel Klar
An asymptotic-induced scheme for kinetic semiconductor equations with the diffusion scaling is developed. The scheme is based on the asymptotic analysis of the kinetic semiconductor equation. It
works uniformly for all ranges of mean free paths. The velocity discretization is done using quadrature points equivalent to a moment expansion method. Numerical results for different physical
situations are presented.
Industrial Mathematics - Ideas and Examples (1997)
Helmut Neunzert Abul Hasan Siddiqi
Particle Methods in Fluid Dynamics (1997)
Jens Struckmeier
The mathematical simulation of the liquid transport in a multilayered nonwoven (1997)
Raimondas Ciegis Aivars Zemitis
In this report we treat an optimization task, which should make the choice of nonwoven for making diapers faster. A mathematical model for the liquid transport in nonwoven is developed. The main
attention is focussed on the handling of fully and partially saturated zones, which leads to a parabolic-elliptic problem. Finite-difference schemes are proposed for numerical solving of the
differential problem. Parallel algorithms are considered and results of numerical experiments are given.
Spherical panel clustering and its numerical aspects (1997)
Willi Freeden Oliver Glockner Michael Schreiner
In modern approximation methods linear combinations in terms of (space localizing) radial basis functions play an essential role. Areas of application are numerical integration formulas on the
uni sphere omega corresponding to prescribed nodes, spherical spline interpolation, and spherical wavelet approximation. the evaluation of such a linear combination is a time consuming task,
since a certain number of summations, multiplications and the calculation of scalar products are required. This paper presents a generalization of the panel clustering method in a spherical
setup. The economy and efficiency of panel clustering is demonstrated for three fields of interest, namely upward continuation of the earth's gravitational potential, geoid computation by
spherical splines and wavelet reconstruction of the gravitational potential.
A Wavelet-Based Test for Stationarity (1997)
Rainer von Sachs Michael H. Neumann
We develop a test for stationarity of a time series against the alternative of a time-changing covariance structure. Using localized versions of the periodogram, we obtain empirical versions of a
reasonable notion of a time-varying spectral density. Coefficients w.r.t. a Haar wavelet series expansion of such a time-varying periodogram are a possible indicator whether there is some
deviation from covariance stationarity. We propose a test based on the limit distribution of these empirical coefficients.
Self-organization property of Kohonen's map with general type of stimuli distribution (1997)
Ali A. Sadeghi
Here the self-organization property of one-dimensional Kohonen's algorithm in its 2k-neighbour setting with a general type of stimuli distribution and non-increasing learning rate is considered.
We prove that the probability of self-organization for all initial values of neurons is uniformly positive. For the special case of a constant learning rate, it implies that the algorithm
self-organizes with probability one.
Grid-Free Particle Method for the Inhomogeneous Enskog Equation and its Application to a Riemann-Problem (1997)
Lars Popken
Starting from the mollified version of the Enskog equation for a hard-sphere fluid, a grid-free algorithm to obtain the solution is proposed. The algorithm is based on the finite pointset method.
For illustration, it is applied to a Riemann problem. The shock-wave solution is compared to the results of Frezzotti and Sgarra where a good agreement is found.
Nonparametric curve estimation by wavelet thresholding with locally stationary errors (1997)
Rainer von Sachs Brenda MacGibbon
In the modeling of biological phenomena in living organisms, whether the measurements are of blood pressure, enzyme levels, biomechanical movements or heartbeats, etc., one of the important
aspects is time variation in the data. Thus, the recovery of a "smooth" regression or trend function from noisy time-varying sampled data becomes a problem of particular interest. Here we use
non-linear wavelet thresholding to estimate a regression or a trend function in the presence of additive noise which, in contrast to most existing models, does not need to be stationary. (Here,
nonstationarity means that the spectral behaviour of the noise is allowed to change slowly over time.). We develop a procedure to adapt existing threshold rules to such situations, e.g., that of
a time-varying variance in the errors. Moreover, in the model of curve estimation for functions belonging to a Besov class with locally stationary errors, we derive a near-optimal rate for the
L2-risk between the unknown function and our soft or hard threshold estimator, which holds in the general case of an error distribution with bounded cumulants. In the case of Gaussian errors, a
lower bound on the asymptotic minimax rate in the wavelet coefficient domain is also obtained. Also it is argued that a stronger adaptivity result is possible by the use of a particular location
and level dependent threshold obtained by minimizing Stein's unbiased estimate of the risk. In this respect, our work generalizes previous results, which cover the situation of correlated, but
stationary errors. A natural application of our approach is the estimation of the trend function of nonstationary time series under the model of local stationarity. The method is illustrated on
both an interesting simulated example and a biostatistical data-set, measurements of sheep luteinizing hormone, which exhibits a clear nonstationarity in its variance.
Hillsborough, CA
San Mateo, CA 94403
Enthusiastic Tutor of Mathematics and Physics
...I like to first establish the knowledge base of my student. We then work together to build on it, with lots of practice, to expand knowledge and mastery. Pre-
, geometry, analytic (e.g., Cartesian) geometry, calculus, simple differential equations,...
Offering 10+ subjects including algebra 1 and algebra 2
|
{"url":"http://www.wyzant.com/geo_Hillsborough_CA_algebra_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-17T05:18:19Z","content_type":null,"content_length":"60444","record_id":"<urn:uuid:528fe14c-4521-4812-b857-119a4f7bd765>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Four Colours Suffice
, 2009
"... The four colour theorem states that the vertices of every planar graph can be coloured with at most four colours so that no two adjacent vertices receive the same colour. This theorem is famous
for many reasons, including the fact that its original 1977 proof includes a non-trivial computer verifica ..."
Cited by 2 (0 self)
Add to MetaCart
The four colour theorem states that the vertices of every planar graph can be coloured with at most four colours so that no two adjacent vertices receive the same colour. This theorem is famous for
many reasons, including the fact that its original 1977 proof includes a non-trivial computer verification. Recently, a formal proof of the theorem was obtained with the equational logic program Coq.
In this paper we use the computational method for evaluating (in a uniform way) the complexity of mathematical problems presented in [8, 6] to evaluate the complexity of the four colour theorem. Our
method uses a Diophantine equational representation of the theorem. We show that the four colour theorem has roughly the same complexity as the Riemann hypothesis and almost four times the complexity
of Fermat’s last theorem. 1
, 2009
"... Interaction between sensornet nodes and the physical environment in which they are embedded implies real-time requirements. Application tasks are divided into smaller subtasks and distributed
among the constituent nodes. These subtasks must be executed in the correct place, and in the correct order, ..."
Cited by 1 (1 self)
Add to MetaCart
Interaction between sensornet nodes and the physical environment in which they are embedded implies real-time requirements. Application tasks are divided into smaller subtasks and distributed among
the constituent nodes. These subtasks must be executed in the correct place, and in the correct order, for correct application behaviour. Sensornets generally have no global clock, and incur
unacceptable cost if traditional synchronisation protocols are implemented. We present a lightweight primitive which generates a periodic sequence of synchronisation events which are coordinated
across large sensornets structured into clusters or cells. Two biologically-inspired mechanisms are combined; desynchronisation within cells, and synchronisation between cells. This hierarchical
coordination provides a global basis for local application-driven timing decisions at each node. 1
, 2009
"... In this paper we provide a computational method for evaluating in a uniform way the complexity of a large class of mathematical problems. The method, which is inspired by NKS1, is based on the
possibility to completely describe complex mathematical problems, like the Riemann hypothesis, in terms of ..."
Cited by 1 (0 self)
Add to MetaCart
In this paper we provide a computational method for evaluating in a uniform way the complexity of a large class of mathematical problems. The method, which is inspired by NKS1, is based on the
possibility to completely describe complex mathematical problems, like the Riemann hypothesis, in terms of (very) simple programs. The method is illustrated on a variety of examples coming from
different areas of mathematics and its power and limits are studied.
"... I, the undersigned, hereby declare that the work contained in this dissertation is my own original work and that I have not previously in its entirety or in part submitted it at any university
for a degree. ..."
Add to MetaCart
I, the undersigned, hereby declare that the work contained in this dissertation is my own original work and that I have not previously in its entirety or in part submitted it at any university for a
, 2008
"... From a philosophical viewpoint, mathematics has often and traditionally been distinguished from the natural sciences by its formal nature and emphasis on deductive reasoning. Experiments — one
of the corner stones of most modern natural science — have had no role to play in mathematics. However, dur ..."
Add to MetaCart
From a philosophical viewpoint, mathematics has often and traditionally been distinguished from the natural sciences by its formal nature and emphasis on deductive reasoning. Experiments — one of the
corner stones of most modern natural science — have had no role to play in mathematics. However, during the last three decades, high speed computers and sophisticated software packages such as Maple
and Mathematica have entered into the domain of pure mathematics, bringing with them a new experimental flavor. They have opened up a new approach in which computer-based tools are used to experiment
with the mathematical objects in a dialogue with more traditional methods of formal rigorous proof. At present, a subdiscipline of experimental mathematics is forming with its own research problems,
methodology, conferences, and journals. In this paper, I first outline the role of the computer in the mathematical experiment and briefly describe the impact of high speed computing on mathematical
research within the emerging sub-discipline of experimental mathematics. I then consider in more detail the epistemological claims put forward within experimental mathematics and comment on some of
the discussions that experimental mathematics has provoked within the mathematical community in recent years. In the second part of the paper, I suggest the notion of exploratory experimentation as a
possible framework for understanding experimental mathematics. This is illustrated by discussing the so-called PSLQ algorithm.
, 2008
"... In this paper we provide a computational method for evaluating in a uniform way the complexity of a large class of mathematical problems. The method is illustrated on a variety of examples
coming from different areas of mathematics and its power and limits are studied. 1 ..."
Add to MetaCart
In this paper we provide a computational method for evaluating in a uniform way the complexity of a large class of mathematical problems. The method is illustrated on a variety of examples coming
from different areas of mathematics and its power and limits are studied. 1
Mathematics 100 (Differential Calculus)
• The final exam has been scheduled on December 18, 8:30 am
• Answers to odd-numbered questions from Course Notes are posted on the Recommended Homework web page.
• Midterm 1 solutions: Section 103, Section 105.
• Midterm 2 solutions: Section 103, Section 105.
Mathematics 100 (Differential Calculus), Fall 2007
Section 103: MWF 12:00-12:50, MCLD 228.
Section 105: MWF 2:00-2:50, Buch A202.
Instructor: Professor Izabella Laba
Office: Math Bldg 239.
Phone: 822 2450.
E-mail: ilaba@math.ubc.ca.
Office hours: Mon 3-4, Fri 11-12, and by appointment. I will also be available for a few minutes after each class.
Essential information: General information for all sections of Math 100 is posted on the Math 100 common page. Please familiarize yourself with it. The additional information and policies posted here
are specific to Sections 103 and 105.
Your course mark will be based on the homeworks (10%), two midterms (20% each), and the final exam (50%). Grades will be scaled as explained on the common web page. Both midterms and the final exam
will be strictly closed-book: no formula sheets, calculators, or other aids will be allowed.
• The midterms will be held in class on Wednesdays, October 3 and November 7.
• The final examination will be scheduled by the Registrar later in the term. Attendance at the final examination is required, so be careful about making other commitments (such as travel).
• 7 homework assignments will be collected and graded. They will be due on Wednesdays, September 19, 26, October 17, 24, 31, November 14, 21. Each assignment will be announced at least a week in
advance and posted on the homework web page. The homework part of your course mark will be based on your best 5 scores.
Academic concession. Missing a midterm, or handing in a homework after the due date, will normally result in a mark of 0. Exceptions may be granted in two cases: prior consent of the instructor, or a
medical emergency. Supporting documentation (e.g. a doctor's note) will be required. If your request for academic concession is approved, your course mark will be based on your remaining coursework
(in the case of a missed midterm, this usually means 10% homework, 30% the other midterm, 60% final exam).
Hohmann Transfers
Day 1
In order to understand the Hohmann Transfer, students must first know a little about orbits. The Hohmann Transfer consists of two circular orbits and one elliptical orbit.
For circular orbits students should know:
Discuss the radius of the orbit with students. Is the radius constant or does it change and why?
Students need to understand the difference between speed and velocity.
Speed is a scalar quantity which refers to "how fast an object is moving." A fast-moving object has a high speed, while a slow-moving object has a low speed. An object with no movement at all has zero speed.
Velocity is a vector quantity which refers to "the rate at which an object changes its position." Imagine a person moving rapidly - one step forward and one step back - always returning to the
original starting position. While this might result in a frenzy of activity, it would result in a zero velocity.
* Speed is a scalar and does not keep track of direction; velocity is a vector and is direction-aware.
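To make the distinction concrete, here is a minimal Python sketch for the forward-and-back walker described above; the step length and timing are made-up illustrative values:

import math

# Hypothetical numbers: one step forward (+1 m) and one step back (-1 m),
# each step taking 1 second.
displacements = [1.0, -1.0]   # metres, signed (direction-aware)
total_time = 2.0              # seconds

distance = sum(abs(d) for d in displacements)   # path length: 2 m
net_displacement = sum(displacements)           # ends where it started: 0 m

average_speed = distance / total_time             # 1.0 m/s
average_velocity = net_displacement / total_time  # 0.0 m/s

print(average_speed, average_velocity)            # 1.0 0.0

The walker covers a nonzero distance, so the average speed is positive, but the net displacement is zero, so the average velocity is zero.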
There are two forces to consider in a circular orbit: F1, the centripetal force needed to hold the orbiting object on its circular path, and F2, the gravitational force between the two objects (here, between the orbiting object and the Earth).
We define F1 and F2 as

F_1 = \frac{mv^2}{r} \qquad \text{and} \qquad F_2 = \frac{GMm}{r^2},

where m is the mass of the orbiting object, M is the mass of the body being orbited, G is the universal gravitational constant, and r is the radius of the orbit.
Discuss with students the speed of an object in a circular orbit (constant, because the radius is constant) and the force on the object (also constant). Since the radius never changes, F2 has a constant value and is always equal to F1.
Now that students know this information, they can calculate the velocity of an object in a circular orbit.
Let F1 = F2 and solve for v. We get

\frac{mv^2}{r} = \frac{GMm}{r^2} \quad \Longrightarrow \quad v = \sqrt{\frac{GM}{r}}.
Also, discuss the inverse relationship between the radius of the circular orbit and the velocity of the object. Ask them what happens to the velocity as the radius increases and when it decreases.
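As a quick check on the formula and on the inverse relationship just discussed, here is a short Python sketch. The constants are standard published values; the sample altitude is an arbitrary illustrative choice:

import math

G = 6.674e-11        # universal gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg

def circular_velocity(r):
    """Speed of an object in a circular orbit of radius r (metres)."""
    return math.sqrt(G * M_EARTH / r)

# Example: a low Earth orbit roughly 300 km above the surface
# (Earth's mean radius is about 6.371e6 m).
r_low = 6.371e6 + 300e3
print(circular_velocity(r_low))        # about 7700 m/s

# Quadrupling the radius halves the speed, since v is proportional
# to 1/sqrt(r) -- the inverse relationship discussed above.
print(circular_velocity(4 * r_low))    # about 3850 m/s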
Day 2
For elliptical orbits students should know:
Again, discuss with the students the speed of an object in an elliptical orbit. The speed is not constant, because the radius changes. When the radius is greatest, at the apogee, the velocity is at its minimum. When the radius is smallest, at the perigee, the velocity is at its maximum.
All that is left is to find the velocity of an object in an elliptical orbit. This is where it gets a little tricky.
*** Teachers, from this point on use your discretion about how much information you want to give your students. How much you lead them will depend on the level of your students.
In order to find the velocity, students need a basic understanding of kinetic energy and potential energy. Potential energy is stored energy, while kinetic energy is the energy an object possesses because of its motion. For an object of mass m orbiting a body of mass M at distance r, these are

KE = \frac{1}{2}mv^2 \qquad \text{and} \qquad PE = \frac{GMm}{r},

and subtracting the potential energy from the kinetic energy gives the total amount of energy the object possesses, which for an elliptical orbit with semi-major axis a is

E = KE - PE = -\frac{GMm}{2a}.

We know the formulas for the potential energy, the kinetic energy, and the total energy, so we can substitute them into this equation and solve for velocity:

\frac{1}{2}mv^2 - \frac{GMm}{r} = -\frac{GMm}{2a} \quad \Longrightarrow \quad v = \sqrt{GM\left(\frac{2}{r} - \frac{1}{a}\right)}.

To simplify things, substitute \mu = GM, so that

v = \sqrt{\mu\left(\frac{2}{r} - \frac{1}{a}\right)}.
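The following Python sketch implements this result (the vis-viva equation); mu is taken as the standard published value for the Earth, and the sample radius is an arbitrary choice:

import math

MU_EARTH = 3.986e14   # mu = G*M for the Earth, m^3/s^2

def elliptical_velocity(r, a, mu=MU_EARTH):
    """Speed at distance r from the focus on an elliptical orbit
    with semi-major axis a (both in metres)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Sanity check: when r == a the ellipse is a circle, and the formula
# reduces to sqrt(mu / r), matching the Day 1 result.
r = 7.0e6
print(elliptical_velocity(r, r))   # about 7546 m/s
print(math.sqrt(MU_EARTH / r))     # same value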
Day 3
Now we know how to find the velocity of an object in a circular orbit and an elliptical orbit. Ask students what else they think they need to know to successfully make a Hohmann Transfer. Talk a
little more about what it is and how it is done to lead them to the conclusion that they also need to know the velocity at the apogee and perigee in order to transfer the spacecraft from one orbit to
the next.
In order to find the velocity at the apogee and perigee, it is important that students understand the eccentricity of ellipses and how to label a and c on an ellipse: a is the semi-major axis, c is the distance from the center of the ellipse to a focus, and the eccentricity is e = c/a.
We already know that the velocity of an object in an elliptical orbit is v = \sqrt{\mu(2/r - 1/a)}. In order to find the velocity at the apogee (A) and perigee (P), we need to put the formula in terms of a and e; this is where eccentricity comes into play. At the apogee the distance from the focus is r_A = a(1 + e), and at the perigee it is r_P = a(1 - e). Substituting these into the formula gives

V_A = \sqrt{\frac{\mu}{a} \cdot \frac{1 - e}{1 + e}} \qquad \text{and} \qquad V_P = \sqrt{\frac{\mu}{a} \cdot \frac{1 + e}{1 - e}}.
Talk about whether the velocity is greater at the apogee or the perigee. Students should come to the conclusion that V_P > V_A because the radius is shortest at the perigee, which means the velocity is at its maximum there.
Now that students can find the velocity of an object in a circular and elliptical orbit, and the velocity of an object at the apogee and perigee of an elliptical orbit, they can begin to explain how
to move a spacecraft from one circular orbit to another.
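Here is a short Python sketch of the two apsis formulas; the semi-major axis and eccentricity below are hypothetical sample values:

import math

MU_EARTH = 3.986e14   # mu = G*M for the Earth, m^3/s^2

def apsis_velocities(a, e, mu=MU_EARTH):
    """Speeds at the perigee and apogee of an elliptical orbit with
    semi-major axis a (metres) and eccentricity e."""
    v_p = math.sqrt(mu / a * (1 + e) / (1 - e))   # perigee: r = a(1 - e)
    v_a = math.sqrt(mu / a * (1 - e) / (1 + e))   # apogee:  r = a(1 + e)
    return v_p, v_a

v_p, v_a = apsis_velocities(a=1.0e7, e=0.3)
print(v_p, v_a)    # the perigee speed is the larger of the two
print(v_p > v_a)   # True, as concluded above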
Day 4
The following worksheet, adapted from Andrew Izsak's EMAT 6550 course on conic sections at UGA, will lead them in performing a Hohmann Transfer.
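To tie the three days together, here is a Python sketch of the full calculation the worksheet builds toward. It computes the two speed changes (burns) needed to move between two circular orbits; the orbit radii are hypothetical sample values, not taken from the worksheet:

import math

MU_EARTH = 3.986e14   # mu = G*M for the Earth, m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Speed changes for a Hohmann transfer from a circular orbit of
    radius r1 to a circular orbit of radius r2 (metres, r2 > r1)."""
    a = (r1 + r2) / 2.0                          # semi-major axis of the transfer ellipse
    v1 = math.sqrt(mu / r1)                      # speed on the inner circular orbit
    v2 = math.sqrt(mu / r2)                      # speed on the outer circular orbit
    v_p = math.sqrt(mu * (2.0 / r1 - 1.0 / a))   # ellipse speed at its perigee (r1)
    v_a = math.sqrt(mu * (2.0 / r2 - 1.0 / a))   # ellipse speed at its apogee (r2)
    dv1 = v_p - v1                               # first burn: leave the inner circle
    dv2 = v2 - v_a                               # second burn: circularize at r2
    return dv1, dv2

# Hypothetical example: from a low orbit (radius 6.7e6 m) out to
# roughly geostationary radius (4.2e7 m).
dv1, dv2 = hohmann_delta_v(6.7e6, 4.2e7)
print(dv1, dv2, dv1 + dv2)

The first burn raises the apogee of the orbit out to the target radius; the second, fired at that apogee, raises the perigee to match, turning the ellipse back into a circle.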