content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
[13.01] Evolution of the GJ876 Planets into the 2:1 Orbital Resonance
DDA2001, April2001
Session 13. Extra Solar Planets
Wednesday, 2:30-4:00pm
[Previous] | [Session 13] | [Next]
[13.01] Evolution of the GJ876 Planets into the 2:1 Orbital Resonance
M.H. Lee, S.J. Peale (UCSB)
The evolution of originally more widely separated orbits into the currently observed 2:1 orbital resonance between the two ~ Jupiter mass planets about GJ876 (Marcy et al. 2001) is essentially
independent of the means of orbital convergence. The best fit dynamically determined coplanar orbits (Laughlin and Chambers 2001), using both Keck and Lick data and corresponding to \sin{i}\approx
0.77, yield a system with \lambda[1]-2\lambda[2]+\varpi[1], \lambda[1]-2\lambda[2]+\varpi[2] and \varpi[1]-\varpi[2] all librating about 0^\circ with remarkably small amplitudes, where \lambda[1,2]
are the mean longitudes of the inner and outer planets respectively and \varpi[1,2] are the longitudes of periapse. The eccentricities of the planets are forced by the resonance and are constrained
to the particular observed ratio by the requirement that the retrograde secular periapse motions be identical. If the outer planet migrates inward relative to the inner planet from dissipative
interactions with the nebula, the system is automatically captured into all the resonant librations for sufficiently small initial eccentricities and evolves to an equilibrium configuration with
constant eccentricities as the orbits continue to shrink with constant semimajor axis ratio a[1]/a[2]. The equilibrium eccentricities so obtained are independent of the rate of evolution and are
remarkably close to the best fit values, although the amplitudes of libration of the resonance variables are somewhat smaller than those in the best fit solution. If the system is evolved by allowing
either da[1]/dt>0 or da[2]/dt<0 with the cause unspecified, all three resonance variables are again automatically trapped into libration about 0^\circ, but now the eccentricities can grow to very
large values while the system remains stably librating. The amplitudes of the librations at any time depend on the initial values of the eccentricities, and for particular initial values, the
amplitudes match the best fit amplitudes as the current values of the eccentricities are passed. The robustness of the evolution into the resonance whatever means is chosen and the damped nature and
extreme stability of the best fit solution means that the system is almost certainly correctly represented by the Laughlin and Chambers best fit solution.
[Previous] | [Session 13] | [Next]
|
{"url":"http://aas.org/archives/BAAS/v33n3/dda2001/55.htm","timestamp":"2014-04-18T23:39:25Z","content_type":null,"content_length":"3750","record_id":"<urn:uuid:4e73bc4d-1c15-4db8-9646-8b0dd71ea566>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maksim Zhukovskii - On the construction of a web graph and PageRank actualization
In the talk I'll tell about some methods of constructing web graph and its modelling. We compare web graphs constructed in different ways by comparing effectiveness of PageRank of pages from such
graphs. Especially we focus on different methods of accounting redirects in the web graph. We measure the effectiveness by evaluating the NDCG@ metrics of linear combination of PageRank and BM-25 and
an assessors' rating. In the end of the talk I'll tell about the new ways of calculating PageRank sensible to pages' freshness. We obtain the new algorithm ``Actual PageRank'' and prove that it is
better than both classic PageRank and one of the most advanced recency-sensitive link analysis algorithms T-Fresh.
|
{"url":"http://www.cc.gatech.edu/~dovrolis/wite12/abstracts/maksim.html","timestamp":"2014-04-19T07:27:22Z","content_type":null,"content_length":"1312","record_id":"<urn:uuid:764c35ca-8a24-4e42-948f-242a79f54819>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Belvedere, GA Math Tutor
Find a Belvedere, GA Math Tutor
...In this position, I work as a part of the Student Support Team, teaching phonics, math, and reading to struggling students. As a 1st grade interventionist, the majority of my day is used to
help Kindergarten and 1st grade students who are struggling with phonics. It is such a difficult subject to grasp for beginning readers and they need all the support they can get!
14 Subjects: including prealgebra, reading, geometry, algebra 1
...During college, I have helped several students pass their tests in Linear Algebra. I also used it during my Differential Equations course. I have a Bachelors degree in Mechanical Engineering.
12 Subjects: including probability, algebra 1, algebra 2, calculus
...I am very good at recognizing difficulties in learning math and have the ability to correct those difficulties. I have taught middle school math for 3 years and taught high school math for 6
years. I also taught in the college environment for over 10 years and I am currently teaching Math.
20 Subjects: including calculus, discrete math, GRE, GMAT
...I have also taken multiple seminars on religion during my 19 years as a teacher, both in Pennsylvania as well as in Georgia. My first teaching job was as an ESL teacher. I am fully bilingual
in English/Spanish.
21 Subjects: including prealgebra, algebra 1, English, reading
...I have experience in working with students from pre-k to 6th grade. Most recently I have been a lead teacher in a combined 4th/5th grade classroom at a small, progressive, private school. I am
most interested in working with 3rd to 6th graders in developing their learning strategies throughout the core curriculum.
10 Subjects: including prealgebra, reading, writing, grammar
Related Belvedere, GA Tutors
Belvedere, GA Accounting Tutors
Belvedere, GA ACT Tutors
Belvedere, GA Algebra Tutors
Belvedere, GA Algebra 2 Tutors
Belvedere, GA Calculus Tutors
Belvedere, GA Geometry Tutors
Belvedere, GA Math Tutors
Belvedere, GA Prealgebra Tutors
Belvedere, GA Precalculus Tutors
Belvedere, GA SAT Tutors
Belvedere, GA SAT Math Tutors
Belvedere, GA Science Tutors
Belvedere, GA Statistics Tutors
Belvedere, GA Trigonometry Tutors
Nearby Cities With Math Tutor
Avondale Estates Math Tutors
Briarcliff, GA Math Tutors
Decatur, GA Math Tutors
Dunaire, GA Math Tutors
Embry Hls, GA Math Tutors
North Atlanta, GA Math Tutors
North Decatur, GA Math Tutors
Overlook Sru, GA Math Tutors
Rockbridge, GA Math Tutors
Scottdale, GA Math Tutors
Snapfinger, GA Math Tutors
Stone Mountain Math Tutors
Tuxedo, GA Math Tutors
Vinnings, GA Math Tutors
Vista Grove, GA Math Tutors
|
{"url":"http://www.purplemath.com/Belvedere_GA_Math_tutors.php","timestamp":"2014-04-17T00:57:24Z","content_type":null,"content_length":"23897","record_id":"<urn:uuid:724d94ea-a5d6-493c-9a2b-056d7dc2d94a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Random Number Generation
Scheme Cookbook
has a few recipes on random number generation. Why not turn them into a library?
Once upon a time I got bitten by the random number generation bug and did just that. I wrote large parts of the random.plt library available through
"Huh - where exactly?", I hear you say. Well, the library were submitted when PLaneT were in its infancy, so it appears under the 2xx-page. After two major updates since the 200-series no one checks
the old libraries (not even the
PLT Source Browser
), so not surprisingly the random number library is forgotten.
After "porting" (well, I *did* change a single line) it is now available again.
contains the gory details, so I'll just give a few examples:
> (require (planet "random.ss" ("schematics" "random.plt" 1 0)))
; Good old Gaussian distribution
> (random-gaussian)
; A "stochastic variable"
> (define X (random-source-make-gaussians default-random-source))
> (X)
> (X)
; Other stuff
> (random-gamma 1)
> (random-permutation 5)
#5(3 1 2 4 0)
> (random-chi-square 1)
The source contains an implementation by Sebastian Egner of a very interesting algorithm. It is the generation of random number following a discrete distribution, which surprisingly can be
implemented efficiently.
(random-source-make-discretes s w)
given a source s of random bits in the sense of SRFI-27
and a vector w of n >= 1 non-negative real numbers,
this procedure constructs and returns a procedure rand
such that (rand) returns the next integer x in {0..n-1}
from a pseudo random sequence with
Pr{x = k} = w[k] / Sum(w[i] : i)
for all k in {0..n-1}.
Think about it - how would you implement it?
|
{"url":"http://blog.scheme.dk/2006/12/random-number-generation.html","timestamp":"2014-04-20T15:51:10Z","content_type":null,"content_length":"24343","record_id":"<urn:uuid:598dc88d-2ba6-4d8b-a938-bdd75a04c94e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to understand the diffeomorphism in the Cheeger-Gromov compactness
up vote 2 down vote favorite
Recall that $(M_{k}, g_k,O_k)$ of complete pointed Riemannian manifolds converges smoothly in the sense of Cheeger-Gromov to $(M_{\infty}, g_{\infty}, O_{\infty})$ if there exists an exhaustion of
open sets $U_k$ of $M_{\infty}$ containing $ O_{\infty}$, and a sequence of diffeomorphism $\phi_k$ from $U_k$ to $V_k=\phi_k(U_k) \subset M_k$ with $\phi_k (O_{\infty})=O_k$, such that the pull back
$(U_k, \phi_k^{\ast} (g_k))$ converges to $(M_{\infty}, g_{\infty})$ uniformly on compact sets in $M_{\infty}$.
These days I am interested in how wild the diffeomorphism $\phi_k$ can be. Consider a sequence of rotationaly invariant metrics on $\mathbb{R}^n$ with uniformly bounded geometry and injective radius
at the origin has a uniform lower bound (which can be ensured by for example, choosing a uniform bounded positive cone angle at infinity for the sequence metrics), fix the sequence of points to be
the origin, by the compactness theorem we expect a limit metric which is still rotationaly invariant. However, I am not sure if the limit is necessarily rotationaly invariant. If so does it mean that
in this special case, we can choose the sequence of diffeomorphism to be a sequence of rotations around the origin?
I understand that a way to think about this symmetric case is to go over the general construction on diffeomorphism and the limit metric in the proof of Compactness Theorem. It is a little involved
and I am still on the way to understand it. Any suggestions or help will be appreciated.
Added after posted:
To make the point of "understand the diffeomorphism $\phi_k$" clear, let me continue on the rotationally symmetric example, Let us put one more restrition on the sequence $(M_k, g_k, O_k)=(R^n,
g_k,O)$ that the scalar curvature $R(O)$ attain the maximum for each $g_k$, then after the taking limit we can get a metric $(M_\infty, g_\infty,O_\infty)$ which also have $R(O_\infty)$ as a maximum
on $M_\infty$, But if we take $O_k$ to be a sequence points $P_k$ suitably close to the origin $O$, the limit will be a new metric $(N_\infty, h_\infty,P_\infty)$. It is reasonably to believe the new
metric could be the same as $(M_\infty, g_\infty,O_\infty)$. However $R(P_\infty)$ might not be the maximum of scalar curvature on $N_\infty$ in general, So it is natural to find a maximum of scalar
curvature on $N_\infty$ first, name it $Q_\infty \in N_\infty$, then check how far the pull back sequence $\Phi_k(Q_\infty) \in M_k$ is from $O$. It sounds to me that it is important to understand
the behavior of $\Phi_k$. However I feel that it is a little wild even for this example.
mg.metric-geometry dg.differential-geometry
Not sure what you mean by "wild". As for rotational symmetry, $g_k$ has a subsequence that converges to a rotational symmetric metric because of precompactness in the equivariant GH-topology
(described in papers of Fukaya). By uniqueness of the limit $g_\infty$ is rotationally symmetric. However, I do not see how it would help you learn about the diffeomorphisms $\phi_k$ because as you
say they are defined not on the whole manifold $M_\infty$ but on its compact domains. – Igor Belegradek Dec 28 '12 at 14:27
The diffeomorphisms $\phi_k$ defined on compact domains won't be equivariant though, they will be almost equivariant, which is why it is unclear how this restricts $\phi_k$. – Igor Belegradek Dec
28 '12 at 14:50
Thanks Igor for suggesting equivariant version of convergence. Sorry for the vague question. Just add one more example to clarify. – Bo_Y Dec 28 '12 at 22:05
I do not understand the added paragraph. I think it makes no sense in several places. For example the maximum of scalar curvature need not be preserved under GH convergence. Also changing the base
point can change the limit metric, so your "reasonably to believe" seems unfounded. As far as I can see your question is not precisely stated, and hence is not appropriate for MO. You need to think
what you wish to ask, and then repost. – Igor Belegradek Dec 29 '12 at 3:10
add comment
1 Answer
active oldest votes
I also don't understand the question at all. The diffeomorphisms are constructed to "normalize" the Riemannian metrics, so that the metrics differ as little as possible. So by construction
they are as far fron being wild as possible. Cheeger-Gromov compactness is the statement that such "tame" diffeomorphisms exist and, since they behave so nicely, subsequences of both the
diffeomorphisms and metrics converge on any compact domain.
up vote 1 If you know something more about the metrics, then you can often use the additional information to construct particularly nice diffeomorphisms.
down vote
If all of the metrics are rotationally symmetric on $\mathbb{R}^n$, then there is no need to construct diffeomorphisms at all. Cheeger-Gromov compactness, for example assuming bounded
sectional curvature, in this situation is easily verified using the metrics written with respect to polar co-ordinates and the Jacobi equation.
I'm not sure about you last statement. If you take a metric $g$ on $\mathbb{R}^2$ which looks like paraboloid, and define $g_i$ to be the pullback of $g$ by the diffeomorhism $x\mapsto \
frac{x}{i}$, the diffeomorhisms are needed I think ? – Thomas Richard Dec 29 '12 at 4:47
Ok, now I understand : in the case you mentionned the diffeomorhisms are just give by the exponential maps at the origin and isometric identification of the tangent spaces at the origin.
– Thomas Richard Dec 29 '12 at 9:29
add comment
Not the answer you're looking for? Browse other questions tagged mg.metric-geometry dg.differential-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/117391/how-to-understand-the-diffeomorphism-in-the-cheeger-gromov-compactness","timestamp":"2014-04-18T03:02:51Z","content_type":null,"content_length":"61420","record_id":"<urn:uuid:8c0adf22-71b6-4bdd-98e6-9a68d8b9caed>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AD (2005), pp. 357-364
2005 International Conference on Analysis of Algorithms
Conrado Martínez (ed.)
DMTCS Conference Volume AD (2005), pp. 357-364
author: László Györfi and Sándor Győri
title: Analysis of tree algorithm for collision resolution
keywords: random access communication, collision resolution time, tree algorithm
abstract: For the tree algorithm introduced by [Cap79] and [TsMi78] let
denote the expected collision resolution time given the collision multiplicity
. If
stands for the Poisson transform of
, then we show that
- L(N) ≃ 1.29·10
(2 π
N + 0.698).
If your browser does not display the abstract correctly (because of the different mathematical symbols) you may look it up in the PostScript or PDF files.
reference: László Györfi and Sándor Győri (2005), Analysis of tree algorithm for collision resolution , in 2005 International Conference on Analysis of Algorithms, Conrado Martínez (ed.), Discrete
Mathematics and Theoretical Computer Science Proceedings AD, pp. 357-364
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dmAD0133.ps.gz (109 K)
ps-source: dmAD0133.ps (277 K)
pdf-source: dmAD0133.pdf (122 K)
The first source gives you the `gzipped' PostScript, the second the plain PostScript and the third the format for the Adobe accrobat reader. Depending on the installation of your web browser, at
least one of these should (after some amount of time) pop up a window for you that shows the full article. If this is not the case, you should contact your system administrator to install your
browser correctly.
Due to limitations of your local software, the two formats may show up differently on your screen. If eg you use xpdf to visualize pdf, some of the graphics in the file may not come across. On the
other hand, pdf has a capacity of giving links to sections, bibliography and external references that will not appear with PostScript.
Automatically produced on Di Sep 27 10:09:37 CEST 2005 by gustedt
|
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAD0133/1557","timestamp":"2014-04-19T02:40:28Z","content_type":null,"content_length":"14683","record_id":"<urn:uuid:288107f0-bda6-4a34-869f-5d1d0a155fe0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
D. J. Bernstein
Authenticators and signatures
How to install Zmodexp
The Zmodexp library interface
What is it?
Zmodexp is a library for fast integer modular exponentiation.
Zmodexp 0.51, compiled with egcs 2.90.29 -O1 -fofp -malign-double -mpentiumpro -fschedule-insns -fschedule-insns2, can compute any 512-bit power modulo any 512-bit integer in 1627698 Pentium-II
cycles. (In other words, 4.66 milliseconds on a Pentium II-350. This is faster than Rainbow's $2000 CryptoSwift hardware.) I'm not aware of any other library better than 3000000 cycles.
Most libraries are much slower on the original Pentium than on the Pentium II. Zmodexp is not. Zmodexp 0.51 can compute any 512-bit power modulo any 512-bit integer in 1819000 Pentium cycles. Zmodexp
will provide excellent performance on any modern CPU.
I expect Zmodexp to change the way people implement some common cryptographic tools, notably public-key signatures. However, Zmodexp 0.51 is not ready for integration into other programs: it relies
on some seat-of-the-pants numerical analysis that has not yet been mathematically verified; it doesn't support any sizes other than 512 bits; it doesn't support non-x86 chips; and it isn't fully
optimized. If you're not interested in the details of how fast arithmetic works then you should probably wait for the next release.
|
{"url":"http://cr.yp.to/zmodexp.html","timestamp":"2014-04-18T11:03:57Z","content_type":null,"content_length":"1702","record_id":"<urn:uuid:a13111e4-5e1e-4c2b-97ed-b2130615545f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Induction Help
May 6th 2013, 04:12 PM #1
May 2013
Induction Help
I think I went wrong somewhere here.
Particularly in the last 2 lines.
Here is a pic of my attempt:
Re: Induction Help
I think I went wrong somewhere here.
Particularly in the last 2 lines.
Here is a pic of my attempt:
Your posting is totally useless because to is unreadable.
If you are serious then learn to post your questions in LaTeX code.
Re: Induction Help
First time posting, don't know what latex is. But I will google and find out and post again. Thanks. Though, this is the first time I have ever heard someone call my question useless
Re: Induction Help
Not a useless question, just hard to read.
We have a LaTeX system on site. Check out the LaTeX help forum.
Re: Induction Help
Verify that for n>=1 [LaTeX ERROR: Convert failed]
My attempt:
1. Proved for n
2. Assumed for k:
[LaTeX ERROR: Convert failed]
3. prove for k+1
[LaTeX ERROR: Convert failed]
[LaTeX ERROR: Convert failed]
[LaTeX ERROR: Convert failed]
[LaTeX ERROR: Convert failed]
factoring out 3 - 1/k
never mind i figured out my mistake, i didn't realize to leave a 1 when factoring something from itself.
Re: Induction Help
oh and I'm not sure why I got latex errors, i made sure to put it in:
Re: Induction Help
one advice, just below your message there is a "Go advanced/preview post" - inthat way ou would be able to check your post before posting it , it takes some time from the beginning but it's
really helpful,
May 6th 2013, 04:22 PM #2
May 6th 2013, 04:26 PM #3
May 2013
May 6th 2013, 04:40 PM #4
May 6th 2013, 04:46 PM #5
May 2013
May 6th 2013, 04:48 PM #6
May 2013
May 6th 2013, 05:09 PM #7
|
{"url":"http://mathhelpforum.com/discrete-math/218653-induction-help.html","timestamp":"2014-04-17T01:12:15Z","content_type":null,"content_length":"48899","record_id":"<urn:uuid:7019f62c-0c00-41ef-adf3-030b091bdf76>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Leandro Algebra 2 Tutor
Find a San Leandro Algebra 2 Tutor
...When working with a new student, I figure out, with the student's help, where he or she began to have difficulty in the course, and then work with the student intensively to build a deep
understanding of the material so he or she can become successful in the current math course and in future math...
15 Subjects: including algebra 2, calculus, geometry, algebra 1
...Before earning my secondary teaching credential, I worked as a math tutor for 3rd through 5th graders in a public elementary school. Due to this experience, I am able to explain math concepts
at a level that is developmentally appropriate. I enjoy engaging students this age with fun puzzles and activities that at the same time deepen their conceptual understanding.
10 Subjects: including algebra 2, calculus, geometry, algebra 1
...I will begin teaching high school math in Oakland in September. I have been tutoring math for the last seven years. I have worked freelance, for DVC in Pleasant Hill (at their math lab), and
for UC Santa Cruz (as a learning assistant). I have tutored pre-algebra, algebra, geometry, statistics, ...
15 Subjects: including algebra 2, reading, calculus, writing
...I enjoy working in a teaching mode with kids. I have been active with the Boy Scouts for 20 years, and counseled scouts in several merit badge categories, including Personal Fitness, Computing,
and Citizenship. I spent three semesters working with Junior Achievement in the Palo Alto and Mountai...
39 Subjects: including algebra 2, English, chemistry, writing
...Now that you have a better idea about my experience, you might have already guessed my major and my degree in college. I have two Bachelor's degrees in physics and astrophysics from Berkeley
University, which is the BEST PUBLIC university in the world according to many national and international...
10 Subjects: including algebra 2, physics, geometry, algebra 1
|
{"url":"http://www.purplemath.com/San_Leandro_algebra_2_tutors.php","timestamp":"2014-04-17T19:24:39Z","content_type":null,"content_length":"24195","record_id":"<urn:uuid:d275cb98-f0df-40bf-ace7-cab13ddd0192>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reversing the Pump-Dependence of a Laser at an Exceptional Point
Apr 8, ’14 5:31 AM
M. Brandstetter, M. Liertzer, C. Deutsch, P. Klang, J. Schöberl, H. E. Türeci, G. Strasser, K. Unterrainer, S. Rotter
When two resonant modes in a system with gain or loss coalesce in both their resonance position and their width, a so-called “Exceptional Point” occurs which acts as a source of non-trivial physics
in a diverse range of systems. Lasers provide a natural setting to study such “non-Hermitian degeneracies”, since they feature resonant modes and a gain material as their basic constituents. Here we
show that Exceptional Points can be conveniently induced in a photonic molecule laser by a suitable variation of the applied pump. Using a pair of coupled micro-disk quantum cascade lasers, we
demonstrate that in the vicinity of these Exceptional Points the laser shows a characteristic reversal of its pump-dependence, including a strongly decreasing intensity of the emitted laser light for
increasing pump power. This result establishes photonic molecule lasers as promising tools for exploring many further fascinating aspects of Exceptional Points, like a strong line-width enhancement
and the coherent perfect absorption of light in their vicinity as well as non-trivial mode-switching and the accumulation of a geometric phase when encircling an Exceptional Point parametrically.
Optics (physics.optics); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Chaotic Dynamics (nlin.CD)
Coherent perfect absorption with and without lasing in complex potentials
Apr 8, ’14 5:29 AM
Zafar Ahmed
We study the coherent scattering from complex potentials to find that the coherent perfect absorption (CPA) without lasing is not possible in the PT-symmetric domain as the s-matrix is such that \(|\
det S(k)|=1\). We confirm that in the domain of broken PT-symmetry\(|\det S(k)|\) can become indeterminate 0/0 at the spectral singularity (SS), k=k∗, of the potential signifying CPA with lasing at
threshold gain. We also find that in the domain of unbroken symmetry (when the potential has real discrete spectrum) neither SS nor CPA can occur. In this, regard, we find that exactly solvable Scarf
II potential is the unique model that can exhibit these novel phenomena and their subtleties analytically and explicitly. However, we show that the other numerically solved models also behave
Quantum Physics (quant-ph)
Spectral Singularities and CPA-Laser Action in a Weakly Nonlinear PT-Symmetric Bilayer Slab
Apr 8, ’14 5:24 AM
Ali Mostafazadeh
We study optical spectral singularities of a weakly nonlinear PT-symmetric bilinear planar slab of optically active material. In particular, we derive the lasing threshold condition and calculate the
laser output intensity. These reveal the following unexpected features of the system: 1. For the case that the real part of the refractive index η of the layers are equal to unity, the presence of
the lossy layer decreases the threshold gain; 2. For the more commonly encountered situations when η−1 is much larger than the magnitude of the imaginary part of the refractive index, the threshold
gain coefficient is a function of η that has a local minimum. The latter is in sharp contrast to the threshold gain coefficient of a homogeneous slab of gain material which is a decreasing function
of η. We use these results to comment on the effect of nonlinearity on the prospects of using this system as a CPA-laser.
Quantum Physics (quant-ph); Mathematical Physics (math-ph); Optics (physics.optics)
Exceptional points and lasing self-termination in photonic molecules
Apr 8, ’14 5:21 AM
Ramy El-Ganainy, Mercedeh Khajavikhan, Li Ge
We investigate the rich physics of photonic molecule lasers using a non-Hermitian dimer model. We show that several interesting features, predicted recently using a rigorous steady state ab-initio
laser theory (SALT), can be captured by this toy model. In particular, we demonstrate the central role played by exceptional points in both pump-selective lasing and laser self-terminations
phenomena. Due to its transparent mathematical structure, our model provides a lucid understanding for how different physical parameters (optical loss, modal coupling between microcavities and pump
profiles) affect the lasing action. Interestingly, our analysis also confirms that, for frequency mismatched cavities, operation in the proximity of exceptional points (without actually crossing the
square root singularities) can still lead to laser self-termination. We confirm this latter prediction for two coupled slab cavities using scattering matrix analysis and SALT technique. In addition,
we employ our model to investigate the pump-controlled lasing action and we show that emission patterns are governed by the locations of exceptional points in the gain parameter space. Finally we
extend these results to multi-cavity photonic molecules, where we found the existence of higher-order EPs and pump-induced localization.
Optics (physics.optics)
Mathematical and physical aspects of complex symmetric operators
Apr 8, ’14 5:17 AM
Stephan Ramon Garcia, Emil Prodan, Mihai Putinar
Recent advances in the theory of complex symmetric operators are presented and related to current studies in non-hermitian quantum mechanics. The main themes of the survey are: the structure of
complex symmetric operators, C-selfadjoint extensions of C-symmetric unbounded operators, resolvent estimates, reality of spectrum, bases of C-orthonormal vectors, and conjugate-linear symmetric
operators. The main results are complemented by a variety of natural examples arising in field theory, quantum physics, and complex variables.
Functional Analysis (math.FA); Other Condensed Matter (cond-mat.other); Mathematical Physics (math-ph); Operator Algebras (math.OA); Spectral Theory (math.SP)
Non-Hermitian PT-symmetric relativistic Quantum mechanics with a maximal mass in an external magnetic field
Apr 3, ’14 1:34 PM
Starting with the modified Dirac equations for free massive particles with the γ5-extension of the physical mass \(m\to m_1+\gamma_5m_2\), we consider equations of relativistic quantum mechanics in
the presence of an external electromagnetic field. The new approach is developing on the basis of existing methods for study the unbroken PT symmetry of Non-Hermitian Hamiltonians. The paper shows
that this modified model contains the definition of the mass parameter, which may use as the determination of the magnitude scaling of energy M. Obviously that the transition to the standard approach
is valid when small in comparison with M energies and momenta. Formally, this limit is performed when \(M\to\infty\), which simultaneously should correspond to the transition to a Hermitian limit \
(m2\to0\). Inequality \(m\leq M\) may be considered and as the restriction of the mass spectrum of fermions considered in the model. Within of this approach, the effects of possible observability
mass parameters: \(m_1, m_2, M\) are investigated taking into account the interaction of the magnetic field with charged fermions together with the accounting of their anomalous magnetic moments.
High Energy Physics – Theory (hep-th); High Energy Physics – Phenomenology (hep-ph); Mathematical Physics (math-ph); Quantum Physics (quant-ph)
Dipolar Bose-Einstein condensates in a PT-symmetric double-well potential
Mar 27, ’14 2:17 AM
Rüdiger Fortanier, Dennis Dast, Daniel Haag, Holger Cartarius, Jörg Main, Günter Wunner
We investigate dipolar Bose-Einstein condensates in a complex external double-well potential that features a combined parity and time-reversal symmetry. On the basis of the Gross-Pitaevskii equation
we study the effects of the long-ranged anisotropic dipole-dipole interaction on ground and excited states by the use of a time-dependent variational approach. We show that the property of a similar
non-dipolar condensate to possess real energy eigenvalues in certain parameter ranges is preserved despite the inclusion of this nonlinear interaction. Furthermore, we present states that break the
PT symmetry and investigate the stability of the distinct stationary solutions. In our dynamical simulations we reveal a complex stabilization mechanism for PT-symmetric, as well as for PT-broken
states which are, in principle, unstable with respect to small perturbations.
Quantum Physics (quant-ph); Quantum Gases (cond-mat.quant-gas); Chaotic Dynamics (nlin.CD)
Cumulants of time-integrated observables of closed quantum systems and PT-symmetry, with an application to the quantum Ising chain
Mar 20, ’14 4:59 PM
James M. Hickey, Emanuele Levi, Juan P. Garrahan
We study the connection between the cumulants of a time-integrated observable of a quantum system and the PT-symmetry properties of the non-Hermitian deformation of the Hamiltonian from which the
generating function of these cumulants is obtained. This non-Hermitian Hamiltonian can display regimes of broken and of unbroken PT-symmetry, depending on the parameters of the problem and on the
counting field that sets the strength of the non-Hermitian perturbation. This in turn determines the analytic structure of the long-time cumulant generating function (CGF) for the time-integrated
observable. We consider in particular the case of the time-integrated (longitudinal) magnetisation in the one-dimensional Ising model in a transverse field. We show that its long-time CGF is singular
on a curve in the magnetic field/counting field plane that delimits a regime where PT-symmetry is spontaneously broken (which includes the static ferromagnetic phase), from one where it is preserved
(which includes the static paramagnetic phase). In the paramagnetic phase, conservation of PT -symmetry implies that all cumulants are sub-linear in time, a behaviour usually associated to the
absence of decorrelation.
Statistical Mechanics (cond-mat.stat-mech); Quantum Physics (quant-ph)
Bulk Vortex and Horseshoe Surface Modes in Parity-Time Symmetric Media
Mar 20, ’14 7:20 AM
Huagang Li, Xing Zhu, Zhiwei Shi, Boris A. Malomed, Tianshu Lai, Chaohong Lee
We demonstrate that in-bulk vortex localized modes, and their surface half-vortex (“horseshoe”) counterparts (which were not reported before in truncated settings) self-trap in two-dimensional (2D)
nonlinear optical systems with PT-symmetric photonic lattices (PLs). The respective stability regions are identified in the underlying parameter space. The in-bulk states are related to truncated
nonlinear Bloch waves in gaps of the PL-induced spectrum. The basic vortex and horseshoe modes are built, severally, of four and three beams with appropriate phase shifts between them. Their stable
complex counterparts, built of up to 12 beams, are reported too.
Optics (physics.optics); Pattern Formation and Solitons (nlin.PS)
Discrete spectrum of thin PT-symmetric waveguide
Mar 19, ’14 5:39 PM
Denis Borisov
In a thin multidimensional layer we consider a second order differential PT-symmetric operator. The operator is of rather general form and its coefficients are arbitrary functions depending both on
slow and fast variables. The PT-symmetry of the operator is ensured by the boundary conditions of Robin type with pure imaginary coefficient. In the work we determine the limiting operator, prove the
uniform resolvent convergence of the perturbed operator to the limiting one, and derive the estimates for the rates of convergence. We establish the convergence of the spectrum of perturbed operator
to that of the limiting one. For the perturbed eigenvalues converging to the limiting discrete ones we prove that they are real and construct their complete asymptotic expansions. We also obtain the
complete asymptotic expansions for the associated eigenfunctions.
Spectral Theory (math.SP); Mathematical Physics (math-ph); Analysis of PDEs (math.AP)
|
{"url":"http://ptsymmetry.net/","timestamp":"2014-04-20T08:14:20Z","content_type":null,"content_length":"40864","record_id":"<urn:uuid:a1e217df-9c9b-4558-94ea-0a1f0a625bf7>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Removing masked arrays for 1.7? (Was 1.7 blockers)
[Numpy-discussion] Removing masked arrays for 1.7? (Was 1.7 blockers)
Nathaniel Smith njs@pobox....
Tue Apr 17 07:52:50 CDT 2012
On Tue, Apr 17, 2012 at 6:44 AM, Travis Oliphant <travis@continuum.io> wrote:
> Basically, there are two sets of changes as far as I understand right now:
> 1) ufunc infrastructure understands masked arrays
> 2) ndarray grew attributes to represent masked arrays
> I am proposing that we keep 1) but change 2) so that only certain kinds of NumPy arrays actually have the extra function pointers (effectively a sub-type). In essence, what I'm proposing is that the NumPy 1.6 PyArrayObject become a base-object, but the other members of the C-structure are not even present unless the Masked flag is set. Such changes would not require ripping code out --- just altering the presentation a bit. Yet, they could have large long-term implications, that we should explore before they get fixed.
> Whether masked arrays should be a formal sub-class is actually an un-related question and I generally lean in the direction of not encouraging sub-classes of the ndarray. The big questions are does this object work in the calculation infrastructure. Can I add an array to a masked array. Does it have a sum method? I think it could be argued that a masked array does have a "is a" relationship with an array. It can also be argued that it is better to have a "has a" relationship with an array and be-it's own-object. Either way, this object could still have it's first-part be binary compatible with a NumPy Array, and that is what I'm really suggesting.
It sounds like the main implementation issue here is that this masked
array class needs some way to coordinate with the ufunc infrastructure
to efficiently and reliably handle the mask in calculations. The core
ufunc code now knows how to handle masks, and this functionality is
needed for where= and NA-dtypes, so obviously it's staying,
independent of what we decide to do with masked arrays. So the
question is just, how do we get the masked array and the ufuncs
talking to each other so they can do the right thing. Perhaps we
should focus, then, on how to create a better hooking mechanism for
ufuncs? Something along these lines?
If done in a solid enough way, this would also solve other problems,
e.g. we could make ufuncs work reliably on sparse matrices, which
seems to trip people up on scipy-user every month or two. Of course,
it's very tricky to get right :-(
As far the masked array API: I'm still not convinced we know how we
want these things to behave. The masked arrays in master currently
implement MISSING semantics, but AFAICT everyone who wants MISSING
semantics prefers NA-dtypes or even plain old NaN's over a masked
implementation. And some of the current implementation's biggest
backers, like Chuck, have argued that they should switch to
skipNA=True, which is more of an IGNORED-style semantic. OTOH, there's
still disagreement over how IGNORED-style semantics should even work
(I'm thinking of that discussion about commutivity). The best existing
model is numpy.ma -- but the numpy.ma API is quite different from the
NEP, in more ways than just the default setting for skipNA. numpy.ma
uses the opposite convention for mask values, it has additional
concepts like the fillvalue, hardmask versus softmask, and then
there's the whole way the NEP uses views to manage the mask. And I
don't know which of these numpy.ma features are useful, which are
extraneous, and which are currently useful but will become extraneous
once the users who really wanted something more like NA-dtypes switch
to those.
So we all agree that masked arrays can be useful, and that numpy.ma
has problems. But straightforwardly porting numpy.ma to C doesn't seem
like it would help much, and neither does simply declaring that
numpy.ma has been deprecated in favour of a new NEP-like API.
So, I dunno. It seems like it might make the most sense to:
1) take the mask fields out of the core ndarray (while leaving the
rest of Mark's infrastructure, as per above)
2) make sure we have the hooks needed so that numpy.ma, and NEP-like
APIs, and whatever other experiments people want to try, can all
integrate well with ufuncs, and make any other extensions that are
generally useful and required so that they can work well
3) once we've experimented, move the winner into the core. Or whatever
else makes sense to do once we understand what we're trying to
-- Nathaniel
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-April/061827.html","timestamp":"2014-04-20T21:10:12Z","content_type":null,"content_length":"7602","record_id":"<urn:uuid:64705388-ec2f-4311-96c4-45efdfd819e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Easy problem
Here's my reasoning, hidden for anyone who wants to do it themselves:
Sorry to anyone uptight about punctuation, but the hide tag has a problem with apostrophes.
Anyway, if you read that, you can see that it was mostly guesswork that got it and we're still nowhere near proving that that's the only combination. I've got as far as showing that there needs to be
1 even and 2 odd, but beyond there I'm stuck.
Last edited by mathsyperson (2005-08-03 04:12:21)
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=12123","timestamp":"2014-04-21T14:51:52Z","content_type":null,"content_length":"18791","record_id":"<urn:uuid:f5599e61-6208-496b-9e0c-09cd5b9fb259>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: July 2004 [00138]
[Date Index] [Thread Index] [Author Index]
Re: Normal distribtion
• To: mathgroup at smc.vnet.net
• Subject: [mg49224] Re: Normal distribtion
• From: "Roger L. Bagula" <rlbtftn at netscape.net>
• Date: Thu, 8 Jul 2004 02:50:59 -0400 (EDT)
• References: <7228735a.0407050100.4695fc68@posting.google.com> <QaednZQbSYcwpnTdRVn-vA@comcast.com> <ccdlms$sd5$1@smc.vnet.net> <ccg4n7$ot0$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Dear Ray Koopman,
I'm familar with the Lorentzian distribution also called Cauchy also
called classically the Witch of Agnesi.
What's your point in your trignometric identity manipulation to get
another of the "peak" distributions?
I use the Normal function to give my amplitude at the address on the
real line x :{ x,-Infinity, Infinity}.
By my experiments this function of mine gives a much larger variability
than the Mathematica built in White noise function or what you get from
a Polar normal distribution, but the logic of the derivation is clear:
1) a random number is found in [0,1]
2) a point on a circle is found
3) that point is projected to the real line at x
4) that real line value gives an amplitude of a distribution that is a
normal distribution.
The result is a Gaussian normal noise.
I really can't make it any simpler.
The idea was to develop a Gaussian noise generator whose derivation was
simple and obvious.
Ray Koopman wrote:
> "Roger L. Bagula" <rlbtftn at netscape.net> wrote in message news:<ccdlms$sd5$1 at smc.vnet.net>...
>>I found a better faster way to get a Gaussian/ white noise:
>>In Mathematica notebook style:
>>ListPlot[noise,PlotRange--> All,PlotJoined->True]
>>It is a projective line ( circle to line random taken as the basic for a
>>normal distribution's amplitude.) based algorithm.
> (1+Sqrt[1-a^2])/a = Cot[ArcSin[a]/2], so
> y = x[Sin[2*Pi*Random[]]] = Cot[Pi*Random[]] has a Cauchy distribution.
> Exp[-y^2/2]/Sqrt[2*Pi] is the standard normal density function,
> but why do you use it here?
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Jul/msg00138.html","timestamp":"2014-04-17T09:50:46Z","content_type":null,"content_length":"36243","record_id":"<urn:uuid:2c61e8ec-62de-41f8-90ab-8ab402a0b23d>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bosker Blog
2014-02-25 § Leave a comment
Suppose you have a lock of this sort n dials and k numbers on each dial. Let m(n, k) be the minimum number of turns that always suffice to open the lock from any starting position, where a turn
consists of rotating any number of adjacent rings by one place.
In the previous post, we found an algorithm for computing these bicycle lock numbers, revealing a mysterious symmetry, « Read the rest of this entry »
2014-02-18 § 3 Comments
Don’t lock your bicycle with a combination lock. Someone will steal it: I learnt this the hard way. It’s quite easy to open a combination lock by feel, without knowing the combination. Try it: with a
bit of practice, you can open the lock with your eyes shut. (It’s easier to do this with an old wobbly lock than a tight-fitting new one.)
« Read the rest of this entry »
2013-11-13 § 4 Comments
There is a new feature of Pages and Keynote, not mentioned in any of Apple’s publicity nor in any press coverage I’ve seen, that is really very interesting. Perhaps it will even one day prove to have
been revolutionary, in a quiet way. « Read the rest of this entry »
2013-08-18 § 29 Comments
I hate the Pumping Lemma for regular languages. It’s a complicated way to express an idea that is fundamentally very simple, and it isn’t even a very good way to prove that a language is not regular.
Here it is, in all its awful majesty: for every regular language L, there exists a positive whole number p such that every string w∈L that has p characters or more can be broken down into three
substrings xyz, where y is not the empty string and the total length of xy is at most p, and for every natural number i the string xy^iz is also in L.
« Read the rest of this entry »
2013-08-16 § 6 Comments
Shadab Ahmed raised an interesting question. Open a Unix command shell, type : '!!' and press return. Then type : "!!" '!!' and press return. Now repeat the following a few times: press the up arrow,
and press return.
2013-07-10 § Leave a comment
Paddy3118 wrote about partitioning elements in the same way a Venn diagram does. So, if we have sets A, B and C, the partitions are
2013-05-10 § Leave a comment
My PhD thesis (2007) was available for several years from my web site at the University of Manchester, but since that site was taken down it’s been unavailable. Today’s announcement is that I’ve
finally got round to uploading it to GitHub.
Recent Comments
• Decoding the mysterious symmetry of the bicycle lock numbers | Bosker Blog on The bicycle lock problem
• HTFB on The bicycle lock problem
|
{"url":"http://bosker.wordpress.com/","timestamp":"2014-04-19T07:47:24Z","content_type":null,"content_length":"40762","record_id":"<urn:uuid:14b18bd2-fe8e-4b87-b413-fa872e909c0c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Finding area was rigorously defined by Riemann, which is why we call these Riemann Sums. The sums transform into the integral, which is used to find area.
EDIT: I apologize. My Latex coding is not working. Perhaps a mod could point out the error.
[Don't worry about it. LaTeX is pretty complicated. It took me a while to get that working. Hopefully that's what you meant to say
|
{"url":"http://www.mathisfunforum.com/post.php?tid=2553&qid=25486","timestamp":"2014-04-18T13:32:45Z","content_type":null,"content_length":"18088","record_id":"<urn:uuid:f223c3b8-60d9-47c2-9c82-ab2620391b38>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
Contemporary Mathematics
1983; 346 pp; softcover
Volume: 20
ISBN-10: 0-8218-5016-4
ISBN-13: 978-0-8218-5016-9
List Price: US$46
Member Price: US$36.80
Order Code: CONM/20
This volume arose from a special session on Low Dimensional Topology organized and conducted by Dr. Lomonaco at the American Mathematical Society meeting held in San Francisco, California, January
7-11, 1981.
• J. S. Birman and R. F. Williams -- Knotted periodic orbits in dynamical systems II: Knot holders for fibered knots
• S. A. Bleiler -- Doubly prime knots
• J. Brandenburg, M. Dyer, and R. Strebel -- On J.~H.~C.~Whitehead's aspherical question II
• R. Fenn and D. Sjerve -- Geometric cohomology theory
• R. Fintushel and R. J. Stern -- Seifert fibered 3-manifolds and nonorientable 4-manifolds
• M. H. Freedman -- A conservative Dehn's lemma
• D. Gabai -- The Murasugi sum is a natural geometric operation
• D. Gillman and D. Rolfsen -- Manifolds and their special spines
• S. Jajodia and B. Magurn -- Realizing units as Whitehead torsions in low dimensions
• D. Johnson -- A survey of the Torelli group
• L. H. Kauffman -- Combinatorics and knot theory
• R. Kramer -- Dehn twists and handlebodies of genus two
• J. P. Levine -- Localization of link modules
• A. Libgober -- Alexander modules of plane algebraic curves
• S. J. Lomonaco, Jr. -- Five dimensional knot theory
• T. Maeda and K. Murasugi -- Covering linkage invariants and Fox's problem 13
• R. Mandelbaum and B. Moishezon -- Numeric invariants in 3-manifolds
• W. W. Menasco -- Polyhedra representation of link complements
• J. J. Ratcliffe -- A fibered knot in a homology 3-sphere whose group is nonclassical
• M. Scharlemann and C. Squier -- Automorphisms of the free group of rank two without finite orbits
|
{"url":"http://www.ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-20","timestamp":"2014-04-23T12:59:30Z","content_type":null,"content_length":"15242","record_id":"<urn:uuid:9c89aee4-9c20-49d5-a8eb-5ff4ab52b2cf>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Historical Geology/TEX86
From Wikibooks, open books for an open world
In this article we shall look at the TEX[86] temperature proxy, how it works, and how we know that it works.
The TEX[86] method is based on glycerol dibiphytanyl glycerol tetraethers (GDGTs). These come in various forms with more or fewer cyclopentane structures (the reader need neither know nor care what
these actually are). The GDGTs of interest to us can be denoted as GDGT 1, GDGT 2, GDGT 3 and GDGT 4' (pronounced "four-prime"), where the numbers 1, 2, 3 and 4 correspond to the number of
(For the benefit of those readers who wish to research the TEX[86] method in the technical literature, I should point out that different papers use different numbering schemes; the one used here
seems most suitable, because of the correspondence between the GDGT number and the number of cyclopentanes.)
Crenarchaeota and temperature[edit]
In nature these GDGTs are produced by the group of single-celled organisms known as the Crenarchaeota. As with the alkenones discussed in the previous article, the GDGTs resist processes that destroy
most organic compounds, and so can be found in marine sediment; and just as with the alkenones, the proportions of the different GDGTs produced by the Crenarchaeota varies with temperature, according
to the formula:
T = 56.2 × TEX[86] - 10.78
where T is the temperature in °C and TEX[86] (an abbreviation of "TetraEther indeX of tetraethers consisting of 86 carbon atoms") is defined as the ratio of the sum of the quantities of GDGTs 2, 3
and 4' to the sum of the quantities of GDGTs 1, 2, 3 and 4'.
It should be noted that this relationship ceases to hold below about 5°C; below this temperature the variation in TEX[86] becomes negligible and so measurements of TEX[86] can't distinguish between
temperatures below that point.
How do we know?[edit]
We can measure TEX[86] in living organisms and recent sediments, and measure the temperature of the water in which they are found; this is how the formula given above was derived.
However, unlike the U^k'[37] method, the relationship is harder to demonstrate experimentally. Experiments do show that TEX[86] increases with temperature in the lab; however for some unknown reason
lab-grown cultures of Crenarchaeota produce less GDGT 4' than is found in Crenarchaeota in the wild, and so the exact relationship between temperature and TEX[86] can't yet be replicated in the
|
{"url":"http://en.wikibooks.org/wiki/Historical_Geology/TEX86","timestamp":"2014-04-20T06:36:17Z","content_type":null,"content_length":"27715","record_id":"<urn:uuid:c0da0410-5c28-42f9-a2ac-734e1f7a372e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Woodlynne, NJ Algebra 2 Tutor
Find a Woodlynne, NJ Algebra 2 Tutor
...I feel that getting experience teaching students one on one is the best way for me to have an immediate impact. This will especially help to personalize the teaching experience and is an
effective way to create a trusting relationship. I am especially personable and I know I have the ability to...
16 Subjects: including algebra 2, Spanish, calculus, physics
Students and families,Only in my third year of teaching, I have developed extensive knowledge around elementary mathematics. I have taught students in grades 2-12 in a variety of settings - urban
classrooms, after-school programs, summer enrichment, and summer schools. I work with students to develop strong conceptual understanding and high math fluency through creative math games.
9 Subjects: including algebra 2, geometry, ESL/ESOL, algebra 1
...I am currently a junior in the University of Pennsylvania's undergraduate math program. Previously, I completed undergraduate work at North Carolina State University for a degree in
Philosophy. Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects
of math.
22 Subjects: including algebra 2, calculus, statistics, geometry
...My personal goal while tutoring is to bring a student to an understanding of the topic that develops confidence in the material and the ability to work with it on their own eventually and
build upon that. Beyond that, I would like to bring forth a love of learning for its own sake and skills to ...
33 Subjects: including algebra 2, English, French, physics
...Topics include, but are not limited to: (1) operations with real numbers, (2) linear equations and inequalities, (3) relations and functions, (4) polynomials, (5) algebraic fractions, and (6)
nonlinear equations. When I taught Algebra 2, I would take my students across the street to McDonald's a...
12 Subjects: including algebra 2, geometry, ASVAB, algebra 1
Related Woodlynne, NJ Tutors
Woodlynne, NJ Accounting Tutors
Woodlynne, NJ ACT Tutors
Woodlynne, NJ Algebra Tutors
Woodlynne, NJ Algebra 2 Tutors
Woodlynne, NJ Calculus Tutors
Woodlynne, NJ Geometry Tutors
Woodlynne, NJ Math Tutors
Woodlynne, NJ Prealgebra Tutors
Woodlynne, NJ Precalculus Tutors
Woodlynne, NJ SAT Tutors
Woodlynne, NJ SAT Math Tutors
Woodlynne, NJ Science Tutors
Woodlynne, NJ Statistics Tutors
Woodlynne, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Woodlynne_NJ_algebra_2_tutors.php","timestamp":"2014-04-21T00:07:35Z","content_type":null,"content_length":"24526","record_id":"<urn:uuid:3936f522-53ef-4454-a42e-e51fa74666df>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graphs of Linear Equations ( Read ) | Algebra
What if a bowling alley charged $3 for the first game and $2 for each additional game? How could you graph this functional relationship and use it to find the cost of playing 5 games? After
completing this Concept, you'll be able to graph and analyze linear functions like this one.
Watch This
CK-12 Foundation: 0403S Graphs of Linear Equations (H264)
You’re stranded downtown late at night with only $8 in your pocket, and your home is 6 miles away. Two cab companies serve this area; one charges $1.20 per mile with an additional $1 fee, and the
other charges $0.90 per mile with an additional $2.50 fee. Which cab will be able to get you home?
Graph a Linear Equation
At the end of Lesson 4.1 we looked at ways to graph a function from a rule. A rule is a way of writing the relationship between the two quantities we are graphing. In mathematics, we tend to use the
words formula and equation to describe the rules we get when we express relationships algebraically. Interpreting and graphing these equations is an important skill that you’ll use frequently in
Example A
A taxi costs more the further you travel. Taxis usually charge a fee on top of the per-mile charge to cover hire of the vehicle. In this case, the taxi charges $3 as a set fee and $0.80 per mile
traveled. Here is the equation linking the cost in dollars $(y)$$(x)$ .
$y = 0.8x + 3$
Graph the equation and use your graph to estimate the cost of a seven-mile taxi ride.
We’ll start by making a table of values. We will take a few values for $x$$y-$
First, here’s our table of values:
$x$ $y$
1 3.8
2 4.6
3 5.4
4 6.2
And here’s our graph:
To find the cost of a seven-mile journey, first we find $x = 7$$y-$$y = 8$$y = 9$
A seven mile taxi ride would cost approximately $8.50 ($8.60 exactly).
Here are some things you should notice about this graph and the formula that generated it:
• The graph is a straight line (this means that the equation is linear ), although the function is discrete and really just consists of a series of points.
• The graph crosses the $y-$$y = 3$$a + 3$
• Every time we move over by one square we move up by 0.8 squares (notice that that’s also the coefficient of $x$
• If we move over by three squares, we move up by $3 \times 0.8$squares.
Example B
A small business has a debt of $500,000 incurred from start-up costs. It predicts that it can pay off the debt at a rate of $85,000 per year according to the following equation governing years in
business $(x)$$(y)$ .
$y = -85x + 500$
Graph the above equation and use your graph to predict when the debt will be fully paid.
First, we start with our table of values:
$x$ $y$
Then we plot our points and draw the line that goes through them:
Notice the scale we’ve chosen here. There’s no need to include any points above $y= 500$
Next we need to determine how many years it takes the debt to reach zero, or in other words, what $x-$$y-$$x = 4$$y-$$x-$$x = 4$$x-$
To read the time that the debt is paid off, we simply read the point where the line hits $y = 0$$x-$$x = 6$ the debt will definitely be paid off in six years.
To see more simple examples of graphing linear equations by hand, see the Khan Academy video on graphing lines at http://www.youtube.com/watch?v=2UrcUfBizyw . The narrator shows how to graph several
linear equations, using a table of values to plot points and then connecting the points with a line.
Analyze Graphs of Linear Functions
We often use graphs to represent relationships between two linked quantities. It’s useful to be able to interpret the information that graphs convey. For example, the chart below shows a fluctuating
stock price over ten weeks. You can read that the index closed the first week at about $68, and at the end of the third week it was at about $62. You may also see that in the first five weeks it lost
about 20% of its value, and that it made about 20% gain between weeks seven and ten. Notice that this relationship is discrete, although the dots are connected to make the graph easier to interpret.
Analyzing graphs is a part of life - whether you are trying to decide to buy stock, figure out if your blog readership is increasing, or predict the temperature from a weather report. Many graphs are
very complicated, so for now we’ll start off with some simple linear conversion graphs. Algebra starts with basic relationships and builds to more complicated tasks, like reading the graph above.
Example C
Below is a graph for converting marked prices in a downtown store into prices that include sales tax. Use the graph to determine the cost including sales tax for a $6.00 pen in the store.
To find the relevant price with tax, first find the correct pre-tax price on the $x-$$x=6$
Draw the line $x = 6$$y-$$y \approx 6.75$$y = 6$$y = 7$
The approximate cost including tax is $6.75.
Example D
The graph for converting temperature from Fahrenheit to Celsius is shown below. Use the graph to convert the following:
a) $70^\circ$Fahrenheit to Celsius
b) $0^\circ$Celsius to Fahrenheit
a) To find $70^\circ$$x-$$x = 70$$y-$
b) To find $0^\circ$$y = 0$$x-$
Watch this video for help with the Examples above.
CK-12 Foundation: Graphs of Linear Equations
• Be aware that although we graph the function as a line to make it easier to interpret, the function may actually be discrete .
Guided Practice
The graph for converting temperature from Fahrenheit to Celsius is shown below. Use the graph to convert the following:
a) $0^\circ$
b) $30^\circ$
a) To find $0^\circ$$y-$$x = 0$$y-$
b) To find $30^\circ$$y = 30$
For 1-3, make a table of values for the following equations and then graph them.
1. $y = 2x + 7$
2. $y = 0.7x - 4$
3. $y = 6 - 1.25x$
4. “Think of a number. Multiply it by 20, divide the answer by 9, and then subtract seven from the result.”
1. Make a table of values and plot the function that represents this sentence.
2. If you picked 0 as your starting number, what number would you end up with?
3. To end up with 12, what number would you have to start out with?
5. At the airport, you can change your money from dollars into euros. The service costs $5, and for every additional dollar you get 0.7 euros.
1. Make a table for this and plot the function on a graph.
2. Use your graph to determine how many euros you would get if you give the office $50.
3. To get 35 euros, how many dollars would you have to pay?
4. The exchange rate drops so that you can only get 0.5 euros per additional dollar. Now how many dollars do you have to pay for 35 euros?
For 6-9, the graph below shows a conversion chart for converting between weight in kilograms and weight in pounds. Use it to convert the following measurements.
6. 4 kilograms into weight in pounds
7. 9 kilograms into weight in pounds
8. 12 pounds into weight in kilograms
9. 17 pounds into weight in kilograms
For 10-12, use the graph from problems 6-9 to answer the following questions.
10. An employee at a sporting goods store is packing 3-pound weights into a box that can hold 8 kilograms. How many weights can she place in the box?
11. After packing those weights, there is some extra space in the box that she wants to fill with one-pound weights. How many of those can she add?
12. After packing those, she realizes she misread the label and the box can actually hold 9 kilograms. How many more one-pound weights can she add?
|
{"url":"http://www.ck12.org/algebra/Graphs-of-Linear-Equations/lesson/Graphs-of-Linear-Equations-Intermediate/","timestamp":"2014-04-18T19:23:13Z","content_type":null,"content_length":"131985","record_id":"<urn:uuid:a971d978-5c61-41a3-b60f-b26a93f578d0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
andM.Yannakakis, \On the hardnessofapproximating minimization problems
- Algorithmica , 1996
"... The dominating set problem in graphs asks for a minimum size subset of vertices with the following property: each vertex is required to either be in the dominating set, or adjacent to some node
in the dominating set. We focus on the question of finding a connected dominating set of minimum size, whe ..."
Cited by 277 (9 self)
Add to MetaCart
The dominating set problem in graphs asks for a minimum size subset of vertices with the following property: each vertex is required to either be in the dominating set, or adjacent to some node in
the dominating set. We focus on the question of finding a connected dominating set of minimum size, where the graph induced by vertices in the dominating set is required to be connected as well. This
problem arises in network testing, as well as in wireless communication. Two polynomial time algorithms that achieve approximation factors of O(H (\Delta)) are presented, where \Delta is the maximum
degree, and H is the harmonic function. This question also arises in relation to the traveling tourist problem, where one is looking for the shortest tour such that each vertex is either visited, or
has at least one of its neighbors visited. We study a generalization of the problem when the vertices have weights, and give an algorithm which achieves a performance ratio of 3 ln n. We also
consider the ...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=11830977","timestamp":"2014-04-24T05:53:38Z","content_type":null,"content_length":"12696","record_id":"<urn:uuid:a72c71ea-0919-49f2-a12a-8914a3c7fb7e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mechanization of Farming: Frank Sadorus and Changes in Agriculture
1880 - Frank Sadorus born
1884-1890 - Horse drawn combine used in Pacific coast areas
1885 - George Eastman marketed first box camera
1889-1919 - Period of farm prosperity
1890 - Most of the basic potential of agricultural machinery
dependent on horsepower had been discovered
1890’s - Agriculture became increasingly mechanized and commercialized
1900-1920 - Urban influences on rural life intensified 1907 - Frank Sadorus begins taking photographs (he is 27)
1908 - Henry Ford manufactured the first Model-T automobile - President Roosevelt’s Country Life Commission was established and focused attention on the problems of farm wives and the difficulty of
keeping children on the farm
1910-1915 - Big open-geared tractors came into use in areas of extensive farming
1911 - GWB, Frank’s father, died
1912 - Frank Sadorus took his last photographs (he is 32)
1917 - United States declared war on Germany and enters World War I - Sadorus family farm was sold - Frank Sadorus was committed to an institution
1920’s - Agricultural surpluses became the chief agricultural issue
1920-1940 - Gradual increase in farm production resulted from expanded use of mechanized power
1934 - Agricultural Adjustment Act
Farmers and the Land:
│Decade│% of Labor Force = Farmers│Average # Acres│Overall Economic Cycle │
│1880 │49% │134 │business expansion │
│1890 │43% │136 │widespread bankruptcies and depression │
│1900 │38% │147 │return of prosperity / Panic of 1907 │
│1910 │31% │138 │prosperity and war boom │
│1920 │27% │148 │sharp postwar recession / speculative boom │
│1930 │21% │157 │Great Depression Change of Farming │
Technology and Its Influence:
1850 - About 80 labor hours were required to produce 100 bushels of corn (2 1/2 acres) by hand planting and with a walking plow and harrow.
1890 - About 40 labor hours were required to produce 100 bushels of corn (2 1/2 acres) with a two-bottom gangplow, disk and peg-tooth harrow, and a two-row planter.
1930 About 20 labor hours were required to produce 100 bushels of corn (2 1/2 acres) with a two-bottom gang plow, seven-foot tandem disk, harrow, twelve-foot combine, and trucks.
How Does the Math Work? (for students, if you want them to figure it out)
When looking at all these figures, it is easier to make sense of them by looking at the number of people needed in order to produce the same amount of corn each year.
This can be done in a couple of ways:
First Calculation: We know from the information above about the 1850’s that it takes 80 hours to farm 2 1/2 acres. How long would it take to farm one acre?
80 ÷ 2.5 = ? ÷ 1
This ratio shows us that it would take 32 hours to farm one acre in the 1850’s. How long would it take to farm one acre in the 1890’s? or the 1930’s?
(answers: 1890’s = 16 hours to farm one acre; 1930’s = 8 hours to farm one acre)
The Sadorus family farm was roughly 180 acres.
How many hours would it take to farm 180 acres in the 1850s?
32 hours/acre x 180 acres = 5,760 hours
During the 1890s? 16 hours/acre x 180 acres = 2,880 hours
During the 1930s? 8 hours/acre x 180 acres = 1,440 hours
Alternative Calculation: There is another way to figure out how many hours it would take to farm the Sadorus’s land. We start with the same information. We know that during all of these time periods,
2 1/2 acres are needed to produce one unit, one hundred bushels of corn.
180 acres ÷ 2.5 acres = 72
In this math equation we broke down the Sadorus's farm into how many units can be produced. One unit is 100 bushels of corn. We found that the Sadorus's farm can produce 72 units.
In the 1850s it took about 80 hours to produce one unit. 80 hours x 72 units = 5,760 hours
In the 1890s it took about 40 hours to produce one unit. 40 hours x 72 units = 2,880 hours
In the 1930s it took about 20 hours to produce one unit. 20 hours x 72 units = 1,440 hours
We’ve already compared how long it would take to farm the Sadorus land during the different time periods. Now we want to find out how many people it would take to get the work done in the same amount
of time. Because we want to compare these numbers, we want to manipulate them in the same way.
Let’s assume that during all three time periods the people working work for ninety days. The number of days in itself is not important - the fact that we use the same number for ALL of our
calculations is.
1850s: 5,760 hours of work ÷ 90 days = 64 hours of work a day
1890s: 2,880 hours of work ÷ 90 days = 32 hours of work a day
1930s: 1,440 hours of work ÷ 90 days = 16 hours of work a day
Now, we know that you can’t work more than 24 hours a day - most people today don’t work more than 9 (except at planting and harvest times). So, in order to get the work done on time, they would have
to divide the work among a group of people. Let’s say that each person works 8 hours a day.
1850s: 64 hours of work a day ÷ 8 hours work for one person = 8 people working
1890s: 32 hours of work a day ÷ 8 hours work for one person = 4 people working
1930s: 16 hours of work a day ÷ 8 hours work for one person = 2 people working
These numbers can do a lot to help students understand how the changes in agricultural technology affected farmers and their families. Even in the mid-1800s it was essential that the family worked
together and stayed together in order to farm the land. As the times changed and there was more and there was more dependence on machinery, fewer people were needed to produce the same amount of
crops - and it was no longer economically feasible for families to stay together on the same land.
Do these statements agree with what actually happened with the Sadorus family? Are there other possible reasons for why the family sold their farm? (for example, they couldn’t afford to keep up with
the technology)
Photography Resources:
Eastman Photography site
Land record search site for Illinois (State Archives)
American Museum of Photography site
Relevant Pictures from Sadorus Collection:
Sadorus Collection # ---- Picture Subject
034 - farm machinery (thresher?)
035 - bailing hay by hand
046 - Phoebe on knees working in the ground
051 - machinery
061 - picking corn by hand
083 - machinery / horses
088 - machinery
361 - manure spreader
415 - very large machine
529 - corn picking wagon
555 - family in fall harvest
422 - bailing hay
431 - farm machinery
557 - farm machinery and horses
Research Assignment:
Students can divide into groups or work individually on a topic to research farm statistics today and report on the average size of farms, the number of farms in Illinois, the types of farms, new
methods of farming, new technologies, and the possible causes of bankruptcy or sell off, comparing today to the 1950s or 1970s.
Agriculture Resources:
http://www.ilfb.org/ Illinois Farm Bureau Web site
http://www.fb.com/ American Farm Bureau
http://www.agstats.state.il.us/ Illinois Agricultural Statistics Service Web site. Click on Links for more sources.
|
{"url":"http://www.museum.state.il.us/ismdepts/art/sadorus/FarmMechanizationLesson.html","timestamp":"2014-04-17T04:24:56Z","content_type":null,"content_length":"25495","record_id":"<urn:uuid:a0b8dadc-4aa2-4602-a3b7-c6d9f324a4c0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why are DAGs good for task dependencies?
The ‘G’ in DAG is ‘Graph’. A Graph is a collection of nodes and edges that connect the nodes. For our purposes, each node would be a task, and each edge would be a dependency. The ‘D’ in DAG stands
for ‘Directed’. This means that each edge has a direction associated with it. So we can interpret the edge (a,b) as meaning that b depends on a, whereas the edge (b,a) would mean a depends on b. The
‘A’ is ‘Acyclic’, meaning that there must not be any closed loops in the graph. This is important for dependencies, because if a loop were closed, then a task could ultimately depend on itself, and
never be able to run. If your workflow can be described as a DAG, then it is impossible for your dependencies to cause a deadlock.
A Sample DAG
Here, we have a very simple 5-node DAG:
With NetworkX, an arrow is just a fattened bit on the edge. Here, we can see that task 0 depends on nothing, and can run immediately. 1 and 2 depend on 0; 3 depends on 1 and 2; and 4 depends only on
A possible sequence of events for this workflow:
0. Task 0 can run right away
1. 0 finishes, so 1,2 can start
2. 1 finishes, 3 is still waiting on 2, but 4 can start right away
3. 2 finishes, and 3 can finally start
Further, taking failures into account, assuming all dependencies are run with the default success=True,failure=False, the following cases would occur for each node’s failure:
0. fails: all other tasks fail as Impossible
1. 2 can still succeed, but 3,4 are unreachable
2. 3 becomes unreachable, but 4 is unaffected
3. and 4. are terminal, and can have no effect on other nodes
The code to generate the simple DAG:
import networkx as nx
G = nx.DiGraph()
# add 5 nodes, labeled 0-4:
map(G.add_node, range(5))
# 1,2 depend on 0:
# 3 depends on 1,2
# 4 depends on 1
# now draw the graph:
pos = { 0 : (0,0), 1 : (1,1), 2 : (-1,1),
3 : (0,2), 4 : (2,2)}
nx.draw(G, pos, edge_color='r')
For demonstration purposes, we have a function that generates a random DAG with a given number of nodes and edges.
def random_dag(nodes, edges):
"""Generate a random Directed Acyclic Graph (DAG) with a given number of nodes and edges."""
G = nx.DiGraph()
for i in range(nodes):
while edges > 0:
a = randint(0,nodes-1)
while b==a:
b = randint(0,nodes-1)
if nx.is_directed_acyclic_graph(G):
edges -= 1
# we closed a loop!
return G
So first, we start with a graph of 32 nodes, with 128 edges:
In [2]: G = random_dag(32,128)
Now, we need to build our dict of jobs corresponding to the nodes on the graph:
In [3]: jobs = {}
# in reality, each job would presumably be different
# randomwait is just a function that sleeps for a random interval
In [4]: for node in G:
...: jobs[node] = randomwait
Once we have a dict of jobs matching the nodes on the graph, we can start submitting jobs, and linking up the dependencies. Since we don’t know a job’s msg_id until it is submitted, which is
necessary for building dependencies, it is critical that we don’t submit any jobs before other jobs it may depend on. Fortunately, NetworkX provides a topological_sort() method which ensures exactly
this. It presents an iterable, that guarantees that when you arrive at a node, you have already visited all the nodes it on which it depends:
In [5]: rc = Client()
In [5]: view = rc.load_balanced_view()
In [6]: results = {}
In [7]: for node in G.topological_sort():
...: # get list of AsyncResult objects from nodes
...: # leading into this one as dependencies
...: deps = [ results[n] for n in G.predecessors(node) ]
...: # submit and store AsyncResult object
...: with view.temp_flags(after=deps, block=False):
...: results[node] = view.apply_with_flags(jobs[node])
Now that we have submitted all the jobs, we can wait for the results:
In [8]: view.wait(results.values())
Now, at least we know that all the jobs ran and did not fail (r.get() would have raised an error if a task failed). But we don’t know that the ordering was properly respected. For this, we can use
the metadata attribute of each AsyncResult.
These objects store a variety of metadata about each task, including various timestamps. We can validate that the dependencies were respected by checking that each task was started after all of its
predecessors were completed:
def validate_tree(G, results):
"""Validate that jobs executed after their dependencies."""
for node in G:
started = results[node].metadata.started
for parent in G.predecessors(node):
finished = results[parent].metadata.completed
assert started > finished, "%s should have happened after %s"%(node, parent)
We can also validate the graph visually. By drawing the graph with each node’s x-position as its start time, all arrows must be pointing to the right if dependencies were respected. For spreading,
the y-position will be the runtime of the task, so long tasks will be at the top, and quick, small tasks will be at the bottom.
In [10]: from matplotlib.dates import date2num
In [11]: from matplotlib.cm import gist_rainbow
In [12]: pos = {}; colors = {}
In [12]: for node in G:
....: md = results[node].metadata
....: start = date2num(md.started)
....: runtime = date2num(md.completed) - start
....: pos[node] = (start, runtime)
....: colors[node] = md.engine_id
In [13]: nx.draw(G, pos, node_list=colors.keys(), node_color=colors.values(),
....: cmap=gist_rainbow)
|
{"url":"http://ipython.org/ipython-doc/rel-0.13.1/parallel/dag_dependencies.html","timestamp":"2014-04-16T04:38:04Z","content_type":null,"content_length":"25151","record_id":"<urn:uuid:6ae75b6b-e621-4fce-a7d2-c75a55c84015>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roxbury, MA
Find a Roxbury, MA Precalculus Tutor
...I am also an expert in time management and study skills, which are an essential part of scholastic success. I am patient, enthusiastic about learning, and will work very hard with you to
achieve your academic goals. JoannaI have three years' experience tutoring high school students in biology.
10 Subjects: including precalculus, chemistry, geometry, biology
...I help with math - Algebra, Geometry, Pre-Calculus, Statistics. I am comfortable with Standard, Honors, and AP curricula. In addition to private tutoring, I have taught summer courses, provided
tutoring in Pilot schools, assisted in classrooms, and run test preparation classes (MCAS and SAT). Students tell me I'm awesome; parents tell me that I am easy to work with.
8 Subjects: including precalculus, geometry, statistics, algebra 2
...I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. I can tutor
any subject area from elementary math to college level.I got an A+ in Discrete Mathematics in College and an A in the graduate course 6.431 Applied Probability at MIT last year.
16 Subjects: including precalculus, French, elementary math, algebra 1
...I have excellent reading and communication skills, and a background in Latin to help with vocabulary. I am well aware of study methods to improve standardized test scores, and am able to
communicate these methods effectively. I have over ten years of formal musical training (instrumental), so I have a solid foundation in music theory.
38 Subjects: including precalculus, English, reading, chemistry
...Although I am not a fully certified teacher in Massachusetts, I have passed the MTEL Math and literacy exams for teaching 9-12th grade math subject content. Whether a student is trying to avoid
dropping down a level, or if a student aspires to move up a level next year, students have been able t...
13 Subjects: including precalculus, calculus, geometry, GRE
|
{"url":"http://www.purplemath.com/Roxbury_MA_precalculus_tutors.php","timestamp":"2014-04-19T12:29:21Z","content_type":null,"content_length":"24423","record_id":"<urn:uuid:e3855246-55c9-4bac-8f55-2bbda94c2077>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Efficient algorithm for embedding graphs in arbitrary surfaces
- Math. Slovaca , 1997
"... Let K be a subgraph of G. It is shown that if G is 3–connected modulo K then it is possible to replace branches of K by other branches joining same pairs of main vertices of K such that G has no
local bridges with respect to the new subgraph K. A linear time algorithm is presented that either perfor ..."
Cited by 8 (8 self)
Add to MetaCart
Let K be a subgraph of G. It is shown that if G is 3–connected modulo K then it is possible to replace branches of K by other branches joining same pairs of main vertices of K such that G has no
local bridges with respect to the new subgraph K. A linear time algorithm is presented that either performs such a task, or finds a Kuratowski subgraph K5 or K3,3 in a subgraph of G formed by a
branch e and local bridges on e. This result is needed in linear time algorithms for embedding graphs in surfaces.
, 1994
"... A linear time algorithm is presented that, for a given graph G, finds an embedding of G in the torus whenever such an embedding exists, or exhibits a subgraph\Omega of G of small branch size
that cannot be embedded in the torus. 1 Introduction Let K be a subgraph of G, and suppose that we are ..."
Cited by 4 (0 self)
Add to MetaCart
A linear time algorithm is presented that, for a given graph G, finds an embedding of G in the torus whenever such an embedding exists, or exhibits a subgraph\Omega of G of small branch size that
cannot be embedded in the torus. 1 Introduction Let K be a subgraph of G, and suppose that we are given an embedding of K in some surface. The embedding extension problem asks whether it is embedding
extension problem possible to extend the embedding of K to an embedding of G in the same surface, and any such embedding is an embedding extension of K to G. An embedding extension obstruction for
embedding extensions is a subgraph\Omega of G \Gamma E(K) such that obstruction the embedding of K cannot be extended to K [ \Omega\Gamma The obstruction is small small if K [\Omega is homeomorphic
to a graph with a small number of edges. If\Omega is small, then it is easy to verify (for example, by checking all the possibilities Supported in part by the Ministry of Science and Technolo...
- SIAM J. Discrete Math , 1997
"... Abstract. Let K = C ∪ e1 ∪ e2 be a subgraph of G, consisting of a cycle C and disjoint paths e1 and e2, connecting two interlacing pairs of vertices in C. Suppose that K is embedded in the
MöbiusbandinsuchawaythatC lies on its boundary. An algorithm is presented which in linear time extends the embe ..."
Cited by 3 (3 self)
Add to MetaCart
Abstract. Let K = C ∪ e1 ∪ e2 be a subgraph of G, consisting of a cycle C and disjoint paths e1 and e2, connecting two interlacing pairs of vertices in C. Suppose that K is embedded in the
MöbiusbandinsuchawaythatC lies on its boundary. An algorithm is presented which in linear time extends the embedding of K to an embedding of G, if such an extension is possible, or finds a “nice ”
obstruction for such embedding extensions. The structure of obtained obstructions is also analysed in details. Key words. surface embedding, obstruction, Möbius band, algorithm AMS subject
classifications. 05C10, 05C85, 68Q20 1. Introduction. Let K be a subgraph of a graph G. A K-bridge (or a Kcomponent)inG is a subgraph of G which is either an edge e ∈ E(G)\E(K) (together with its
endpoints) which has both endpoints in K, or it is a connected component of G − V (K) together with all edges (and their endpoints) between this component and K. EachedgeofaK-bridge B having an
endpoint in K is a foot of B. The vertices
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=487015","timestamp":"2014-04-21T01:04:34Z","content_type":null,"content_length":"18043","record_id":"<urn:uuid:78b67ca9-8dde-46d5-bb1b-7dc1c0044842>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Compounded Annually
December 1st 2007, 03:50 AM #1
Compounded Annually
Paul plans to contribute to a retirement fund. He will invest $500 on each birthday from age 25 to 64 inclusive. That is, he will make 40 contributions to the fund. The retirement fund pays
interest on the invesments at the rate of 8% per annum, compounded annually. How much money will be in Paul's fund on his 65th birthday?
Paul plans to contribute to a retirement fund. He will invest $500 on each birthday from age 25 to 64 inclusive. That is, he will make 40 contributions to the fund. The retirement fund pays
interest on the invesments at the rate of 8% per annum, compounded annually. How much money will be in Paul's fund on his 65th birthday?
i think, this is an Annuity type problem..
Amt at 65th bday $= \ 500 \cdot \frac{(1+i)^n - 1}{d}$, where n is the number of years and $d = \frac{i}{1+i}$
using this formula, i got: $\approx \ 139,890.52$
Thanks kalagota, but what is $i$ in the formula? Also, how would you derive this formula?
Edit: nvm I get $i$, but why is $d=\frac{i}{i+1}$
oww, i is the interest in decimal form..
derive, i'll try to explain..
for a period of n years, and you invested an amount say k every year at the beginning of each year, which is compounded annually with an interest i times 100%.. suppose your last payment was at
the beginning of the nth year..
say, you placed k at t=0(this is the beginning of the 1st year); after the end of nth year, it will earn a total of $k(1+i)^n$.. now, placing another k at t=1, after the end of nth year, that
money will earn a total of $k(1+i)^{n-1}$.. and so on.. until the beginning of the nth year (or the end of the (n-1)th year), you will place the last k, and at the end of the nth year, it will
earn $k(1+i)$.. thus you will have the sum:
$\underbrace {k(1+i)^{n} + k(1+i)^{n-1} + ... + k(1+i)}_{n \, years}$..
then using some derivations, you will get the formula..
Oh I understand now, thanks kalagota
I actually interpreted the problem differently
I thought he kept adding money to the total balance WHILE the total balance was being charged interest. This would mean evaluating $f^{40}(0)$, where $f(x) =1.08(x+ 500)$.
I mean it seems odd that he would have 40 different accounts with 500 dollars in them each.
Are you sure this isn't what it meant?
Oh I understand now, thanks kalagota
I actually interpreted the problem differently
I thought he kept adding money to the total balance WHILE the total balance was being charged interest. This would mean evaluating $f^{40}(0)$, where $f(x) =1.08(x+ 500)$.
I mean it seems odd that he would have 40 different accounts with 500 dollars in them each.
Are you sure this isn't what it meant?
i think no.. even my brother who is an accountant interpreted the way i did..
anyways, when i computed it, the answer was almost the same (the new answer exceed by 500).. byt my God, i have to make a program using scilab to compute it.. Ü
December 1st 2007, 04:31 AM #2
December 1st 2007, 04:34 AM #3
December 1st 2007, 04:53 AM #4
December 1st 2007, 06:05 AM #5
December 1st 2007, 04:14 PM #6
|
{"url":"http://mathhelpforum.com/pre-calculus/23878-compounded-annually.html","timestamp":"2014-04-20T22:16:24Z","content_type":null,"content_length":"48679","record_id":"<urn:uuid:c98fca35-be93-4efd-9128-c855b8afc423>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Search: all
Number of results: 61,877
1. All were happy. 2. All was clean. 3. All were clean. Q: In #1 all means all people, so #1 is grammatical. What about #2 and #3? In #2 and #3, all means all things. In this case, do we have to use
'was' or 'were'?
Monday, August 29, 2011 at 8:51pm by rfvv
Integrated Math 1
Which of the following may constitute a population? (choose all that are correct.) A. all rats in New York. B. all nurses in a hospital. C. all skyscrapers in Chicago. D. all brown-haired people in
Monday, July 23, 2012 at 6:34pm by Tifini
Your use of "all" are all correct. Sentences begin with a capital letter -- so 1. should be "All girls are beautiful." Other corrections: 2. All of them really have a kind heart. 3. All people are
Tuesday, December 20, 2011 at 5:52pm by Ms. Sue
Critical Thinking
A valid syllogism is: All A are B C is A therefore (since C is A and all A are B) C is B The non-syllogism above is: All A are B All C are B Therefore all A are B. Erm, no. Spot the difference.
Here's one of the same form: All fish are animals All birds are animals Therefore ...
Tuesday, October 20, 2009 at 11:47pm by jim
Could you please tell me how to say "all in all" in French. It snowed a bit more during the night but all in all it's not very much. Il a neigé un peu plus pendant la nuit mais "all in all" il n'est
pas beaucoup. Thank you so much!
Sunday, December 14, 2008 at 12:37pm by E.G.
History-Civil War
Thank You, I must comment to all at Jiskha, all the time you put in, in answering all those questions. You are all special and dedicated people in doing this.
Tuesday, July 12, 2011 at 4:29am by Joanne
What does a penguin,whale,and a seal have in common? A.They all breath underwater B.They all eat while on land C.They are all protected from the cold by thick layer of fat D.They are all mammals C?
Tuesday, April 17, 2012 at 4:56pm by Aizon
domain of all polynomials is all reals domain of rational functions is all reals except where the denominator is zero. 1. assuming you meant 2/(3x) - 4 domain: all reals except x=0 range: all reals
2. domain: all reals range: all reals, since odd-degree 3. domain: all reals ...
Wednesday, September 26, 2012 at 1:40am by Steve
all even numbers all fractions all real numbers all fractions between 0 and 1 all primes there are lots of these
Monday, April 9, 2012 at 12:35pm by Steve
Well M and m are all diffrent colors and since all the m and m arent the same color it means there uniquine and people arnt all the same we are all unique do you get what i am saying
Thursday, October 23, 2008 at 5:26pm by Kate
Multiply EVERY term in the second bracket by -x; then do the same with the -1. Gather all the like terms together: all the exponents, then all the x terms and then all the numbers.
Wednesday, August 22, 2012 at 12:47pm by charlie
Early childhood literacy
Most preschoolers can produce: a. all the vowel sounds b. all the sounds in their native language c. all consonants d. accurate productions of all sounds
Tuesday, July 5, 2011 at 7:51pm by Frankie
1. I am all ears. (What is the part of speech of 'all'? Is 'all' an adjective or an adverb? What is the meaning of 'all' here?) 2. They are all happy. (What is the part of speech of 'all'? Is 'all'
an adjective or an adverb? What is the meaning of 'all' here?)
Thursday, April 15, 2010 at 10:22pm by rfvv
To. Ms. Sue
I want to answer in short form. Thay have all helped to promote the sports by showing up for the athletes and giving them all the support they can. They were all very supportive towards all the
athletes and are continuing this progress.
Sunday, February 21, 2010 at 8:27pm by Sara
I don't think you just multiply the numbers for this one. If the question said that he was going to all 4 office stores, all 4 electronic stores, all 8 clothing stores, and all 5 sporting goods
stores and asked how many ways can he do that, then you would multiply them all ...
Sunday, December 12, 2010 at 12:10am by Jen
1. My new friends are all great. 2. All my new friends are great. 2-2. All of my new friends are great. 3. My new friends all are great. 4. They all are great. 5. All of them are great. (Are they all
grammatical? What is the part of speech of 'all' in Sentence? In the ...
Tuesday, March 10, 2009 at 3:24am by John
the domain of all polynomials is all real numbers the range of all linear equations (with nonzero slope) is all real numbers. Think about it. You have a straight line that extends all the way in both
directions. Assuming it is not horizontal, then there is no value for either ...
Saturday, July 14, 2012 at 12:56pm by Steve
kindly please correct my english grammar if it is correct or mistake. how to use the word all. 1.all girls are beautiful. 2.all of them is really has a kind heart. 3.all peaple is good.
Tuesday, December 20, 2011 at 5:52pm by jessie
not math
a. You will continue to be sore. b. All can be true. c. All can be true. d. Either one of the first two sentences is false, or Homer is not a student. e. All can be true. f. All can be true.
Saturday, March 31, 2012 at 1:33am by PsyDAG
cultural diversity
Stereotyping is thinking/believing/saying that all _______ are _________. (all old people are bad drivers) (all blondes are dumb) (all Chinese students are super-smart) (all women who wear glasses
are English teachers) etc., etc. What have you thought of? And how do YOU think ...
Tuesday, January 12, 2010 at 7:37am by Writeacher
If you are putting students into quick groups, let y our imagine run wild! 1. all students with white shoes, brown, black 2. all students with brown eyes, blue eyes, hazel eyes 3. all students with
no siblings (brothers/sisters), all with one, all with 2 or more Get the idea? Sra
Friday, May 1, 2009 at 10:59am by SraJMcGin
Do you think all people in real life are all good or all bad? Why?
Sunday, April 4, 2010 at 8:38pm by Ashley
yes they are all hairs on the body are the same because they are all made of growing proteins and in some people for example there hair may be course so the hair on there arms legs that sorta thing
is going to grow in strong and dark so yes they r all the same structurally. ...
Wednesday, October 27, 2010 at 10:40pm by niki
Language Arts
Read the following three words: sauntered, walked, danced What is the BEST way to group these words on a classification chart to help you understand their meanings? (1 point) They all end in –ed.
They are all in the past tense. They are all verbs. They are all ways in which ...
Tuesday, October 2, 2012 at 11:20am by Love2Learn
domain is all real numbers range, as with all exponentials, is all reals > 0 horizontal asymptote is y=0
Wednesday, January 11, 2012 at 9:13am by Steve
psy 230
I have to agree with all of you. I do not understand my text book at all. Im also like all of you, am getting good grades in all my other classes and even in my Psy 265 class but I'm carrying a C+ in
this class. The teacher isn't any help either.
Thursday, October 30, 2008 at 10:14pm by Christina
All are OK, but they'd all be better if you wrote "brushes his teeth" in all four. There are no differences in meaning in the two pairs of sentences.
Monday, April 29, 2013 at 6:44am by Writeacher
1. All of them are useful. 2. All of them is useful. 3. All is expensive. 4. All are expensive. 5. All is silent in the room. 6. All are silent in the room. --------------------------- Which ones are
Monday, March 24, 2014 at 6:32am by rfvv
2 Except for rubidium and caesium, all common cations are smaller than all common anions all common cations are larger than all common anions all common cations are of the same size with all common
anions the cations are not avaliable
Monday, April 23, 2012 at 8:01am by lola
Since I don't know how many numbers are on a roulette wheel: Odd numbers/all numbers Green numbers/all numbers (Red and Green numbers)/all numbers If the events are independent, the probability of
both/all events occurring is determined by multiplying the probabilities of the ...
Friday, March 1, 2013 at 10:44am by PsyDAG
i dont know this word...reading a website
omni- = all omniscient = all knowing omnipotent = all powerful omnipresent = present everywhere etc.
Monday, November 10, 2008 at 6:11pm by Writeacher
Nearly ________ movement in the body is the result of muscle contraction. A. All B. 50% of all C. 10% of all D. None of the above b?
Wednesday, August 22, 2012 at 4:06pm by Aya
Nearly ________ movement in the body is the result of muscle contraction. A. All B. 50% of all C. 10% of all D. None of the above D?
Sunday, August 26, 2012 at 3:36pm by Aya
Nearly ________ movement in the body is the result of muscle contraction. A. All B. 50% of all C. 10% of all D. None of the above
Monday, July 1, 2013 at 10:03pm by annias
All are grammatically correct, but #3 is the best of all. #1 is OK. If you use #2, you need to complete the comparison: "... the most carefully of all the students." (example)
Wednesday, November 27, 2013 at 8:43pm by Writeacher
Per-calc Helppppp
keep in mind answers should be pi radians or if necessary 3 dec. csc^2-2=0 all angles 4cos^2x-4cosx+1=0 all angles and name 8 angles 7sin^2x-22sinx+3=0 all angles and name 8 angles 2cos^2x-7cosx=-3
all angles sinx(2sinx+1)=0 all angles
Thursday, April 19, 2012 at 9:20pm by Amanda
Which of the following statements is true? A. all trapezoids are similar B.all isosceles triangles are the same C.all rectangles are similar D.all equilateral triangles are similar
Thursday, December 6, 2012 at 9:43pm by alfonso
Stress Management Techniques
U ALL ARE ALL WRONG. AND HATE LIERS . DONT POST AND MAKE IT SEEM LIKE ITS ALL THE TRUTH CAUSE NO OF THEM WAS RIGHT
Saturday, July 3, 2010 at 2:56pm by Angela
Generally, that is true. But of course, it all depends upon what you compare it with. For example, if we look at an element plus ALL the elements below it, then it reacts with all of them but if we
compare it with ALL elements above it, it reacts with none of them. So I could ...
Monday, April 14, 2008 at 8:17pm by DrBob222
domain of all polynomials is all reals domain of rational functions is all reals except where the denominator is zero domain of √u is u >= 0 so, 1. all reals except x=±5 2. all reals 3. all reals
5x-2 >= 0, or x >= 2/5
Thursday, November 29, 2012 at 3:13pm by Steve
Are all your doorknobs on one side of the door? are all windows 2 panes? are all ceiling lights in the center of the room? are floor tiles repeating a design?
Monday, September 14, 2009 at 6:24pm by bobpursley
Thank you SO much for all of your help. I am sure all the questions you have to answer for all of us in so many subjects can be very grueling. Just wished I had your smarts.
Thursday, March 13, 2014 at 4:01pm by Anonymous
Ged Test Results
I want to tell everyone thanks for all your help, I pass my Ged Test and I feel great, thank for all your prayers and motivation. I am so excited. I thank God for you all. I thank God for the faith
that he has put in me. Than you all..
Tuesday, July 29, 2008 at 2:50pm by Keysha
Is this all you've written? I take it that it's all still in very rough draft form. Please post all you've written in response to your assignment. And remember to work on using correct verb forms. "I
am saying..." "I have described..." "I show..." etc. Another thing to keep in...
Friday, March 13, 2009 at 4:11pm by Writeacher
"All ears" is an idiom or slang expression, meaning you're listening carefully. "All" is used as an adjective. In the second sentence, all is a pronoun. It means everyone.
Thursday, April 15, 2010 at 10:22pm by Ms. Sue
4pts((all(or(none): Q1:(Draw(a(gene(with(three(exons(on(the(line(below. The(coding(region((ORF) is(completely(contained(in(the(second(exon. Make(sure(you(annotate(the(following(regions: TSS 5’UTR(all
(regions) Coding(sequence 3’UTR(all(regions) AATAAA(sequence 5’(Donor(site((...
Tuesday, September 24, 2013 at 11:40pm by aisha
Thank you for using the Jiskha Homework Help Forum. First of all, list all the vowels in one column and then all the consonants. That will make it easier to see what you might eliminate! Sra
Tuesday, October 7, 2008 at 11:40am by SraJMcGin
SPANISH!!!! articles and all that good stuff HELP!
No. 8 is masculine. I told you th at the first time I helped you. - un idioma Sra BTW, we are all volunteers and if you have a bad attitude, we will not help you at all.
Wednesday, September 8, 2010 at 10:04am by SraJMcGin
Algebra HELP
In the future, please number each problem. 1. 18/5 2. all values of x ( all real numbers) 3. all values of x ( all real numbers) 4. no solution
Friday, February 18, 2011 at 3:08pm by Helper
I just wanted to give thanks to all of the people on this website for all of the hard work and time you have given to me and to others. I still have much work to do so I just hope you will continue
to be there for me as well as others.Thank you and God bless all of you!
Friday, September 11, 2009 at 8:59pm by p
In conclusion, clearly, all in all, definitely, surely, to sum it up are all transitions to begin a conclusion paragraph.
Monday, May 4, 2009 at 3:03am by Anonymous
grammar help
Now check all verb tenses. All should be in the present, or all should be in the past. Don't mix them up.
Saturday, March 17, 2012 at 4:10pm by Writeacher
All the best, Bill ------------------------------ We can see 'All the vest' at the end of a letter. What is the meaning of it? Which one is similar to 'All the best'?
Monday, March 17, 2014 at 7:30am by rfvv
English Expression
Q1) I'm sorry. That's all right. Think nothing of it. It's nothing. Not at all. Think nothing of it. Q2) Thank you. You're welcome. Don't mention it. Not at all. It's my pleasure. It's nothing. Are
the answers to the two questions all correct? Would you add some more suitable ...
Saturday, May 10, 2008 at 2:33am by John
Philosophy - Ethics
In Kant's kingdom of all ends, all members are universally obedient to fundamental, universal moral obligations. Such a place. Then, in the kingdom of all ends, they all agree on laws and mores which
met all ends, based on a universal submission to the fundamental moral ...
Tuesday, March 3, 2009 at 10:28pm by bobpursley
In a cross with a mother who carries the allele for hemophilia and a father who has normal blood clotting what would be the expected phenotypes of the offspring? A. all females normal, all males
normal B. all females normal, all males with hemophilia C. all females carriers of...
Monday, December 2, 2013 at 12:23am by orpheus
Three cards are drawn from a deck without replacement. Find these probabilities. a)all are jacks b)all are clubs c)all are red cards
Friday, July 30, 2010 at 10:41pm by kathy
FACS - Reading Assignment Help!
For the last question, all of those are correct. All team members must be able to do all of those things.
Sunday, April 22, 2012 at 2:47pm by Ms. Sue
A function whose domain is all reals, and which maps all rational numbers to 1 and all irrational numbers to 0.
Thursday, September 20, 2012 at 12:08am by kenny
college algebra, Please help!!
using synthetic division, dividing by x-a, take a look at the signs of the numbers in the quotient. x-0: 1 2 0 -8 -7 x-1: 1 3 3 -5 -12 x-2: 1 4 8 8 9 all signs are positive, so all real roots are all
less than 2. Similarly, for negative values, x+1: 1 1 -1 -7 0 x+2: 1 0 0 -8 9...
Sunday, November 18, 2012 at 11:42pm by Steve
a) What is C(7,3) x C(6,4) ? b) with no restriction --- C(13,7) all girls = C(6,4) all boys = C(7,3) exclude the all boys/girs from the total.
Wednesday, July 21, 2010 at 8:33pm by Reiny
For all three of the cubic lattices (unit cells), all of the ? and all of the ? are the same size. A wild guess. edges. radii?? Check it out thoroughly. Perhaps you have a list of words from which to
Thursday, February 1, 2007 at 12:11am by dory
I'd say considerations need to be all of the following: who what where when why how Once you are clear on all those aspects of a situation, then you can make good judgments about your messages.
Without specifics, it's all quite vague.
Friday, November 25, 2011 at 3:34pm by Writeacher
geology(science) i still dont understand
All life needs all of those subsystems. If we didn't have all four of them, there would be no life on earth.
Sunday, March 1, 2009 at 7:51pm by Ms. Sue
Thank you for all this I will change it all and improve it. But I think I'm just going to stick with the ending part unless you got anything for me. But still anyway thank you for all your help on
this poem.
Wednesday, April 22, 2009 at 9:02pm by john
Thankyou so much to all of you for helping me all of your ideas and sites you found have helped me alot. thankyou again. you are all stars!
Thursday, June 4, 2009 at 12:45pm by leigh
Mrs. SUe
Curiousity; Do you get paid for all for all of this hard work that you do??? BEcuase you are always on helping all students on a ton of subjects.. I really appreciate what you do and you deserve to
get paid for all the students you are really helping, without giving them the ...
Thursday, May 28, 2009 at 10:49pm by Sarah
1. Solve for x: 4x/9 = 2(x+4)/3 2. Factor completely : x to the fourth power -16... 3. If a = all #s between -5 and 5, B= all primes and c= whole numbers how many numbers does the intersection of
A,B, and C all contain? i think its 2...
Thursday, May 1, 2008 at 7:47pm by Anonymous
you don't say what n is, but it looks like all three vectors are in the same direction. It would not be possible for the sum of all three to be zero, since they are all positive
Thursday, June 9, 2011 at 11:47pm by drwls
CheckPoint: Comprehensive Grammar CheckPoint
You guys are all wrong, and all pathetic. Do your own damn work! I hope all of you cheaters that answered here get jobs doing garbage cleanup.
Friday, August 15, 2008 at 3:49pm by gdgdgdg
Poem/ Ms Sue
Thank you for all this I will change it all and improve it. But I think I'm just going to stick with the ending part unless you got anything for me. But still anyway thank you for all your help on
this poem.
Wednesday, April 22, 2009 at 9:54pm by john
Diversity in the Classroom
These are also fallacies -- they are sweeping generalizations. They imply that all boys who participate in the arts... or all girls are more interested ... The "all" or "never" ideas don't work --
too sweeping!!
Saturday, April 25, 2009 at 8:05am by Writeacher
To all teachers
Merry Christmas/Happy Holidays and lots of health, happiness and prosperity for 2011 to all the wonderful teachers at Jiskha. Thank you for sacrificing your spare time to help all of us out.
Sunday, December 19, 2010 at 10:39am by E.G.
Pronoun Pratice
"that (all) credit cards are evil" >> "all" is also an adjective here. Examples of "most" and "all" used as pronouns: Most of the credit card companies make a huge profit. She cancelled all of her
credit cards.
Saturday, July 30, 2011 at 6:55pm by Ms. Sue
Do you want to learn to do these or do you want someone to do your work. All of these are hydrocarbons. Adding oxygen and burning them produces CO2 and H2O for all. All follow this routine: CH4 + 2O2
==> CO2 + 2H2O
Thursday, October 28, 2010 at 10:25pm by DrBob222
to writeacher
im confuse on when it says : for all tables from two to twelve. does it mean all the tables like: 1 *1, 1 *2, all the way to 12 or does it mean: 2 * 1, all the way to 12?
Tuesday, July 24, 2012 at 9:44am by Celest
If you will calculate k for ALL trials, all come out to about 3.5 and from your data 3.5 surely is correct. The only suggestion I have is that sometimes they place an exponent at the top of the table
so that all of the rates would be x 10^-5 or something like that.
Saturday, February 23, 2013 at 10:18am by DrBob222
final exam
a study shown that 20% of all college textbooks cost at least $70.it is known that the standard deviation of the prices of all college textbook is $9.50.assuming that prices of all college textbooks
follow a normal distribution, find the mean price of all college textbooks.
Tuesday, May 21, 2013 at 8:46pm by Anonymous
Find all primes less than 100 that can each be written in all three ways: p=a^2+b^2=c^2+2 e^2=f^2+3 g^2, and show how it's done with each one. The numbers you're squaring must be integers; show your
thinking. thanksi dont get it at all
Thursday, September 24, 2009 at 4:50pm by Mia
Dr. King told the parents,''Don't worry about your children. They are going to be all right. Don't hold them back if they want to go to jail for they are doing a job for all of America and for all
mankind.'' What job were they doing? Please HELP me THANK YOU
Wednesday, October 13, 2010 at 4:55pm by Melea
I would choose 2. 1. all strong electrolytes. 2. BaCl2 and NaNO3 are strong but HF is weak. 3. all non-elecrolytes. 4. all weak electrolytes. 5. all strong electrolytes.
Sunday, September 22, 2013 at 3:31pm by DrBob222
All look good. But remember what I said above -- what works well for ELL students works well for all students. Use these teaching strategies with all your students, and they'll all learn much more
than if you don't. You also don't want to be treating the ELL students ...
Wednesday, August 13, 2008 at 7:51pm by Writeacher
what is the shortest length of string that can be cut into all 5cm lengths, all 8cm lengths or all 10cm lengths?
Wednesday, February 9, 2011 at 11:12pm by Anonymous
Which of the following statements is NOT true in regards to water, ethanol, and isopropanol? A. Water has the highest boiling point. B. Two lone pairs of electrons are assigned to the oxygen in each
Lewis structure of all three molecules. C. Dispersion forces affect all three ...
Monday, February 4, 2008 at 8:02pm by jose
Which of the following statements is NOT true in regards to water, ethanol, and isopropanol? A. H-O-H and C-O-H bond angles are all equal to 109.5. B. Dispersion forces affect all three molecules. C.
Two lone pairs of electrons are assigned to the oxygen in each Lewis ...
Saturday, February 9, 2008 at 4:02pm by Anonymous
Which of the following statements is NOT true in regards to water, ethanol, and isopropanol? A. Two lone pairs of electrons are assigned to the oxygen in each Lewis structure of all three molecules.
B. Dispersion forces affect all three molecules. C. H-O-H and C-O-H bond ...
Sunday, February 10, 2008 at 10:21pm by Anonymous
Which of the following statements is NOT true in regards to water, ethanol, and isopropanol? A. Two lone pairs of electrons are assigned to the oxygen in each Lewis structure of all three molecules.
B. H-O-H and C-O-H bond angles are all equal to 1095. C. Dispersion forces ...
Saturday, May 3, 2008 at 2:35am by Tom
Which of the following statements is NOT true in regards to water, ethanol, and isopropanol? A. H-O-H and C-O-H bond angles are all equal to 1095. B. Two lone pairs of electrons are assigned to the
oxygen in each Lewis structure of all three molecules. C. Dispersion forces ...
Tuesday, February 3, 2009 at 11:37pm by Kyle
an all 7th gradesubjects
You and your parents -- click on the links, one at a time and read and make plans. Don't try to do all of them all at once. Start at the top and take maybe one or two each few days or week.
Thursday, September 8, 2011 at 8:35pm by Writeacher
List all possible rational zeros for the polynomial below. Find all real zeros and factor completely. Please show all your work. f(x) = 2x^4 + 19x^3 + 37x^2 - 55x - 75
Tuesday, December 20, 2011 at 8:07pm by edward
First, please do not use all capitals. Online it is like SHOUTING. Not only is it rude, but it is harder to understand. Thank you. Inadequate data. Since they are all on the same horizontal plane,
they are all at the same height, whatever that is.
Thursday, March 15, 2012 at 8:07am by PsyDAG
7th grade ath Ms. Sue only 1 question please
All I was saying is that if its all thats left and she had marked off all the other answers, then yeah. Sorry.
Monday, November 19, 2012 at 8:13pm by rawr... im a dinosaur.... rawr
1.)Calculate the number of moles in 3.50 x 10^21 atoms of silver. Show all work. 2.) Calculate the number of atoms in 2.58 mol antimony. Show all work 3.) Determine the mass of 1.45 mol FePO4. Show
all work. 4.) Calculate the number of mol in 6.75 g of NaCl. Show all work. 5...
Sunday, February 9, 2014 at 8:01pm by Raina
the domain of all polynomials is all reals. There's no number where f(x) is undefined. For rational functions (the quotient of two polynomials p(x)/q(x)) the domain is all reals except where q(x)=0
because division by zero is undefined. For functions with square roots (or any ...
Wednesday, March 27, 2013 at 2:30pm by Steve
Thanks, I have read it through, it just doesn't all connect so easily for me. I get that the conversation with his mom is important and all, but I don't necessarily see why all the other
conversations with people from the Iliad made him learn a lesson. Blech. My brain isn't ...
Sunday, January 27, 2008 at 8:52pm by Mark
Guru Blue
Finished all the conditionals and got all the reported speech done and all my word stress done. Thanks in no small way to your extremely helpful links. I'm burned out now and going to get some sleep.
It's 3.30 am here. Thank you so much.
Friday, March 20, 2009 at 3:36pm by Island boy
Do all firms in all market structures have anything in common? I think not. They all consist of buyers and sellers. In Principals of Microeconomics courses, economists make several general economic
assumptions (perhaps incorrectly I might add) about firms and the behavior of ...
Sunday, December 10, 2006 at 5:34pm by Mariah
A series circuit requires ALL switches to be connected and all three devices would operate at the same time or not at all. With parallel connections, each can have its separate switch and be operated
independently of the other devices.
Monday, July 21, 2008 at 11:07pm by DrBob222
math ;
hint: these are all quadratic equations and all of them factor. So expand if necessary, bring all the terms to the left side and set them equal to zero. Factor them and from the factors determine the
solutions. I am sure they have shown you how to do that.
Friday, October 24, 2008 at 10:06am by Reiny
I really appreciate the help, I understand all the concepts I just get overwhelmed with all the steps sometimes, when I see all those numbers and letters, This just saved me hours of confusing myself
even further if I did not have any help.
Wednesday, June 3, 2009 at 1:54pm by Juggernaut
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
|
{"url":"http://www.jiskha.com/search/index.cgi?query=all","timestamp":"2014-04-18T19:06:06Z","content_type":null,"content_length":"40614","record_id":"<urn:uuid:1a1f4435-3ddd-4659-a014-bfd59d92efaf>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SODA 2007: Day 0
Getting into New Orleans at 1 am because of "mechanical trouble" meant that I haven't been at my best so far. But I've already heard one amazing talk today.
Luc Devroye
gave the
ANALCO plenary lecture
on "Weighted Heights of Random Trees", based on work with his students Erin McLeish and Nicolas Broutin. After having sat through many talks with titles like this, I generally approach them with
great caution and with a clear escape route. But...
This was an amazing exposition of a topic that could have become dry and terse, and essentially incomprehensible, within a slide or two. He had jokes, (that were funny), a global plan for the
material, enough technical material that I went away feeling like I'd learnt something, and intuition galore. And the work itself is very beautiful.
So what was it all about ? The problem is really quite simple to state. Suppose I give you a (random) weighted binary tree, where nodes attach to parents randomly, and edges may have weights chosen
randomly. What is the maximum height of such a tree ?
The standard application for such a tool is in analyzing binary search trees. The height of a such a tree controls the running time of an algorithm that needs to use it. And there's now a vast
literature analyzing both the asymptotics of the height distribution (basically it's sharply concentrated around 2 log n) and the specific constants (the maximum height of a random binary search tree
is roughly 4.3 log n, and the minimum is around 0.37 log n).
The "master goal" that Devroye described in his talk was this: Suppose I have a general way of attaching nodes to parents (that leads to a general distribution on subtree sizes), and a general way of
attaching weights to edges (rather than being deterministically 1 for binary search trees). Such a general model captures the analysis of tries (trees on strings that are very important in text
searching), geometric search structures like k-d trees, and even restricted preferential attachment models in social network analysis (Think of the edges as hyperlinks, and the height of the tree as
the diameter of a web tree).
Is there a generic theorem that can be applied to all of these different situations, so that you can plug in a set of distributions that describes your process, and out pops a bound on the height of
your tree ? It turns out that you can (with some technical conditions). The method uses two-dimensional large-deviation theory: can you estimate the probability of a sum of random variables being
bounded by some function, while at the same time ensuring that some other sum of random variables (that might depend slightly on the first) is also bounded ?
An example of a 1D large deviation result is of course a Chernoff bound. Devroye showed that a 2D large deviation bound for the height of such trees can be expressed in a similar form using the
so-called Cramér exponent, something that will probably not be surprising to experts in
large deviation theory
. After that, the analysis for any tree process becomes a whole lot easier. You have to analyze the corresponding Cramér function for your distributions, and a bound (with a constant; no big-O
nonsense here!) pops out.
He also talked about a neat extension of this method to analyzing the "skinnyness" of k-d tree decompositions, showing that for a kind of "relaxed" k-d tree construction, the skinniest cell can be
extremely skinny (having a super-linear aspect ratio). It's the kind of result that I imagine would be very difficult to prove without such a useful general theorem.
|
{"url":"http://geomblog.blogspot.com/2007/01/soda-2007-day-0.html","timestamp":"2014-04-19T19:36:26Z","content_type":null,"content_length":"140530","record_id":"<urn:uuid:253a4696-352a-4d02-b5dd-07880feb65a0>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
So if e^lnx = x does e^-lnx = -x?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50591c92e4b0cc1228932e5d","timestamp":"2014-04-21T08:06:23Z","content_type":null,"content_length":"41524","record_id":"<urn:uuid:36c4f63f-9a76-48b4-99f2-23609fb066cb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Velocity Reviews - looping versus comprehension
An email in
claimed that the following loop in a
charting module was taking a long time
> I use ReportLab 2.6 but I also found the problem in ReportLab daily from 01/29/2013 in /src/reportlab/graphics/charts/lineplots.py:
> 276 # Iterate over data columns.
> 277 if self.joinedLines:
> 278 points = []
> 279 for xy in row:
> 280 points += [xy[0], xy[1]]
> If I use a list comprehension instead, the plot is generated within seconds or minutes:
> 278 points = [[xy[0], xy[1]] for xy in row]
however, when I tried an experiment in python 2.7 using the script below I find
that the looping algorithms perform better. A naive loop using list += list
would appear to be an O(n**2) operation, but python seems to be doing better
than that. Also why does the append version fail so dismally. Is my test coded
wrongly or is pre-allocation of the list making this better than expected?
C:\code\tests>tpoints 86000 860000
#################### START n=86000 ####################
existing algorithm took 0.08 seconds
existing algorithm using list took 0.12 seconds
existing algorithm using list assuming length 2 took 0.12 seconds
map(list,row) took 0.16 seconds[list(xy) for xy in row] took 0.28 seconds
[[xy[0],xy[1]] for xy in row] took 0.22 seconds
append algorithm took 0.19 seconds
#################### END n=86000 ####################
#################### START n=860000 ####################
existing algorithm took 0.86 seconds
existing algorithm using list took 1.33 seconds
existing algorithm using list assuming length 2 took 1.25 seconds
map(list,row) took 3.44 seconds[list(xy) for xy in row] took 3.03 seconds
[[xy[0],xy[1]] for xy in row] took 2.70 seconds
append algorithm took 2.48 seconds
#################### END n=860000 ####################
import sys, time
def main(n):
print 20*'#','START n=%s'%n,20*'#'
row = [(i,i+1) for i in xrange(2*n)]
print 'existing algorithm',
t0 = time.time()
points = []
for xy in row:
points += [xy[0],xy[1]]
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print 'existing algorithm using list',
t0 = time.time()
points = []
for xy in row:
points += list(xy[:2])
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print 'existing algorithm using list assuming length 2',
t0 = time.time()
points = []
for xy in row:
points += list(xy)
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print 'map(list,row)',
t0 = time.time()
points = map(list,row)
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print '[list(xy) for xy in row]',
t0 = time.time()
points =[list(xy) for xy in row]
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print '[[xy[0],xy[1]] for xy in row]',
t0 = time.time()
points = [[xy[0],xy[1]] for xy in row]
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print 'append algorithm',
t0 = time.time()
points = [].append
for xy in row:
points = points.__self__
t1 = time.time()
print 'took %.2f seconds' % (t1-t0)
print 20*'#','END n=%s'%n,20*'#','\n\n'
if __name__=='__main__':
if len(sys.argv)==1:
N = [86000]
N = map(int,sys.argv[1:])
for n in N:
Robin Becker
|
{"url":"http://www.velocityreviews.com/forums/printthread.php?t=957089","timestamp":"2014-04-24T13:15:08Z","content_type":null,"content_length":"7296","record_id":"<urn:uuid:a3eabc99-4526-47e9-ac1a-49c33b261eb0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chronology for 1940 to 1950
Previous page Chronology index Full chronology Next page
Baer introduces the concept of an injective module, then begins studying group actions in geometry.
Aleksandrov introduces exact sequences.
Linnik introduces the large sieve method in number theory.
Abraham Albert starts work on nonassociative algebras.
Steenrod publishes a paper in which "Steenrod squares" are introduced for the first time.
Eilenberg and Mac Lane publish a paper which introduces "Hom" and "Ext" for the first time.
Marshall Hall publishes on projective planes.
Naimark proves the "Gelfand-Naimark theorem" on self-adjoint algebras of operators in Hilbert space.
Von Neumann and Morgenstern publish Theory of Games and Economic Behaviour. The theory of games is used in the study of economics.
Artin studies rings with the minimum condition, now called "Artinian rings".
Eilenberg and Mac Lane introduce the terms "category" and "natural transformation".
Weil publishes Foundations of Algebraic Geometry.
George Dantzig introduces the simplex method of optimisation.
Norbert Wiener publishes Cybernetics: or, Control and Communication in the Animal and the Machine. The term "cybernetics" is due to Wiener. The book details work done on the theory of information
control, particularly applied to computers.
Shannon invents information theory and applies mathematical methods to study errors in transmitted information. This becomes of vital importance in computer science and communications.
Schwartz publishes Généralisation de la notion de fonction, de dérivation, de transformation de Fourier et applications mathématiques et physiques which is his first important publication on the
theory of distributions.
Mauchly and John Eckert build the Binary Automatic Computer (BINAC). One of the major advances of this machine is that data is stored on magnetic tape rather than on punched cards.
Selberg and Erdös find an elementary proof of the prime number theorem that makes no use of complex function theory.
Carnap publishes Logical Foundations of Probability.
Hamming publishes a fundamental paper on error-detecting and error-correcting codes.
Hodge puts forward the "Hodge Conjecture" on projective algebraic varieties.
List of mathematicians alive in 1940.
List of mathematicians alive in 1950.
Previous page Chronology index Next page
Main Index Full chronology Biographies index
History Topics Societies, honours, etc. Famous curves
Time lines Birthplace Maps Search Form
Glossary index Quotations index Poster index
Mathematicians of the day Anniversaries for the year
JOC/EFR August 2001 School_of_Mathematics_and_Statistics
The URL of this page is:
|
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Chronology/1940_1950.html","timestamp":"2014-04-17T04:06:11Z","content_type":null,"content_length":"18924","record_id":"<urn:uuid:15daa33c-638c-40c7-b0ef-7c3508933efc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Normal way of teaching this?
November 15, 2013 at 1:50 AM
My math book for my grade 3 DS as a strange way of doing Algerbra...or Pre algerbra as his teacher calls it...
So this is the example
N + 6 = 13
__ - [S:6:S](crossed out) = 6 (So its 6-13)
N = 7
Check 7+6 =13
November 15, 2013 at 1:57 AM
-6 -6
Basically you take 6 from both sides. So 6-6=0 N+0=N and 13-6=7. So N=7
That is how I taught my second grader. Or 6+____=13
November 15, 2013 at 2:05 AM
It's not strange. You want the N to be by itself, so you bring the 6 over to the other side of the equal sign. On the other side, the +6 becomes -6. Now you have N=13-6 (or N=-6+13) which =7. Oh,
and it's algebra, not algerbra.
(I edited my response to make it clearer)
November 15, 2013 at 2:08 AM
Quoting NYCitymomx3:
It's not strange. You subtract 6 from both sides of the equal sign to get N by itself (if the problem was N-6=13, then you would ADD 6 to both sides). The 6s on the left cancel each other out
(6-6=0 which is why it's crossed out on the left side of the equal sign). Now you have N=13-6 (or N=-6+13) which =7. Oh, and it's algebra, not algerbra.
See that is just a clearer way then the teaching book said...thats tommrow...and thanks for the spelling correction I knew it looked wrong...
November 15, 2013 at 6:17 AM
That is the way I was taught beginning algebra. It was explained that an algebra equation is like a balanced scale. Both sides are equal to each other. To maintain that balance, whatever you do
to one side must also be done to the other side.
You want to end up with the unknown on one side of the equation,
N+6 = 13
N6 = 13 -6 (moving the 6 from the left side to the right side)
N = 7
______________If the equation was as follows
N -6 = 13
N -6 +6= 13 + 6
N = 19
Check 19 - 6 = 13
|
{"url":"http://mobile.cafemom.com/group/114079/forums/read/19305365/Normal_way_of_teaching_this?use_mobile=1","timestamp":"2014-04-20T18:36:39Z","content_type":null,"content_length":"21908","record_id":"<urn:uuid:b40d747a-7978-49da-a2b7-c0705f492fbd>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two Uniform Solid Spheres Have The Same Mass, But ... | Chegg.com
Two uniform solid spheres have the same mass, but one has twice the radius of the other. The ratio of the larger sphere's moment of inertia to that of the smaller sphere is
a) 4
b) 2
c) 4/5
d) 8/5
please explain answer as best you can. thanks!
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/two-uniform-solid-spheres-mass-one-twice-radius--ratio-larger-sphere-s-moment-inertia-smal-q1290833","timestamp":"2014-04-21T13:01:41Z","content_type":null,"content_length":"20530","record_id":"<urn:uuid:4808cc61-0b1e-45f4-afdb-df44bd4e2ec7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Formalization Thesis
S. S. Kutateladze sskut at math.nsc.ru
Fri Dec 28 02:26:52 EST 2007
Timothy Y. Chow asks
... does that mean that you concede the point about category theory
and are arguing only that model theory cannot be accurately captured by
set-theoretic statements?
No it does not. Model theory includes reflections about formal
theories and their relations to the core of mathematics.
Timothy Y. Chow asks:
Do you claim, then, that
while theorems of all other branches of mathematics (group theory, number
theory, topology, analysis, ...) can be translated into (say) Mizar and
formally verified, theorems of model theory cannot be so translated?
I explain simply that all branches of mathematics cannot be translated
fully neither into set theory nor into any unique formal theory.
Category theory yields an illustration, as well as model theory.
Your thesis is definitely precise when you speak of ZFC, but
obviously false. It is senseless or banal on calling the quest for precision quibbling.
Euclid was and will ever be a piece of mathematics but
his Elements lacks the definition of triangle. His formalization was
not final. There are no grounds to believe that the present-day formalizations
are final either.
This is not anyone's radical claim. It is a lesson of history.
Sobolev Institute of Mathematics
Novosibirsk State University
mailto: sskut at math.nsc.ru
copyto: sskut at academ.org
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-December/012386.html","timestamp":"2014-04-17T18:26:17Z","content_type":null,"content_length":"4017","record_id":"<urn:uuid:4d7f3e32-f6f3-41c8-9203-b265c6007f4b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
help using for while loop
October 29th, 2010, 02:50 PM #1
Join Date
Oct 2010
Thanked 0 Times in 0 Posts
having trouble using for while loop
this is what im supposed to do:
Write a program that will read an unspecified number of positive numbers from the keyboard and determine the sum and the average of these numbers. A negative number should be used to terminate
the input, i.e. when the program encounters a negative number, it should stop prompting and reading data and at that point should output the results. Note that the negative number that terminates
the looping should not be added to the sum and certainly should not be counted as one of the valid numbers.
A sample program run would look like this :
Enter a number that is positive or zero 3
Enter a number that is positive or zero 5
Enter a number that is positive or zero 6
Enter a number that is positive or zero 4
Enter a number that is positive or zero -1
You have entered 4 numbers
Total of the numbers you entered is 18
Average of the numbers is 4.5
The program should use a For WHILE loop to accomplish it's mission.
here is my code and i dont think its anywhere close to right
import java.util.Scanner ;
public class Assign7_1_Roberts{
public static void main (String [] args){
//Create Scanner
Scanner input = new Scanner(System.in);
i = initialValue;
int number = 0, i = 0; sum = 0, numamount = 0; // You can change the variable names.
for(i = 0; i < 100; i++)
while(i < endValue1)
{ // Begin while statement
System.out.println("Enter a number that is positive or zero: "); // Get the next number
String userInput = input.nextLine();
sum = sum+number; // Get the sum so far.
numamount++; // Get the number amount so far.
} // End while statement.
double average = sum/numamount; //Get the average.
System.out.println("You have entered " + numamount + " numbers"); // Output amount of numbers
System.out.println("Total of the numbers you entered is " + sum); // Output sum of numbers
System.out.println("Average of the numbers is " + average); // Output average of numbers.
having trouble using for while loop
this is what im supposed to do:
Write a program that will read an unspecified number of positive numbers from the keyboard and determine the sum and the average of these numbers. A negative number should be used to terminate
the input, i.e. when the program encounters a negative number, it should stop prompting and reading data and at that point should output the results. Note that the negative number that terminates
the looping should not be added to the sum and certainly should not be counted as one of the valid numbers.
A sample program run would look like this :
Enter a number that is positive or zero 3
Enter a number that is positive or zero 5
Enter a number that is positive or zero 6
Enter a number that is positive or zero 4
Enter a number that is positive or zero -1
You have entered 4 numbers
Total of the numbers you entered is 18
Average of the numbers is 4.5
The program should use a For WHILE loop to accomplish it's mission.
here is my code and i dont think its anywhere close to right
import java.util.Scanner ;
public class Assign7_1_Roberts{
public static void main (String [] args){
//Create Scanner
Scanner input = new Scanner(System.in);
i = initialValue;
int number = 0, i = 0; sum = 0, numamount = 0; // You can change the variable names.
for(i = 0; i < 100; i++)
while(i < endValue1)
{ // Begin while statement
System.out.println("Enter a number that is positive or zero: "); // Get the next number
String userInput = input.nextLine();
sum = sum+number; // Get the sum so far.
numamount++; // Get the number amount so far.
} // End while statement.
double average = sum/numamount; //Get the average.
System.out.println("You have entered " + numamount + " numbers"); // Output amount of numbers
System.out.println("Total of the numbers you entered is " + sum); // Output sum of numbers
System.out.println("Average of the numbers is " + average); // Output average of numbers.
1.) You never have an ending bracket for the for loop.
2.) Where is endValue1 defined?
3.) Either have each variable defined on its own line if you're going to initialize them right away or
have them on the same line, but don't initialize them till later, but not too late that your for and while loops don't know what they are, and do so one per line.
4.) You're doing nothing with the String userInput.
5.) It appears that number will always be 0 inside your while loop. Is that what you want it to do?
6.) since sum is set to 0 and so is number, it'll always be 0 for each iteration of the while loop.
in other words
0 = 0 + 0;
0 = 0 + 0;
number is never changing its value and so neither is sum.
Maybe the number the user enters is supposed to be what you're adding.
if so, it'd be input.nextInt().
Last edited by javapenguin; October 29th, 2010 at 03:39 PM.
so it seems i gotta lot of work to do..im really new to java so im still tryin to figure out what all lingo used in java means
i also have to do the exact same thing but using the do-while loop
im working on the do-while loop assignment and the output "enter a number that is positive or zero" doesnt end
heres my code
import java.util.Scanner ;
public class Assign7_2_Roberts{
public static void main (String [] args){
int data;
int sum = 0, numamount = 0; // You can change the variable names.
//Create Scanner
Scanner input = new Scanner(System.in);
do { // Begin do-while statement
System.out.println("Enter a number that is positive or zero: "); // Get the next number
data = input.nextInt();
sum += data; // Get the sum so far.
} while (data >= 0); // End while statement.
double average = sum/numamount; //Get the average.
System.out.println("You have entered " + numamount + " numbers"); // Output amount of numbers
System.out.println("Total of the numbers you entered is " + sum); // Output sum of numbers
System.out.println("Average of the numbers is " + average); // Output average of numbers.
Last edited by robertsbd; October 29th, 2010 at 05:22 PM.
In your do while loop, you are continuing as long as data is not equal to zero, but in the description of your assignment, it says to read numbers, until the user enters a negative number.
ok i changed from !=0 to >=0 but when i enter a negative number i get this
Exception in thread "main" java.lang.ArithmeticException: / by zero
at Assign7_2_Roberts.main(Assign7_2_Roberts.java:26)
You are dividing by numaount when calculating average. You initialize numamount to 0, then it never changes. You can't divide by 0
October 29th, 2010, 03:35 PM #2
Join Date
May 2010
North Central Illinois
My Mood
Thanked 112 Times in 110 Posts
October 29th, 2010, 04:23 PM #3
Join Date
Oct 2010
Thanked 0 Times in 0 Posts
October 29th, 2010, 04:25 PM #4
Join Date
Oct 2010
Thanked 0 Times in 0 Posts
October 29th, 2010, 04:42 PM #5
Join Date
Oct 2010
Thanked 0 Times in 0 Posts
October 29th, 2010, 05:18 PM #6
October 29th, 2010, 05:21 PM #7
Join Date
Oct 2010
Thanked 0 Times in 0 Posts
October 29th, 2010, 07:07 PM #8
Join Date
Oct 2010
Thanked 0 Times in 0 Posts
October 30th, 2010, 02:36 PM #9
|
{"url":"http://www.javaprogrammingforums.com/whats-wrong-my-code/5803-help-using-while-loop.html","timestamp":"2014-04-16T14:06:24Z","content_type":null,"content_length":"104439","record_id":"<urn:uuid:deb1b200-6411-46c7-8807-a17ef5c979b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Solution of a first order ODE.
We have the first order ODE
[tex] y'=4t \sqrt y,~y(0)=1, [/tex]
for which i have found the exact solution, namely a fourth order polynomial.
I want a numerical method to solve the problem exactly. This method has to be a fourth order method, since this implies that the local error vanishes.
Now we change the problem so it becomes
[tex] y'=4t \sqrt y - \lambda(y-(1+t^2)^2),~y(0)=a, [/tex]
and the question is: for which values of [itex]\lambda[/itex] and [itex]a[/itex] does a method that has the above mentioned property solve the new problem exactly.
Of course, the obvious case is [itex]\lambda=0[/itex] and [itex]a=1[/itex], because in this case the new problem reduces to the first problem.
My idea is that the solution must be a fourth order polynomial, since a fourth order numerical method has to solve the new problem exactly.
Although I want your view on this and a strategy to find the values of [itex]\lambda[/itex] and [itex]a[/itex] for which the new problem is solved exactly by a fourth order numerical method.
|
{"url":"http://www.physicsforums.com/showpost.php?p=913954&postcount=1","timestamp":"2014-04-17T07:25:42Z","content_type":null,"content_length":"9432","record_id":"<urn:uuid:e4e332fc-983d-43a3-be4c-7ebe7678d445>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Power laws in chess
Finding power laws has now become de rigueur when analyzing popularity distributions. Long tails have been reported for the frequency of word usage in many languages [2], the number of citations of
scientific papers [3], the number of visits (hits) to individual websites in a given time interval [4], and many more. In all these cases (including this new one related to chess) the exponent of the
distribution is close to $-2$. That is, the number of entries (chess openings, words, papers, or websites) with popularity $P$ approximately scales as $P-2$. The semi-universal value of this exponent
was first noticed by Zipf [2] when he saw the same statistics apply to such different objects as words ranked by their popularity and cities ranked by their population.
Many scientists proposed simple (and not so simple) models aimed at explaining the origins of this scaling. For city-size distribution, the celebrity list of modelers includes Paul Krugman [5]—the
Nobel Prize winning economist and New York Times columnist. Even though the laws of population dynamics responsible for city formation and subsequent growth appear to have very little in common with
rules dictating preferences in chess openings, all inverse quadratic power-law distributions became collectively known as “Zipf’s law.” There is indeed something special about the distribution $P-α$
, with $α$=$2$, since it separates the region with a well-defined average ($α>2$) from that where the average formally diverges and thus depends on the upper cutoff ($α≤2$). Nevertheless, the quest
for the universal “first principles” explanation of Zipf’s law remains elusive.
Apart from establishing yet another example of Zipf’s law, the work of Blasius and Tönjes goes a long way towards elucidating its origins in the special case of sequential games or, more generally,
any composite multistep decision processes (e.g., complex business or political strategies). These processes are best visualized as decision trees with multiple choices at each level (Fig. 1). The
number of possible paths on such trees grows exponentially with the number of steps. As a result, even in the simplest cases the exhaustive enumeration very soon becomes impossible.
The first important observation made by the authors is that if one concentrates on the distribution of popularity of opening sequences limited to the first d steps of the game, it will also be
described by a power law, yet with a nonuniversal exponent $αd$ that linearly grows with the number of moves. The universal Zipf distribution with $α$=$2$ is recovered only after these d-dependent
distributions are all merged together. The rationale for such a merger leading to double counting is poorly explained in the paper. Nonuniversal power-law exponents often implicate multiplicative
random walks [6, 7, 8] and this case is no exception. Other examples of power laws generated by multiplicative random walks include wealth of individuals [9], stock prices and their drawdowns
(deviations down from the maximum) [10], gene family sizes expanding by gene duplication [11], and many others.
One way to calculate the popularity of a particular opening sequence $σ$ is to notice that every sequential move $i$ of the sequence reduces the number of games in the database that open with these
moves by a factor $0<ri≤1$ . These factors are the same as branching ratios illustrated in Fig. 1. If the total number of recorded games is $N$ (which is $∼1.5$ million in the professional chess
database ScidBase [12] used in this work) then the number of openings starting with a particular sequence of $d$ moves is given by $N$($σ)$ $=Nr1r2…rd$. The value $N$($σ)=1$ serves as an absorbing
lower boundary for this multiplicative process. When a random walker hits such an absorbing boundary it stops moving altogether. In the case of chess opening, once the diversity is down to just one
realization, a unique move will be selected at each subsequent time step and $N$($σ)$ will remain at $1$ until the end of the game. In the case of standard (additive) random walks, a boundary
generally gives rise to an exponential Boltzmann distribution. For multiplicative random walks after the logarithmic change of variables [6, 7], this exponential distribution becomes a power law.
The exponent $αd$ of the high-popularity tail of the distribution can be analytically derived for some special cases of the distribution of ratios $ri$ : $ρ(r)∼rβ$ (see Eq. (6) in the paper of
Blasius and Tönjes [1]). According to these calculations, $αd$ linearly increases with the number of moves $d$. This is in agreement with the actual distribution of popularity of chess openings.
However, the empirically measured $ρ(r)$, shown in Fig. 3(a) of their paper, has a different profile. Surprisingly, it closely follows the parameter-free distribution $ρ(r)=2/π1-r2$. This
distribution describes the density of points on a circle projected onto its diameter.
Blasius and Tönjes offer no explanation for this empirical observation. Qualitatively, this profile of $ρ(r)$ makes intuitive sense. At every position of pieces on the chess board, out of many moves
allowed by the rules, just one would lead to the most favorable position for the long-term outcome of the game. Such moves that maximize the gradient of “fitness” would be preferentially selected by
skillful chess players (the average rating of players in the database puts them between the Candidate Master and the International Master levels). This selection would manifest itself in $ρ(r)$
increasing (and possibly even diverging) as $r$ $→$ $1$. This divergence is a direct manifestation of players’ skills. Indeed, if I were to play the game of chess against other equally clueless
players, the shape of $ρ(r)$ defined by our uninformed random moves would likely to be flatter than that shown in Fig. 3(a) of Ref. [1].
As a suggestion for future studies, it would be interesting to empirically verify this hunch using player ratings included in the ScidBase or other less selective chess databases. In other words, is
the distribution of openings selected by the best players significantly different from that selected by the worst players? Another observation made by the authors is that the shape of $ρ(r)$ is
independent of the depth of the game $d$. This indicates that, at least at early stages of the game, the phase space of favorable moves does not significantly depend on $d$.
All these empirical facts summarizing the collective knowledge of many chess players have implications for the design of chess-playing computer algorithms. Thinking about chess has a long and
venerable history in computer science. One of the founding fathers of information theory, Claude Shannon, has worked on this topic. In his 1950 paper [13] he outlined two possible approaches to
designing a computer program playing chess: the brute force strategy, performing the exhaustive evaluation of all possible moves and opponent’s responses for as many steps as computer power would
allow. The other strategy is to iteratively select a few of the most promising moves at every step and concentrate computer resources on following a smaller number of more likely paths on the
decision tree of the game. The shape of $ρ(r)$ in Ref. [1] provides an empirical justification for this latter strategy that is indeed the one used by modern chess-playing programs.
During the last decade it became customary to blame all types of power laws in popularity on linear preferential attachment mechanisms first used to explain the Zipf’s law by another Nobel Laureate,
Herbert Simon [14]. According to these models, the high popularity of certain items is a frozen accident self-sustained by fashion. For example, an initially popular website would acquire new links
at a higher rate than its less popular cousins, and as a consequence, further increase its visibility. While in certain situations fashion-driven preferential attachment is likely to be responsible
for long tails of popularity distribution, it is reassuring to know that it is not the case in chess—a quintessential game of skill.
1. B. Blasius and R. Tönjes, Phys. Rev. Lett. 103, 218701 (2009).
2. G. K. Zipf, Human Behaviour and the Principle of Least Effort (Addison-Wesley, Cambridge, MA, 1949)[Amazon][WorldCat].
3. D. J. de S. Price, Science 149, 510 (1965).
4. M. Crovella, M. Taqqu, and A. Bestavros, in A practical guide to heavy tails: statistical techniques and applications, edited by R. J. Adler, R. E. Feldman, and M. S. Taqqu (Birkhäuser, Boston,
5. P. Krugman, J. Japanese Int. Economies 10, 399 (1996).
6. M. Levy and S. Solomon, Int. J. Mod. Phys. C 7, 595 (1996).
7. D. Sornette and R. Cont, J. Phys. I (France) 7, 431 (1997).
8. M. Marsili, S. Maslov, and Y-C. Zhang, Physica 253A, 403 (1998).
9. V. Pareto, Cours d'economie politique (F. Rouge, Lausanne, 1896).
10. S. Maslov and Y.-C. Zhang, Physica, 262A, 232 (1999).
11. M. Huynen and E. Van Nimwegen, Mol. Biol. Evol. 15, 583 (1998).
12. C. E. Shannon, Philos. Mag. 41, 256 (1950).
13. H. A. Simon, Biometrika 42, 425 (1955).
|
{"url":"http://physics.aps.org/articles/v2/97","timestamp":"2014-04-19T10:05:35Z","content_type":null,"content_length":"31561","record_id":"<urn:uuid:8e9dbbbd-fdf5-4a09-8b45-1632ce891bc4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of Structural Dynamics
Structural dynamics
is a subset of
structural analysis
which covers the behaviour of
subjected to
loading. Dynamic loads include people, wind, waves, traffic,
, and blasts. Any structure can be subject to dynamic loading. Dynamic analysis can be used to find dynamic
, time history, and
modal analysis
A static load is one which does not vary. A dynamic load is one which changes with time. If it changes slowly, the structure's response may be determined with static analysis, but if it varies
quickly (relative to the structure's ability to respond), the response must be determined with a dynamic analysis.
Dynamic analysis for simple structures can be carried out manually, but for complex structures finite element analysis can be used to calculate the mode shapes and frequencies.
A dynamic load can have a significantly larger effect than a static load of the same magnitude due to the structure's inability to respond quickly to the loading (by deflecting). The increase in the
effect of a dynamic load is given by the dynamic amplification factor (DAF):
$DAF = frac\right\}\right\}$
where u is the deflection of the structure due to the load.
Graphs of dynamic amplification factors vs non-dimensional rise time (t[r]/T) exist for standard loading functions (for an explanation of rise time, see time history analysis below). Hence the DAF
for a given loading can be read from the graph, the static deflection can be easily calculated for simple structures and the dynamic deflection found.
Time history analysis
A full time history will give the response of a structure over time during and after the application of a load. To find the full time history of a structure's response you must solve the structure's
equation of motion.
A simple single degree of freedom system (a mass, M, on a spring of stiffness, k for example) has the following equation of motion:
$M\left\{ddot\left\{x\right\}\right\} + kx = F\left(t\right)$
: where
is the acceleration (the double
of the displacement) and x is the displacement.
If the loading F(t) is a Heaviside step function (the sudden application of a constant load), the solution to the equation of motion is:
$x = frac\left(1 - cos\left\{\left(wt\right)\right\}\right)$
where ω = $sqrt\left\{frac\right\}$ and the fundamental natural frequency, $f = frac$.
The static deflection of a single degree of freedom system is:
$x_\left\{static\right\} = frac$
: so you can write, by combining the above formulae:
$x = x_\left\{static\right\}\left(1 - cos\left(wt\right)\right)$
This gives the (theoretical) time history of the structure due to a load F(t), where the false assumption is made that there is no damping.
Although this is too simplistic to apply to a real structure, the Heaviside Step Function is a reasonable model for the application of many real loads, such as the sudden addition of a piece of
furniture, or the removal of a prop to a newly cast concrete floor. However, in reality loads are never applied instantaneously - they build up over a period of time (this may be very short indeed).
This time is called the rise time.
As the number of degrees of freedom of a structure increases it very quickly becomes too difficult to calculate the time history manually - real structures are analysed using non-linear finite
element analysis software.
Any real structure will dissipate energy (mainly through friction). This can be modelled by modifying the DAF:
$DAF = 1 + e^\left\{-cpi\right\}$
: where
and is typically 2%-10% depending on the type of construction:
• Bolted steel ~6%
• Reinforced concrete ~ 4%
• Welded steel ~ 2%
Generally damping would be ignored for transient events (for example, an impulse load such as a bomb blast), but would be important for non-transient events (such as wind loading or crowd loading).
Modal analysis
A modal analysis calculates the frequency modes or natural frequencies of a given system, but not necessarily its full time history response to a given input. The natural frequency of a system is
dependent only on the stiffness of the structure and the mass which participates with the structure (including self-weight). It is not dependent on the load function.
It is useful to know the modal frequencies of a structure as it allows you to ensure that the frequency of any applied periodic loading will not coincide with a modal frequency and hence cause
resonance, which leads to large oscillations.
The method is:
a) Find the natural modes (the shape adopted by a structure) and natural frequencies
b) Calculate the response of each mode
c) Optionally superpose the response of each mode to find the full modal response to a given loading
Energy method
It is possible to calculate the frequency of different mode shapes of system manually by the energy method. For a given mode shape of a multiple degree of freedom system you can find an "equivalent"
mass, stiffness and applied force for a single degree of freedom system. For simple structures the basic mode shapes can be found by inspection, but it is not a conservative method. Rayleigh's
principle states:
"The frequency ω of an arbitrary mode of vibration, calculated by the energy method, is always greater than - or equal to - the fundamental frequency ω[n]."
For an assumed mode shape $bar\left\{u\right\}\left(x\right)$, of a structural system with mass, M; stiffness, EI (Young's modulus, E, multiplied by the second moment of area, I); and applied force,
$text\left\{Equivalent mass, \right\} M_\left\{eq\right\} = int\left\{Mbar\left\{u\right\}^2\right\} du$
$text\left\{Equivalent stiffness, \right\}k_\left\{eq\right\} = int\left\{EI\left(frac\right\}\right)\right\}dx$
$text\left\{Equivalent force, \right\}F_\left\{eq\right\} = int\left\{Fbar\left\{u\right\}\right\}dx$
: then, as above:
Modal response
The complete modal response to a given load F(x,t) is $v\left(x,t\right)=sum\left\{u_n\left(x,t\right)\right\}$. The summation can be carried out by one of three common methods:
• Superpose complete time histories of each mode (time consuming, but exact)
• Superpose the maximum amplitudes of each mode (quick but conservative)
• Superpose the square root of the sum of squares (good estimate for well-separated frequencies, but unsafe for closely spaced frequencies)
To superpose the individual modal responses manually, having calculated them by the energy method:
$T = 2pi w$
Assuming that the rise time t[r] is known it is possible to read the DAF from a standard graph. The static displacement can be calculated with $u_\left\{static\right\}=frac\left\{F_\left\{1,eq\right
\}\right\}\left\{k_\left\{1,eq\right\}\right\}$. The dynamic displacement for the chosen mode and applied force can then be found from:
$u_\left\{max\right\} = u_\left\{static\right\}DAF$
Modal participation factor
For real systems there is often mass participating in the forcing function (such as the mass of ground in an earthquake) and mass participating in inertia effects (the mass of the structure itself, M
[eq]). The modal participation factor Γ is a comparison of these two masses. For a single degree of freedom system Γ = 1.
Γ $= frac\left\{sum\left\{M_nbar\left\{u\right\}_n\right\}\right\}\left\{sum\left\{M_nbar\left\{u\right\}_n^2\right\}\right\}$
See also
External links
|
{"url":"http://www.reference.com/browse/Structural+Dynamics","timestamp":"2014-04-17T20:30:04Z","content_type":null,"content_length":"88314","record_id":"<urn:uuid:cdd4e7f9-8f68-427a-aebb-1056ffe1a2d9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Charles' Law
Charles' Law states the relationship between temperature and volume of a gas at a constant pressure.
T/V = k
Where T is the temperature of the gas,
V is the volume
and k is a constant.
In a nutshell, Charles' law says that cooling a gas decreases its volume. If the volume remains fixed then the pressure must decrease. A suitable example of this phenomenon is a scuba tank bursting
because it is left in the trunk of a hot car in the tropical sun (temperature rises, volume is constant and pressure increases).
See our choices for scuba training
|
{"url":"http://www.thescubaguide.com/certification/charles-law.aspx","timestamp":"2014-04-21T12:10:06Z","content_type":null,"content_length":"9633","record_id":"<urn:uuid:21cc0065-074f-4e8d-9f23-4ffedd54165d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spatially Uniform ReliefF (SURF) for computationally-efficient filtering of gene-gene interactions
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Spatially Uniform ReliefF (SURF) for computationally-efficient filtering of gene-gene interactions
Genome-wide association studies are becoming the de facto standard in the genetic analysis of common human diseases. Given the complexity and robustness of biological networks such diseases are
unlikely to be the result of single points of failure but instead likely arise from the joint failure of two or more interacting components. The hope in genome-wide screens is that these points of
failure can be linked to single nucleotide polymorphisms (SNPs) which confer disease susceptibility. Detecting interacting variants that lead to disease in the absence of single-gene effects is
difficult however, and methods to exhaustively analyze sets of these variants for interactions are combinatorial in nature thus making them computationally infeasible. Efficient algorithms which can
detect interacting SNPs are needed. ReliefF is one such promising algorithm, although it has low success rate for noisy datasets when the interaction effect is small. ReliefF has been paired with an
iterative approach, Tuned ReliefF (TuRF), which improves the estimation of weights in noisy data but does not fundamentally change the underlying ReliefF algorithm. To improve the sensitivity of
studies using these methods to detect small effects we introduce Spatially Uniform ReliefF (SURF).
SURF's ability to detect interactions in this domain is significantly greater than that of ReliefF. Similarly SURF, in combination with the TuRF strategy significantly outperforms TuRF alone for SNP
selection under an epistasis model. It is important to note that this success rate increase does not require an increase in algorithmic complexity and allows for increased success rate, even with the
removal of a nuisance parameter from the algorithm.
Researchers performing genetic association studies and aiming to discover gene-gene interactions associated with increased disease susceptibility should use SURF in place of ReliefF. For instance,
SURF should be used instead of ReliefF to filter a dataset before an exhaustive MDR analysis. This change increases the ability of a study to detect gene-gene interactions. The SURF algorithm is
implemented in the open source Multifactor Dimensionality Reduction (MDR) software package available from http://www.epistasis.org.
Technological advances are rapidly improving geneticists ability to measure variation between individuals. Because of these advances, the genome-wide association study is now a common approach to
detecting genetic factors which influence individual susceptibility to common human diseases. Genome-wide association studies targeting common variants which, alone, influence susceptibility have
produced mixed results [1-5]. As currently performed, these studies ignore complex interactions between variants that may lead to disease susceptibility. These are often ignored because methods to
detect these interactions are computationally infeasible or provide insufficient sensitivity.
Epistasis is a term literally meaning "resting upon" which refers to the situation where interacting genes, as opposed to a single gene, influence a trait. Because of the complex architecture of
biological networks, epistasis is likely to be fundamental to an individual's disease risk for common human diseases [6]. This, combined with the knowledge that single-locus results have not
frequently replicated for common human diseases [7,8], indicates that methods to detect and characterize epistasis are likely to be critical to understanding the genetic basis of common human
Detecting and characterizing epistatic interactions in datasets containing large numbers of SNPs is challenging. It requires examining the effect of SNPs not just in isolation, but also in concert
with other SNPs. In a dataset with one million SNPs, a number typically provided by high throughput technologies, there are about 5 × 10^11 pairwise combinations of SNPs. For three-way combinations,
the number is 1.7 × 10^17. For higher order interactions the number of combinations is astronomical. Combinatorial methods which evaluate each such combination are not feasible [9].
Efficient algorithms for identifying sets of SNPs likely to contain predictive models for disease susceptibility are therefore needed. Methods of filtering SNPs are one possibility. These first rank
the attributes by some criterion. Then either the top K SNPs or all SNPs above some threshold T are selected. The SNPs within this set can then be analyzed for interactions using combinatorial
methods. Stochastic search wrappers are another possibility. These wrappers are probabilistic methods which retain the ability to consider all attributes and have the potential to use information
learned early in the search to direct future exploration. Relief algorithms are nearest neighbor based approaches to detecting attributes relevant for some outcome. Relief algorithms are attractive
for use in genetic association studies using either filters or wrappers because the computation time required increases linearly with the number of SNPs and quadratically with the number of
individuals. Importantly, these algorithms are able to detect attributes associated with disease through interactions or independent main effects, although they do not provide a model for the effect
[10]. Instead, information gleaned from these methods can be used as input into other approaches. Stochastic search approaches such as genetic programming [11-13] and ant colony optimization [14] can
successfully develop models in this domain when information from the Relief family of algorithms is used to assist the search, although they fail to detect purely epistatic associations without this
additional information [12]. Motsinger et al. [15] have shown that patterns of correlation between SNPs can make the problem easier to solve in the absence of expert knowledge, although here we
specifically examine uncorrelated SNPs. Moore et al. briefly discuss both filter and wrapper options as part of an overall epistasis analysis strategy for human disease susceptibility [16] and Greene
et al. [17] provide a theoretical analysis of both approaches. For the situation where there is a single source of expert knowledge, the filter approach is most appropriate [17]. In this situation we
are considering the success rate of individual Relief methods, each of which is a single source which meets these assumptions up to a good approximation according to the appendix (Additional file 1).
For this reason we test the ability of these methods to successfully filter a dataset retaining SNPs with an epistatic interaction associated with disease susceptibility.
Numerous variants of Relief have been developed. When applied to genetic association study data these methods use genetically similar individuals or, equivalently, nearest neighbors to adjust weights
which are assigned to each SNP. The nearest neighbor is the nearest individual in the dataset to the current individual calculated across all SNPs. While Relief uses, for each individual, a single
nearest neighbor in each class, ReliefF, a variant of Relief, uses multiple nearest neighbors, and thus is more robust when the dataset contains noise [18]. Moore and White developed a Tuned ReliefF
(TuRF) approach for human genetics [19]. This approach, though requiring more computer time, further improves the performance when the data contain a large number of non-relevant SNPs in addition to
a small number of relevant SNPs. TuRF achieves this by iterating a ReliefF algorithm and, with each iteration, deleting SNPs with the lowest ReliefF weights, i.e. those thought to be least predictive
[19]. SNPs are assigned a weight based on their normalized weights when removed. This iterative approach improves the overall ranking of disease associated SNPs because noisy SNPs are most often
removed. This means that the re-estimation can more accurately evaluate the relevance of the remaining SNPs.
Here we present a new version of Relief, called Spatially Uniform ReliefF or, briefly, SURF. It detects epistatic interactions with a significantly higher success rate than the Relief variant widely
used for machine learning, ReliefF. Iterated SURF, called SURF & TuRF, has a significantly higher success rate than TuRF. For each individual SURF, like ReliefF, adjusts weights of all the SNPs by
using certain neighbors of the individual. While ReliefF uses a fixed number of nearest neighbors, SURF uses all neighbors within a fixed distance of the individual. This distance may be thought of
as a similarity threshold. Thus SURF uses precisely those neighbors more similar than this threshold. ReliefF, on the other hand, may use either fewer or more neighbors, thereby possibly neglecting
informative individuals or including uninformative ones. Furthermore, similarity thresholds which give greater success rate than ReliefF can be estimated from the data while distances are
pre-computed, thus removing a nuisance parameter from the algorithm (see §2 in the appendix). SURF also does not increase the complexity of the algorithm, so the scaling is still linear with respect
to the number of SNPs and quadratic with respect to the number of individuals.
Relief and Spatially Uniform ReliefF (SURF)
All Relief algorithms attach a weight to each SNP. The higher the weight of a SNP, the more likely it is predictive of disease status. Genetically similar individuals are used to adjust these SNP
weights. We define the distance between two individuals as the number of their SNPs with differing genotypes. With this distance metric, nearest neighbors share genotypes at the greatest number of
SNPs, and so are genetically most similar.
Relief algorithms are based on the assumption that those SNPs of nearby individuals which have different states (i.e. differing genotypes) are either most or least predictive of disease status.
Relief algorithms adjust the weights of these SNPs-upward if the two individuals have different disease status, and downward by the same amount if they have the same status. More precisely, the
original Relief algorithm adjusts, for each individual I[i], the SNP weights using I[i]'s nearest hit (the individual which is closest to I[i ]and in the same class as I[i]) and I[i]'s nearest miss
(the individual which is closest to I[i ]and in the other class from I[i]). In the case of SURF, for each individual I[i], this adjustment is done using each hit and miss within a fixed threshold
distance T of I[i]. Figure Figure11 shows graphically how neighbors are selected with each Relief algorithm.
How Relief, ReliefF and SURF select neighbors. Each panel in this figure shows the genotypes at two markers for a dataset of cases and controls. For the purpose of this example only these two markers
will be considered and both are continuous. When analyzing ...
Relief is able to detect epistatic SNPs, even when no single SNP has an effect. We outline how it does this for epistatic pairs. More detail is in the appendix (Additional file 1). All of the
penetrance functions used in this work are available in Additional file 2. We begin with a discussion of epistatic pairs. Consider the penetrance function for the epistatic pair of SNPs shown in
Table Table1.1. If an individual has genotype AA and the genotype of SNP[2 ]is unknown, then the probability the individual is sick is
Penetrance values for an example epistasis model with a heritability of 0.1.
The individual has the same probability of being sick if he has genotype either Aa or aa, provided again that the genotype of SNP[2 ]is unknown. Similarly, if his genotype is either BB, bB or bb with
SNP[1]'s genotypes unknown, the probability he is sick is again .3849. The point is that no single SNP has an effect on disease susceptibility. Only the relevant pair does.
Now we discuss how Relief detects epistatic pairs. Given an individual I[i], we define the set M[kΔ ]to consist of those misses with exactly k of their two relevant SNPs in a different state from
those of individual I[i]. In the case of two relevant SNPs, k = 0, 1 or 2. Note that the miss nearest I[i ]is in exactly one of the three sets M[0Δ], M[1Δ ]or M[2Δ]. Indeed, these partition the set
of all misses. The sizes of the sets M[kΔ ]can be determined (as in §1 of the appendix) from the penetrance function which governs the relationship between genotype and phenotype. As an example, with
a sample size of 1600 and the penetrance function shown in Table Table11 the sizes of these sets are
For the analogous sets involving hits we have
These are actually expected numbers rounded to the nearest integer. Since |M[1Δ]| > |M[2Δ]|, the contribution of the irrelevant SNPs to the distance from I[i ]to its nearest point in M[1Δ ]tends to
be less than that to its nearest point in M[2Δ]. The two relevant SNPs contribute one to the distance from I[i ]to every point of M[1Δ]. For points in M[2Δ], the contribution to this distance is two,
which makes points in M[2Δ ]farther by one from I[i], on average, than points in M[1Δ]. Since the states of the relevant and irrelevant SNPs are independent, it follows that the nearest miss is more
likely to be in M[1Δ ]than M[2Δ]. To be precise for the example in table table11 the probability, according to equation (10) of the appendix, that the closest miss is in M[1Δ ]is
while the probability it is in M[2Δ ]is
We mention that the probability it is in M[0Δ ]is
but do not use this since Relief adjusts weights only for SNPs where pairs of individuals have differing genotypes. The analogous probabilities for hits are
If the nearest miss is in M[2Δ], then the Relief score of both relevant SNPs is increased by one. If it is in M[1Δ], there is a 50% chance that the score of the first relevant SNP is increased by
one. Thus the expected contribution due to misses of individual I[i ]to the score of a relevant SNP is
Using the same notation for hits, except with H in place of M, an analogous discussion gives
as the expected contribution dues to hits of individual I[i ]to the score of a relevant SNP. Thus the expected contribution of individual I[i ]to the score of a relevant SNP is
The value of this for the example we have been considering is .005. The expected contribution of individual I[i ]to the score of an irrelevant SNP is 0. This indicates why Relief tends to assign
higher scores to relevant SNPs than to irrelevant ones.
The analysis of SURF, though mathematically easier, is more subtle. Again, let I[i ]be a random, but fixed, individual. Then, as before, each miss within the threshold distance T of I[i ]is in one of
the three sets M[0Δ], M[1Δ ]or M[2Δ]. For k = 0, 1 and 2, let TM[k ]be the subset of M[kΔ ]consisting of those individuals within distance T of I[i]. Using analogous notation for hits with H in place
of M, the mean contribution of individual I[i ]to the SURF score of a relevant SNP is
The TM[1 ]and TH[1 ]changes the score of a relevant SNP by
Returning now to the example model, specifically expressions (1) and (2), we see that |M[2Δ]| - |H[2Δ]| < 0.
Thus, on average, |TM[2]| - |TH[2]| < 0; however two factors make
Also, elements of M[1Δ ]and H[1Δ ]are, on average, one closer to I[i ]than elements of M[2Δ ]and H[2Δ]. Together these make
and, consequently,
The scores T. In our simulations, we have chosen T to maximize the
Because of the way the variance of this sum varies with T, slightly smaller values of T are probably optimal. This is discussed at the end of §2 of the appendix.
Results and Discussion
Our results suggest that the SURF approaches provide a more successful method for the detection of gene-gene interactions in these data. Figure Figure22 shows both success rate and significance test
results for a single sample size and heritability (1600 and 0.1 respectively). These results indicate that the success rates of the SURF approaches (SURF and SURF & TuRF) are greater than their
corresponding ReliefF approaches (ReliefF and TuRF). Furthermore the step plots show that this difference is highly significant except for the 99^th percentile comparison of ReliefF and SURF. Neither
of the non-iterative approaches is highly effective for filtering to the 99^th percentile for this heritability and sample size, so as a stringent filter the iterative approaches are most useful.
Example Success Rate and Significance of Differences. Part A shows the detailed success rate analysis results for a single heritability (0.1) and sample size (1600). The success rate to filter both
relevant SNPs into percentiles from the 99^th to 50^th ...
Our complete results, shown in figure figure3,3, show that the new SURF algorithm, outperforms ReliefF. Furthermore we see that this increase in success rate is not redundant with the tuned
approaches, as both of these, TuRF and SURF & TuRF, which iteratively remove attributes with low quality estimates, are much better than the standard Relief and SURF approaches at selecting a small
subset which contains the functional attributes. Here we see that these approaches significantly outperform ReliefF and SURF when the task is to filter the dataset to the 99^th or 95^th percentiles
of SNPs. Finally we find that SURF & TuRF outperforms TuRF alone achieving a much greater success rate, particularly at moderate heritabilities. We find that these differences are statistically
significant. The success rate when SURF is used, particularly with larger sample sizes, is consistently significantly greater than the success rate when the standard method, ReliefF is used (see
Additional files 3, 4, 5) for both the "tuned" and non-iterative approaches. Additionally the success rates of these "tuned" algorithms to include the proper SNP in the 99^th and 95^th percentiles
are consistently significantly better than the success rates of the non-tuned approaches (see figure figure22 and Additional files 3, 4, 5).
Success Rate Analysis. This is a summary of success rate as shown in figure 2 across a wide range of sample sizes and heritabilities. Within each heritability the success rates for all five genetic
models for that heritability are averaged. The x-axis ...
Methods which increase success rate without an increase in computational complexity or sample size are extremely desirable for genome-wide association studies. By developing improved methods for
detecting epistasis we greatly expand our ability to characterize interactions in large datasets. Moore argues that when people use sensitive methods to detect epistasis, they are frequently able to
find examples of it [20]. Algorithms which both detect and characterize epistasis in the absence of main effects are of combinatorial complexity for the number of SNPs. The SURF algorithm we
introduce to detect disease associated interacting SNPs is, like ReliefF, of linear complexity for the number of SNPs. Moreover, its success rate for epistasis analysis is higher than ReliefF's. One
caveat is that Relief methods such as SURF, though useful for detecting interacting SNPs, neither identify specific interacting pairs nor develop a model. Because SURF & TuRF is able to detect
interacting genetic variants which are predictive of human health, weights from this algorithm can be used to filter a dataset before traditional combinatorial approaches are used to characterize the
interaction. McKinney et al. have previously integrated ReliefF [21] and TuRF [22] with other information sources using an evaporative cooling technique to direct genetic association analyses. Direct
replacement of ReliefF by SURF & TuRF may improve the sensitivity of these frameworks to detect and characterize interactions.
SURF & TuRF's greatly increased success rate to detect epistasis improves our ability to detect variants leading to disease risk in the absence of main effects. This new distance based approach may
also be extensible to biological and biomedical data beyond case-control genetic association studies. While ReliefF, which SURF & TuRF builds on, is usable for these discrete endpoints and attribute
values, other modifications to ReliefF have extended this machine learning method to other data types. With Regression ReliefF (RReliefF), ReliefF is broadened to handle continuous attributes and
endpoints [23,24]. Future work should examine whether the new distance based approach used for SURF & TuRF also improves these methods. If using a distance threshold also improves RReliefF methods,
the sensitive SURF approach can be applied to continuous gene expression data or to detecting variants predictive of continuous endpoints. With future work it may also be possible to combine
continuous and discrete attributes, to provide a method capable of examining gene-gene, gene-environment, and environment-environment interactions in a common framework.
Now that it is technically and economically feasible to measure large numbers of DNA sequence variations in human genetics, the bioinformatics challenge is to identify and improve methods for
detecting variants which are predictive of disease risk. This is particularly challenging when the task is to identify polymorphisms which have little or no independent effect. The Relief family of
algorithms provides one potential solution for SNP selection, and SURF & TuRF is a novel within this family which effectively detects epistasis. By developing sensitive and computationally efficient
methods capable of detecting epistasis, it becomes more practical to probe datasets for these interactions. Highly sensitive methods will allow researchers to better understand the impact of
epistasis on human health. Both SURF and SURF & TuRF have been included as filtering methods in the user friendly open source Multifactor Dimensionality Reduction (MDR) software package.
As discussed SURF weights can be used for genetic analysis in either filters or probabilistic wrappers. Here we consider the simpler filter approach. Specifically we analyze SURF's ability to filter
a dataset to the 99^th, 95^th and 75^th percentiles of SNPs without removing those SNPs with an interaction effect predictive of disease susceptibility. ReliefF has previously been used in the
genetic analysis of complex diseases in this fashion [25].
The goal of our simulation study is to generate artificial datasets with high concept difficulty to evaluate SURF in the domain of human genetics. We first develop 30 different penetrance functions
(i.e. genetic models) which determine the relationship between genotype and phenotype in our simulated data. These functions determine the probability that an individual has disease given his or her
genotype. This probability depends only on the genotypes of the two interacting SNPs, not on the genotype of any one SNP. The 30 penetrance functions include groups of five with heritabilities of
0.025, 0.05, 0.1, 0.2, 0.3, or 0.4. These heritabilities range from very small to large genetic effect sizes. Each functional SNP has two alleles with frequencies of 0.4 and 0.6. These models are
included in Additional file 2. Each of the models is used to generate 100 replicate datasets with sample sizes of 800, 1600, and 3200. Each dataset consists of an equal number of case (diseased) and
control (disease free) subjects. Each pair of functional SNPs is added to a set of 998 irrelevant SNPs for a total of 1000 attributes. A total of 9,000 datasets are generated and analyzed.
We test each method with the following parameters. All four methods can use some or all of the dataset when performing weight estimations. Here we use the entire dataset, as this is similar to what
is performed in practice where the number of individuals is often more limiting than the computational costs. ReliefF and TuRF require a number of neighbors. Here we use 10, as suggested by
Robnik-Sikonja and Kononenko [24] in a comprehensive analysis. SURF requires a distance threshold. Our theoretical analysis in §2 of the appendix (Additional file 1) suggests that the mean distance
between all pairs of individuals in the dataset and across all attributes can be used and thus we use this distance in this situation. By using the mean distance as calculated from the data, we
remove this nuisance parameter from the algorithm. Both SURF & TuRF and TURF remove a number of SNPs at each iteration before re-estimating the weights of the remaining SNPs. Here we remove 25 SNPs
at each iteration (2.5% of the dataset).
Here, because we are interested in interactions, we consider the success rate to be the number of times that both relevant SNPs are scored above a given threshold. We set this stricter standard here
because further analysis steps can not succeed of both relevant parts of the interaction are not discovered. To estimate the success rate, we use 100 datasets for each of the 30 models. Specifically,
the percentage of datasets for which a method ranks the two relevant SNPs above the N^th percentile of all SNPs is the estimate of the method's success rate. We apply Fisher's exact test to assess
the significance of differences between the success rates of the tested methods at these thresholds. These percentiles represent the situation where each method is used to filter a large dataset with
1000 SNPs to a smaller dataset of 10, 50, and 250 SNPs respectively. Fisher's exact test is a significance test appropriate for categorical count data [26]. The resulting p-value for this test can be
interpreted as the likelihood of seeing a difference of the size observed among success rates when the methods do not differ. We consider results statistically significant when p ≤ 0.05.
Additionally, we graphically show results for filtering to each percentile from the 99^th to the 50^th.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
CSG and JK developed SURF. CSG, NMP and JHM designed and performed the experiments. CSG, NMP, JK and JHM prepared the manuscript. All authors have read and approved the final manuscript.
Supplementary Material
Additional file 1:
Appendix. This is an appendix to accompany the manuscript that includes additional theoretical analysis of the Relief algorithms discussed in the manuscript.
Additional file 2:
Epistasis models. These are the epistasis models used in our data simulation.
Additional file 3:
Significance of differences with a sample size of 800. This is a plot showing the significance of statistical results for the situation where there are 400 cases and 400 control individuals. These
plots follow the example shown in Figure Figure2.2. Pairwise comparisons are made between each pair of methods at the 99^th, 95^th, and 75^th percentiles. ReliefF, SURF, TuRF, and SURF & TuRF are
labeled R, S, T, and ST respectively. Significance is illustrated with levels of grey (i.e. light grey indicates 0.01 <p ≤ 0.05, dark grey indicates 0.001 <p ≤ 0.01, and black indicates p ≤ 0.001).
Additional file 4:
Significance of differences with a sample size of 1600. This is a plot showing the significance of statistical results for the situation where there are 800 cases and 800 control individuals. These
plots follow the example shown in Figure Figure2.2. Pairwise comparisons are made between each pair of methods at the 99^th, 95^th, and 75^th percentiles. ReliefF, SURF, TuRF, and SURF&TuRF are
labeled R, S, T, and ST respectively. Significance is illustrated with levels of grey (i.e. light grey indicates 0.01 <p ≤ 0.05, dark grey indicates 0.001 <p ≤ 0.01, and black indicates p ≤ 0.001).
Additional file 5:
Significance of differences with a sample size of 3200. This is a plot showing the significance of statistical results for the situation where there are 1600 cases and 1600 control individuals. These
plots follow the example shown in Figure Figure2.2. Pairwise comparisons are made between each pair of methods at the 99^th, 95^th, and 75^th percentiles. ReliefF, SURF, TuRF, and SURF&TuRF are
labeled R, S, T, and ST respectively. Significance is illustrated with levels of grey (i.e. light grey indicates 0.01 <p ≤ 0.05, dark grey indicates 0.001 <p ≤ 0.01, and black indicates p ≤ 0.001).
This work is funded by NIH grants LM009012, AI59694, HD047447, and ES007373. The authors would like to thank Mr. Jason Gilmore for his technical assistance.
• Iles MM. What Can Genome-Wide Association Studies Tell Us about the Genetics of Common Disease? PLoS Genet. 2008;4:e33. doi: 10.1371/journal.pgen.0040033. [PMC free article] [PubMed] [Cross Ref]
• McCarthy MI, Abecasis GR, Cardon LR, Goldstein DB, Little J, Ioannidis JPA, Hirschhorn JN. Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nat Rev
Genet. 2008;9:356–369. doi: 10.1038/nrg2344. [PubMed] [Cross Ref]
• Hardy J, Singleton A. Genomewide Association Studies and Human Disease. N Engl J Med. 2009;360:1759–1768. doi: 10.1056/NEJMra0808700. [PMC free article] [PubMed] [Cross Ref]
• Kraft P, Hunter DJ. Genetic Risk Prediction - Are We There Yet? N Engl J Med. 2009;360:1701–1703. doi: 10.1056/NEJMp0810107. [PubMed] [Cross Ref]
• Jakobsdottir J, Gorin MB, Conley YP, Ferrell RE, Weeks DE. Interpretation of Genetic Association Studies: Markers with Replicated Highly Significant Odds Ratios May Be Poor Classifiers. PLoS
Genet. 2009;5:e1000337. doi: 10.1371/journal.pgen.1000337. [PMC free article] [PubMed] [Cross Ref]
• Tyler AL, Asselbergs FW, Williams SM, Moore JH. Shadows of complexity: what biological networks reveal about epistasis and pleiotropy. BioEssays. 2009;31:220–227. doi: 10.1002/bies.200800022. [
PMC free article] [PubMed] [Cross Ref]
• Hirschhorn JN, Lohmueller K, Byrne E, Hirschhorn K. A comprehensive review of genetic association studies. Genet Med. 2002;4:45–61. [PubMed]
• Finckh U. The future of genetic association studies in Alzheimer disease. Journal of Neural Transmission. 2003;110:253–266. doi: 10.1007/s00702-002-0775-7. [PubMed] [Cross Ref]
• Moore JH, Ritchie MD. The Challenges of Whole-Genome Approaches to Common Diseases. JAMA. 2004;291:1642–1643. doi: 10.1001/jama.291.13.1642. [PubMed] [Cross Ref]
• Kira K, Rendell LA. A Practical Approach to Feature Selection. Machine Learning: Proceedings of the AAAI'92. 1992.
• Moore JH, White BC. Exploiting expert knowledge in genetic programming for genome-wide genetic analysis. Lecture Notes in Computer Science. 2006;4193:969–977. full_text.
• Moore JH, White BC. Genome-wide genetic analysis using genetic programming: The critical need for expert knowledge. Genetic Programming Theory and Practice. 2007;4:11–28. full_text.
• Greene CS, White BC, Moore JH. An Expert Knowledge-Guided Mutation Operator for Genome-Wide Genetic Analysis Using Genetic Programming. Lecture Notes in Bioinformatics. 2007;4774:30–40.
• Greene CS, White BC, Moore JH. Ant Colony Optimization for Genome-Wide Genetic Analysis. Lecture Notes in Computer Science. 2008;5217:37–47. full_text.
• Motsinger A, Reif D, Fanelli T, Davis A, Ritchie M. Linkage Disequilibrium in Genetic Association Studies Improves the Performance of Grammatical Evolution Neural Networks. IEEE Symposium on
Computational Intelligence and Bioinformatics and Computational Biology, 2007 CIBCB'07. 2007. pp. 1–8. [PMC free article] [PubMed]
• Moore JH, Gilbert JC, Tsai CT, Chiang FT, Holden T, Barney N, White BC. A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in
genetic studies of human disease susceptibility. Journal of Theoretical Biology. 2006;241:252–261. doi: 10.1016/j.jtbi.2005.11.036. [PubMed] [Cross Ref]
• Greene CS, Kiralis J, Moore JH. Nature-Inspired Algorithms for the Genetic Analysis of Epistasis in Common Human Diseases: Theoretical Assessment of Wrapper vs. Filter Approaches. Proceedings of
the IEEE Congress on Evolutionary Computing. 2009. pp. 800–807.
• Kononenko I. Estimating Attributes: Analysis and Extensions of RELIEF. European Conference on Machine Learning. 1994. pp. 171–182.
• Moore JH, White BC. Tuning ReliefF for Genome-Wide Genetic Analysis. Lecture Notes in Computer Science. 2007;4447:166–175. full_text.
• Moore JH. The Ubiquitous Nature of Epistasis in Determining Susceptibility to Common Human Diseases. Human Heredity. 2003;56:73–82. doi: 10.1159/000073735. [PubMed] [Cross Ref]
• McKinney B, Reif D, White B, Crowe J, Moore J. Evaporative cooling feature selection for genotypic data involving interactions. Bioinformatics. 2007;23:2113–2120. doi: 10.1093/bioinformatics/
btm317. [PubMed] [Cross Ref]
• McKinney BA, Crowe JE, Guo J, Tian D. Capturing the Spectrum of Interaction Effects in Genetic Association Studies by Simulated Evaporative Cooling Network Analysis. PLoS Genet. 2009;5:e1000432.
doi: 10.1371/journal.pgen.1000432. [PMC free article] [PubMed] [Cross Ref]
• Robnik-Sikonja M, Kononenko I. ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc; 1997. An adaptation of
Relief for attribute estimation in regression; pp. 296–304.
• Robnik-Sikonja M, Kononenko I. Theoretical and Empirical Analysis of ReliefF and RReliefF. Mach Learn. 2003;53:23–69. doi: 10.1023/A:1025667309714. [Cross Ref]
• Beretta L, Cappiello F, Moore JH, Barili M, Greene CS, Scorza R. Ability of epistatic interactions of cytokine single-nucleotide polymorphisms to predict susceptibility to disease subsets in
systemic sclerosis patients. Arthritis and Rheumatism. 2008;59:974–83. doi: 10.1002/art.23836. [PubMed] [Cross Ref]
• Sokal RR, Rohlf FJ. Biometry: the principles and practice of statistics in biological research. 3. New York: W. H. Freeman and Co; 1995.
Articles from BioData Mining are provided here courtesy of BioMed Central
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2761303/?tool=pubmed","timestamp":"2014-04-16T08:44:10Z","content_type":null,"content_length":"105441","record_id":"<urn:uuid:4d313a5f-003a-4bbb-8b96-86b58fe233dc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Flushing, NY Calculus Tutor
Find a Flushing, NY Calculus Tutor
...I had been teaching assistant for three courses and four semesters during my graduate studies: Introduction to Medical Imaging, Advanced Medical Imaging and Introduction to Biomaterials at
University of Southern California at Los Angeles during my PhD training. I assisted these courses by develo...
24 Subjects: including calculus, physics, geometry, statistics
Hello, students and/or parents, Thank you for giving me the opportunity to introduce myself to you. I am a current college instructor at the City University of New York (CUNY) in the department of
computer engineering technology. I earned my Master of Science degree in electrical engineering from Stony Brook University, New York.
28 Subjects: including calculus, Spanish, chemistry, geometry
...I've been using apple computers since the first Macintosh classic came out. Although I also have PC's at home and at work, Mac's are my main tools for graphic design, movie editing, and web
design. I consistently follow the technological advances with MacOS, as well as it's connection with iPhones, iPads, and iTunes.
83 Subjects: including calculus, chemistry, physics, statistics
...We can do practices, exercises, and/or homework together, or we can study a topic from the beginning and do exercises to reinforce our understanding. I need cancellation notifications 24-hours
before. As work place, I prefer comfortable and suitable public places generally, like a coffee shop with wide tables, or a library.
25 Subjects: including calculus, physics, statistics, logic
I am a former Wall Street quant with degrees in Math, Computer Science and Finance. I have been tutoring since my days on the Math Team at Stuyvesant HS. I was a high school teacher for a brief
time and, more recently, I taught Probability and Statistics at Stony Brook University for a few years, where I received the President's Award for Excellence in Teaching.
16 Subjects: including calculus, geometry, statistics, GRE
|
{"url":"http://www.purplemath.com/Flushing_NY_calculus_tutors.php","timestamp":"2014-04-20T21:20:16Z","content_type":null,"content_length":"24254","record_id":"<urn:uuid:cdd0eb2a-4241-4a06-b844-2cb6e1832eac>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Strava Stats YTD! - What's yours?
1. 1650miles
Not very much, but I've broken my arm twice!
2. Distance 32 Miles
Time. 5.32h 32m
Elev Gain 4350ft
Rides. 2
3. YEAR-TO-DATE
Distance 4,011.1 mi
Time 287h 58m
Elev Gain 186,178 ft
Rides 177
3/4 of these miles are on skinny wheels , I've turned into a roadie.
4. Distance: 14,115.2 km (8 771 miles)
Time: 706h 42m
Elev Gain: 304,907 m (1,000,351 ft)
Rides: 292
5. YEAR-TO-DATE
Distance 4,132.8 mi
Time 271h 7m
Elev Gain 151,407 ft
6. YEAR-TO-DATE
Distance11,438.7 km
Time 569h 7m
Elev Gain 102,153 m
Rides 256
7. YEAR-TO-DATE
Distance 11,438.7 km (7107 miles)
Time 569h 7m
Elev Gain 102,153 m (335,147 feet)
Rides 256
is there a prize for lowest?
Hope so
Distance 10.7 mi
Time 0h 48m
Elev Gain 787 ft
Rides 2
Only used it the once when i wanted to see how far/high it was to the top of the local forest from my hoose. Turned it off on the trails but missed a couple of mile at the start of return journey
when i forgot to turn it on again. Not really into the stats side of it but will use it again on the big hill as i've a top speed to beat
9. Year-to-Date
Distance 3,395.0 mi
Time 291h 56m
Elev Gain 224,098 ft
Rides 163
mostly off road
11. YTD
Distance 3368m
Time 415Hrs
Elev Gain - can't find it!
Rides 237
Mostly off road
12. YEAR-TO-DATE
Distance 3,544.0 km
Time 240h 35m
Elev Gain 78,807 m
Rides 179
This only includes a very small (<5) percent of my commutes though as I don't routinely Strava these. I guess there's about another 3500km commuting...
13. Didn't log all my rides on strave but my garmin total for the year was 5244 miles. Last ride on the 31st December and it all starts again today.
Been a good year - average hr down, average cadence up, average speed up. Quite surprised that at 46 I'm still improving.
14. Cycling, a feeble,
Distance 1,010.8 km
Time 52h 18m
Elev Gain 7,531 m
Rides 88
Distance 2,372.1 km
Time 219h 55m
Elev Gain 43,384 m
Runs 188
15. 3017 mile, 210,000ft climbing and 86 rides, 50/50 on and off road
End of term report - poor effort, must try harder
16. YEAR-TO-DATE
Distance 2,416.1 mi
Time 138h 33m
Elev Gain 97,005 ft
Rides 143
17. YEAR-TO-DATE
Distance 0.0 mi
Time 0h 0m
Elev Gain 0 ft
Rides 0
Mine seems to have reset :p
18. is there a way to retrieve the 2013s figures now they've all reset to zero?
19. Not using strava but keeping a log, 2510 miles. Something for me to beat this year.
is there a way to retrieve the 2013s figures now they've all reset to zero?
You could use veloviewer.com?
reggiegasket - Member
is there a way to retrieve the 2013s figures now they've all reset to zero?
Go to the training window and upper left where you see the <2014> click the left hand pointer to go back a year.
My figures since getting a Garmin mid-May:
2,229.7 Miles
181 hours
236 activities
57,326 ft climbed.
22. Since the beginning of March last year i've recorded a paltry 1566 miles & 71 rides on Endomondo. Thats not including my daily cycle commute but thats only around 17 miles a week.
Must do better this yr.
23. 12,914 miles
763 hours 5 minutes
423,058ft climbing
647 rides
24. cheers for that eddie
Distance 7891 miles
Time 518 hrs
Elev Gain ?
Rides 287
25. 30miles ytd
Yesterday on the mtb freezing windy mucky local trails
26. Distance 2846km
Time 139h57m
Elev Gain 20821m
Rides 172
Nothing great by the figures already posted, but 25% up on 2012, so happy with that
27. not much mileage but plenty of climbing.
Activity Stats
Activity count: 104
Total dist: 1,052 mi
Total elev: 116,174 ft
Total time: 5d 03:33:41
Max dist: 68 mi - 23/06/13 I'm done and scraped silver medal
Max elev: 6,605 ft - 23/06/13 I'm done and scraped silver medal
Max time: 04:53:25 - 23/06/13 I'm done and scraped silver medal
12,914 miles
763 hours 5 minutes
423,058ft climbing
647 rides
Chapeau, that's some dedicated commuting!
12,914 miles
763 hours 5 minutes
423,058ft climbing
647 rides
Good work!
30. Nowt special and didn't start using strava until February this year, but a fair bit of climbing for a southerner. Includes some classic alpine climbs and the Marmott route - 4000m of climbing in
one hit! Hoping to double it this year.
2198km (1365 miles)
33,329m climbing (109,347ft)
117 hours
129 rides
31. Total Activities 53
Total Distance (mi) 750
Total Time (hrs) 103:29:04
Total Elevation (ft) 90,427
NB: No commuting or road bike mileage! And I (or more accurately she) had a baby and I (definitely not she) almost broke my neck during the summer. I prefer to rack up elevation (or descent but
you have to earn it to burn it) than miles!
32. That veloviewer site is handy, got more than 0 miles this time.
Activity Stats
Activity count:
Total dist:
1,661 mi
Total elev:
24,842 m
Total time:
4d 03:22:10
mtbmatt - Member
Distance: 14,115.2 km (8 771 miles)
Time: 706h 42m
Elev Gain: 304,907 m (1,000,351 ft)
Rides: 292
wow puts anyone to shame, a million feet of climbing
heres mine : ALL MTB (dont own a roadie!)
hours riding : 271
miles : 2,084.7
rides : 114
elevation : 383,014 feet
doesnt look much on paper miles wise but the climbing we do per ride is plenty
You must log in to post.
|
{"url":"http://singletrackworld.com/forum/topic/strava-stats-ytd-whats-yours/page/5","timestamp":"2014-04-18T23:39:00Z","content_type":null,"content_length":"56333","record_id":"<urn:uuid:1b523be1-ea40-4297-a7a1-6696b9c0078f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometry... another sos...
November 19th 2006, 07:59 PM
Geometry... another sos...
Hi There;
Help.. help...
Find the 3x3 matrices that represent the following transformation of 2D space:
a) A rotation of -90 degrees about the origin followed by a translation of (3, -4)
b) A translations of (3, -4) followed by a rotaion of -90 about the origin
c) A rotation of -90 degrees abt the point (3, -4)
b) Find the image of A(1,2) under the transformation (a), (b) and (c) respectively.
Thank you.
November 19th 2006, 08:37 PM
Hi There;
Help.. help...
Find the 3x3 matrices that represent the following transformation of 2D space:
a) A rotation of -90 degrees about the origin followed by a translation of (3, -4)
b) A translations of (3, -4) followed by a rotaion of -90 about the origin
c) A rotation of -90 degrees abt the point (3, -4)
b) Find the image of A(1,2) under the transformation (a), (b) and (c) respectively.
Thank you.
Are you sure you are asked to find 3x3 matrices to represent these transformations?
November 19th 2006, 08:41 PM
yes.. is 3x3...
November 20th 2006, 04:41 AM
|
{"url":"http://mathhelpforum.com/pre-calculus/7773-geometry-another-sos-print.html","timestamp":"2014-04-16T14:59:35Z","content_type":null,"content_length":"5758","record_id":"<urn:uuid:e5880bc5-ef05-4355-8e96-e6bbf690ff23>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Free fraction
Free fraction videos online
Find here free online math videos on these fraction topics:
(Fraction videos, part 1, are on this page.) The videos are recorded in high-density (HD) and are viewable both here as well as at my Youtube channel.
These videos are usable for students, teachers, and parents. You can use them...
• To learn these topics yourself (if you're a student for example, or an adult needing a refresher)
• As lesson plans for teaching these topics. Often, one video from below can be made into several lessons with students.
The videos match the lessons in my book Math Mammoth Fractions 2 (Blue Series book), or the lessons in chapter 6 of Grade 5-B (Light Blue series). In either book, you will get MANY more practice
exercises, word problems, and puzzles than what are shown in the videos, and also some lessons that are not in covered in this set of videos.
Math Mammoth Fractions 2
103 pages
(includes answers) Sample pages (PDF)
Contents and Introduction
Multiplying Fractions by Whole Numbers 1
PDF download $5.00 Multiplying Fractions by Fractions
Printed copy $10.20, b&w Multiplication and Area
Divide Fractions by Whole Numbers
Fractions and Decimals in Measuring Units
Buy download at Kagi
(credit cards, Paypal, check, cash, PO,
wire transfer, money order; see Kagi shopper help)
Buy printed book at Lulu
(credit cards, Paypal, PO)
Este libro en español
Simplifying fractions
First I show the simplification process using visual models and an arrow notation to help students understand the concept. Simplifying fractions is like joining or merging fractional pieces together,
such as in 4/12, we merge each 4 pieces so we get 1/3.
I show how we can simplify a fraction in several steps, instead in one step. If you simplify in one step, you need to use the greatest common factor of the numerator and denominator, but this is not
necessary if you simplify in several steps.
Sometimes you cannot simplify. Lastly we explore if the given fractions are already in their lowest terms.
Multiply fractions by whole numbers
Multiplying fractions by whole numbers is a fairly easy concept. Students just need to remember that 4 x (2/3) is not calculated as (4 x 2) / (4 x 3). In the visual model, you can color two thirds,
four times, to get the answer. Doubling or tripling recipes is a nice application of this concept.
I also show an interesting connection between (1/3) x 5 or one-third of five pies, and 5 x (1/3), or five copies of 1/3.
Multiply fractions by fractions
I start out by explaining that (1/2) x (1/3) means 1/2 of 1/3, and we find that visually. Similarly, (1/3) x (1/4) means 1/3 of 1/4. From this we get a shortcut that (1/m) x (1/n) = (1/mn).
Next, we find what is 2/3 of 1/4. First, we find 1/3 of 1/4 as being 1/12. Therefore, 2/3 has to be double that much, or 2/12.
After introducing the shortcut for fraction multiplication (multiply the numerators, multiply the denominators), I solve a few simple multiplication problems and a word problem.
Lastly, I justify the common rule for fraction division (sort of a "proof" for fifth grade level).
Simplifying before multiplying - fraction multiplication
I explain how we can simplify before we multiply fractions, and also why we are allowed to do so.
Multiplying mixed numbers
Multiplying mixed numbers is easy: simply convert them to fractions first, then multiply using the shortcut (rule) for fraction multiplication.
The difficulty comes in remembering to change them to fractions!
Some students have misconception that you can multiply the whole-number parts and the fractional parts separately, such as (1 1/2) x (1 1/2) = 1 1/4. I show with an area model why that is wrong.
Lastly, I solve a word problem involving area of a rectangle and a square.
Fraction multiplication and area
I explain how fraction multiplication and area of rectangles relate to each other. Basically, if the sides of the rectangle are fractional parts of a unit, then we solve the area by multiplying the
fractions (of course). And, the visual model provides a neat illustration of fraction multiplication.
Divide fractions: mental math
I show two situations where we can divide fractions without using the usual "rule" for fraction division, but just mental math.
The first is when we divide a fraction by a whole number and we can think of pie pieces shared evenly between so many people, such as 6/10 divided by 3.
The other situation is a fraction divided by a fraction, and has to do with thinking "How many times does the divisor 'fit' into the dividend?" such as for example 4/7 divided by 2/7 (two times).
How to divide fractions & reciprocal numbers
I explain what reciprocal numbers are, including a visual interpretation for them. Then we study the rule for fraction division: To divide by a fraction, multiply by its reciprocal.
Lastly, I explain why this rule works, based on reciprocal numbers and on interpreting division as "how many times does the divisor fit into the divi
Fractional part of a group
This is sort of a "lesson plan" for finding a fractional part of a group of objects, or a fractional part of a number. Basically, 1/3 of 18 is a division problem 18 divided by 3. (To find a
fractional part of a number when the fraction is of the form 1/n, just divide by n.) And to find 5/11 of 44, first find 1/11 of 44, which is 44 divided by 11 = 4. Then, multiply that times 5.
Ratios and fractions
A ratio is a comparison of two numbers (or quantities) using division. For example, if I have 4 hearts and 3 stars, then the ratio of hearts to stars is 4:3 (four to three).
We can often "translate" between ratio language and fraction language.
Lastly I work several word problems that involve ratios and fractions, using a bar (or block) model, a powerful visual aid that helps students from grade 4 onward solve problems that would otherwise
require algebra.
How to convert fractions into decimals
Some fractions we can convert first to equivalent fractions with a denominator 10, 100, 1000 etc. and then from those, into decimals. But most often, to convert a fraction into a decimal, we need to
divide (long division or calculator).
For example, to convert 5/7 into a decimal, divide 5 by 7.
Sometimes, in such a division, the decimal ends. More often though, it is a nonending repeating decimal. We see that when in our long division the same remainders keep coming up in the same order.
Such decimals repeat part of their decimal digits, such as 0.13131313.... or 0.83567567567567... (567 repeats)
You might ask if decimals that don't repeat exist. Yes, they do. They are irrational numbers, meaning they are NOT fractions (not rational numbers), and they are quite a fascinating topic in
Divide fractions: an alternative algorithm
In this alternative algorithm for fraction division we first convert the two fractions to have the same denominator (like fractions). Then, we simply divide the numerators, so it becomes a
whole-number division. I also show a proof of this method.
See also free fraction videos, part 1: mixed numbers, equivalent fractions, adding and subtracting fractions & mixed numbers.
The videos match the lessons in my book Math Mammoth Fractions 2 (Blue Series book), or the lessons in chapter 6 of Grade 5-B (Light Blue series). In either book, you will get MANY more practice
exercises, word problems, and puzzles than what are shown in the videos, and also some lessons that are not in covered in this set of videos.
Math Mammoth Fractions 2
103 pages
(includes answers) Sample pages (PDF)
Contents and Introduction
Multiplying Fractions by Whole Numbers 1
PDF download $5.00 Multiplying Fractions by Fractions
Printed copy $10.20, b&w Multiplication and Area
Divide Fractions by Whole Numbers
Fractions and Decimals in Measuring Units
Buy download at Kagi
(credit cards, Paypal, check, cash, PO,
wire transfer, money order; see Kagi shopper help)
Buy printed book at Lulu
(credit cards, Paypal, PO)
Este libro en español
|
{"url":"http://www.mathmammoth.com/videos/fractions_2.php","timestamp":"2014-04-18T00:19:48Z","content_type":null,"content_length":"30824","record_id":"<urn:uuid:4ac91344-b303-4039-91fd-1649edd7144f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electrical Engineering Formulas Cheat Sheet
Electrical Engineering Formulas Cheat Sheet PDF
Sponsored High Speed Downloads
electrical formulas cheat sheet Best Results From Wikipedia Yahoo Answers Youtube Electrical Formulas, Calculations, ... Aug 13, 2012 · Download basic Electrical engineering complete pdf for 1st
year.. 11:20 Sushil Kumar. ... Linux--Ubuntu reference terminal commands and Cheat sheet.
What do complex numbers have to do with electrical engineering? What good are they? Before you say, “Beam me up, Scotty,” consider the following circuits problem: ... Note the following immediate
implications (these are worth writing on your exam cheat sheet):
Engineering Formulas, 6th Edition By Gieck, Kurt and Reiner Gieck ... The PE Review Cheat Sheet ITE Traffic Engineering Handbook, 2009, 6th edition, ... School of Electrical and Computer Engineering
ECE 440 Transmission of Information Class ...
Department of Electrical and Computer Engineering . New Jersey Institute of Technology . ECE 361: Electromagnetic Fields I (3 credits, 3 ... Permitted: definitions, units, formulas, geometry that
define parameters in formulas; equivalent circuits. Homework Policy. The problems will be assigned ...
Batten College of Engineering and Technology Department of Electrical and Computer Engineering ... and only a “cheat sheet” with handwritten formulas may be used. Details on how to prepare this sheet
will be given in the lecture before
calculated by formulas that you create. Line items on an estimate can be automatically extended ... they can be easily updated on your spreadsheet. Setting up a Summary Sheet The summary sheet is a
quick overview of all the costs of construction broken down into major work ...
PHYS 401 Physics of Ham Radio 26 Basic Electronics Chapter 2, 3A (test T5, T6) Basic Electrical Principles and the Functions of Components Figures in this course book are
Electrical and Electronic Symbols.....43 4.5.2. Electrical and Electronic Diagrams ... Figure 30a: Engineering Order (sheet 1) EO ENGINEERING ORDER GODDARD SPACE FLIGHT CENTER GREENBELT,MARYLAND OF
DRAWING & REV LEVEL EO NO. INC REV
This Study Guide is designed to familiarize you with the advanced electrical and fundamental ... • Use the Reference Sheet on page 15 to find the figures and formulas you will need. ... Reference
Sheet Formulas
Department of Chemical Engineering - Tuskegee University formulas, ... Computer and Electrical Engineering ... and Transforms, 4th Edition, Prentice Hall, ... The PE Review Cheat Sheet Part III Water
Engineering ... 4th edition, 2007, ...
EE 215, Fundamentals of Electrical Engineering, ... Complex formulas (such as trigonometric identities) ... If you cheat, you cheat yourself of the opportunity to learn the material, ...
School of Electrical and Computer Engineering ECE 440 Transmission of Information Class Information ... crib sheet, and no calculator. However, some tables or formulas may be provided. The dates for
these exams cannot be changed once they are fixed. ... Cheat. Course Outcomes A ...
My old cheat sheet was hand written, well used (and abused), ... You can find a summary table of simplified equations for the series ↔ parallel transformation in many electrical engineering texts.
... Don’t have the time to derive formulas for magnitude and phase of common passive circuits?
• Understand basic chemical engineering jargon and terminology • Process Development Engineers • Industrial Engineers • Electrical Engineers • Mechanical Engineers ... SHEET • Process Flow Diagrams
(PFDs) • Piping and Instrumentation Diagrams
Reference Sheet Formulas V avg E = IR P = IE = V ⋅ 637 V = V ⋅ 707 peak rms peak Figure 1. A B C D E F G H I J K L Figure 2. 15 ... Refer to Figure 1 on the Reference Sheet. Which drawing is the
electrical symbol for a source of energy? a. A b. C c. I d. J Section 3 Page 4 ...
For definitions and details of joint type, please consult CWB Module 2, Engineering Drawings, Basic Joints and Preparation for ... following formulas: ATS (ins/min) = Measured Weld Length in ...
details will be shown on the specific Welding Procedure Data Sheet(s). Electrical Characteristics
Effective February 10, 2009 ABC Formula/Conversion Table for Wastewater Treatment, Industrial, Collection and Laboratory Exams Alkalinity, as mg CaCO3/L =
Math & Engineering Cheat Sheet Transformation from time space to frequency space This is a topic that’s taught in college level engineering, and it represents a conceptual
ISE 541: Occupational Safety Engineering Spring 2013 Wednesday: 4:30-7:15pm ... Thermal stress; electrical hazards; industrial noise and vibration hazards; fall hazards and ... 8.5” x 11”) for
formulas, etc.
Formulas ... NEMA Ratings (electrical enclosures)..... 19 IP Ratings (electrical enclosures) ... FRL Cheat Sheet ..... 21 3 Courtesy of Steven Engineering, Inc.-230 Ryan Way, South San Francisco, CA
THERMODYNAMICS,), PROBLEMS AND FORMULAS ... ELECTRICAL & POWER ENGINEERING ELECTRONICS EBOOK COLLECTION ENTHEOGEN REFERENCE (20 EBOOKS) ... Calculus Cheat Sheet Integrals.pdf . 18.
CalculusFormulas2003.doc . 19. Call of Cthulhu D20 - Adventure - Epic.pdf .
Apply the area and perimeter formulas for rectangles in real world and mathematical problems. ... and the great engineering projects that bring . 4.35 . ... growing population and its expanding need
for electrical power.
The IDEALbender line gives you the engineering design, indicator marks and durability to bend conduit with ease and confidence. IDEAL INDUSTRIES, INC. Sycamore, IL 60178, U.S.A. 800-304-3578 Customer
Assistance www.idealindustries.com ND 1534-2 Made in U.S.A.
plications in electrical engineering. Prerequisites: Stat 151, 153, ... book, and one double-sided sheet of formulas to each in-class exam. Absolutely no calculators, laptops, ... Do not cheat. Do
not collude. Do not fabri-
and digital/electrical meters; and metering calculations and formulas (for example, consumption, load, etc ... Engineering. Revised 2/10/2009. Can be found on Portal MSO Information Center General
Information.* SCE Meter Catalogs.
... engineering, technology and ... straight wire, uniformly charged infinite plane sheet and uniformly charged thin spherical shell (field inside and outside). 7 ... electrical energy and power,
electrical resistivity and conductivity. Carbon resistors, ...
|
{"url":"http://ebookily.org/pdf/electrical-engineering-formulas-cheat-sheet","timestamp":"2014-04-23T11:04:06Z","content_type":null,"content_length":"29070","record_id":"<urn:uuid:9e7e7521-7286-4b05-bd68-f68bcace0718>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math::Series - Perl extension dealing with mathematic series
use Math::Series;
my $x_n = Math::Series->new( formula => 'n*x',
start_value => 1,
iteration_var => 'n',
previous_var => 'x',
start_index => 0,
cached => 1 );
print $x_n->next(), "\n" foreach 0..5;
# prints 1, 2, 6, 24...
print $x_n->at_index(3);
# prints 24
Math::Series defines a class for simple mathematic series with a recursive definition such as x_(n+1) = 1 / (x_n + 1). Such a recursive definition is treated as a sequence whose elements will be
added to form a series. You can refer to the previous sequence element as well as to the current index in the series. Creation of a Math::Series object is described below in the paragraph about the
Math::Series uses Math::Symbolic to parse and modify the recursive sequence definitions. That means you specify the sequence as a string which is parsed by Math::Symbolic. Alternatively, you can pass
the constructor a Math::Symbolic tree directly.
Because Math::Series uses Math::Symbolic for its implementation, all results will be Math::Symbolic objects which may contain other variables than the sequence variable and the iterator variable.
Each Math::Series object is an iterator to iterate over the elements of the series starting at the first element (which was specified by the starting element, the second argument to the new()
constructor). It offers facilities to cache all calculated elements and access any element directly, though unless the element has been cached in a previous calculation, this is just a shortcut for
repeated use of the iterator.
Every element in the series may only access its predecessor, not the elements before that.
Math::Series defines the following package variables:
This scalar contains a Parse::RecDescent parser to parse formulas. It is derived from the Math::Symbolic::Parser grammar.
This scalar indicates whether Math::Series should warn about the performance implications of using the back() method on uncached series. It defaults to true.
The constructor for Math::Series objects. It takes named parameters. The following parameters are required:
The formula is the recursive definition of a sequence whose elements up to the current element will be summed to form the current element of the series. The formula may contain various
Math::Symbolic variables that are assigned a value elsewhere in your code, but it may also contain two special variables: The number of the current iteration step, starting with 0, and the
previous element of the series.
The formula may be specified as a string that can be parsed by a Math::Symbolic parser or as a Math::Symbolic tree directly. Please refer to the Math::Symbolic and Math::Symbolic::Parser man
pages for details.
This parameter defines the starting value for the series. It used as the element in the series that is defined as the lowest series element by the start_index parameter. The starting value
may be a string that can be parsed as a valid Math::Symbolic tree or a preconstructed Math::Symbolic tree.
The following parameters are optional:
The iteration variable is the name of the variable in the Math::Symbolic tree that refers to the current iteration step. It defaults to the variable 'n'.
It must be a valid Math::Symbolic variable identifier. (That means it is /[A-Za-z][A-Za-z0-9_]*/.)
The previous_var parameter sets the name of the variable that represents the previous iteration step. It defaults to the name 'x' and must be a valid Math::Symbolic variable identifier just
like the iteration variable.
This parameter indicates whether or not to cache the calculated series' elements for faster direct access. It defaults to true. At run-time, the caching behaviour may be altered using the
cached() method.
The lower boundary for the series' summation. It defaults to 0, but may be set to any positive integer or zero.
The next() method returns the next element of the series and advances the iterator by one. This is the prefered method of walking down a series' recursion.
Returns a true value if the series is currently being cached, false if it isn't. By default, new objects have caching enabled. It is suggested that you only disable caching if space is an issue
and you will only walk the series uni-directionally and only once.
cached() can be used to change the caching behaviour. If the first argument is true, caching will be enabled. If it is false, caching will be disabled.
Returns the index of the current element. That is, the index of the element that will be returned by the next call to the next() method.
This method also allows (re-)setting the element that will be next returned by the next() method. In that case, the first argument shoudl be the appropriate index.
Returns undef and doesn't set the current index if the argument is below 0.
This method returns the series element with the index denoted by the first argument to the method. It does not change the state of the iterator. This method is extremely slow for uncached series.
Returns undef for indices below the starting index.
This methods returns the series element previously returned by the next() method. Since it is extremely slow on uncached series, it warns about this performance hit by default. To turn this
warning off, set the $Math::Series::warnings scalar to a false value.
This method decrements the current iterator series element.
Returns undef if the current index goes below the starting index.
Steffen Mueller, <series-module at steffen-mueller dot net<gt>
You may find the current versions of this module at http://steffen-mueller.net/ or on CPAN.
This module is based on Math::Sequence. Math::Symbolic and Math::Symbolic::Parser for the kinds of formulas accepted by Math::Series.
|
{"url":"http://search.cpan.org/~smueller/Math-Series-1.01/lib/Math/Series.pm","timestamp":"2014-04-16T05:18:30Z","content_type":null,"content_length":"19290","record_id":"<urn:uuid:e34e191e-32d9-4adc-ab11-e6e763bca61f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
& Continuing Education
The 36-credit program in Computational Science is particularly designed for students with interest and preparation in science, engineering, mathematics, and/or computer science. The program prepares
such students for a future in which computation will play an ever-increasing role in solving science and engineering problems and in creating new scientific knowledge. Specifically, the program is
designed as a professional master’s degree that provides students with a set of highly marketable skills applicable to many areas of science, industry, business, and government.
Although the program is designed for individuals having a wide range of academic and work backgrounds, appropriate preparation for the program involves an understanding of science, typically
demonstrated by at least an academic minor in a traditional science field, as well as some basic mathematics, statistics, and computer science coursework (see Admission requirements). Given the
appropriate preparatory coursework, the program can be completed in 1.5 years.
Students enrolled in this program will:
• learn a high-level language
• acquire knowledge of applied mathematics
• demonstrate knowledge of computational methods
• learn and apply simulation and modeling skills
• be able to apply computational modeling techniques to one or more STEM (science, technology, engineering, mathematics) disciplines
• learn to communicate the solution process effectively
Admission. Applicants must meet the general admission requirements of the Graduate School, including having a minimum GPA of 3.0 and submitting letters of recommendation, transcripts, and a personal
essay. In addition, applicants should have the equivalent of a minor in a science or engineering field as well as basic coursework in mathematics (e.g., calculus and linear algebra), statistics, and
computer science (e.g., a course in programming and one in simulation and modeling). Students not meeting the general admission requirements or lacking preparation may be admitted provisionally
assuming they complete the preparatory coursework either at Valparaiso University or another institution prior to full admission to the program. For international students, a minimum TOEFL score of
80 or IELTS of 6.0 is required.
Curriculum. Students complete four required core courses built around statistics, databases, and visual imaging, and take at least two courses (6 cr) in computational applications in science,
engineering, or other applied areas. Students also complete either an internship experience or a research project. To allow specialization, students fill out the program with elective coursework in
computational science applications, mathematics, or computer science.
┃Core Requirements (12 credits) ┃
┃MATH 540 │Statistics for Decision Making │3 cr ┃
┃IT 603 │Information Management │3 cr ┃
┃IT 633 │Data Mining │3 cr ┃
┃CS 525 │Simulation and Modeling │3 cr ┃
┃Core Applications in Computational Science (6 credits) ┃
┃Choose minimum of two: │ ┃
┃MATH 521 │Mathematical Models of Infectious Disease │3 cr ┃
┃CTS 530 │Meteorological Computer Applications │3 cr ┃
┃CTS 560 │Computational Molecular Science │3 cr ┃
┃CTS 610 │Business Analytics │3 cr ┃
┃CTS 620 │Bioinformatics │3 cr ┃
┃CTS 640 │Topics in Biostatistics │3 cr ┃
┃CTS 650 │Computational Social Science │3 cr ┃
┃Experiential Training (3 credits) ┃
┃Choose one: │ ┃
┃CTS 786 │Internship │1-3 cr┃
┃CTS 792 │Research Project │1-3 cr┃
┃Electives (15 credits may be selected from Core Applications or from below) ┃
┃CS 565 │Interactive Computer Graphics │3 cr ┃
┃CS 572 │Computability and Complexity │4 cr ┃
┃CTS 545 │Evolutionary Algorithms │3 cr ┃
┃CTS 550 │Scientific Visualization │3 cr ┃
┃CTS 590 │Topics in Computational Science │3 cr ┃
┃CTS 690 │Advanced Topics Computational Science │3 cr ┃
┃GEO 515 │Advanced Geographic Information Systems │3 cr ┃
┃IT 664 │Natural Language Technologies │3 cr ┃
┃MATH 520 │Dynamical Systems │3 cr ┃
┃MATH 522 │Optimization │3 cr ┃
┃MATH 523 │Game Theory │3 cr ┃
┃MATH 530 │Partial Differential Equations │3 cr ┃
┃MATH 543 │Time Series Analysis │3 cr ┃
┃MATH 544 │Applied Probability and Statistical Decision Theory │3 cr ┃
┃MATH 570 │Numerical Analysis │3 cr ┃
┃MATH 571 │Experimental Mathematics │3 cr ┃
┃MATH 530 │Numerical Weather Prediction │3 cr ┃
Special 4+1 BS/MS Program Option for undergraduate students at Valparaiso University
Undergraduate students at Valparaiso University may complete the MS in Computational Science in one year by following a special track that ensures completion of all admission requirements and allows
elective graduate computational science coursework during their senior year. As part of their undergraduate study, such students will have either:
1. earned a mathematics or computer science major along with a science minor, or,
2. earned a minimum major in one of the natural sciences or engineering fields (e.g., Astronomy, Biology, Biochemistry, Chemistry, Environmental Science, Meteorology, Physics or any field of
Engineering) and completed the follow mathematics and computer science courses with a B or higher:
• MATH 131 Analytic Geometry and Calculus I (Prerequisite: Precalculus)
• MATH 264 Linear Algebra I, or equivalent
• MATH 240 Statistical Analysis, or equivalent
• CS 157 Algorithms and Programming
• CS 525 Simulation and Modeling (during their junior or senior year)
Students are also encouraged to take MATH 132 Calculus II and CS 158 (Algorithms and Abstract Data Types).
Students meeting the above requirements with a 3.2 overall GPA and a 3.0 science or engineering GPA will be guaranteed admission to the 4+1 BS/MS program. Others may be considered on an individual
basis. Students interested in pursuing this track should consult with the Graduate Office and/or the Computational Science Program Director during their junior year or, at the latest, in the fall of
their senior year.
Valparaiso University students pursuing the BS/MS track that have completed MATH 340 or CS 325 during their undergraduate study rather than MATH 540 or CS 525 may have these core requirements waived
if the course instructor or their academic adviser confirms that graduate level requirements for the courses have been successfully completed.
|
{"url":"http://www.valpo.edu/grad/compsci/index_print.php","timestamp":"2014-04-16T17:18:20Z","content_type":null,"content_length":"13669","record_id":"<urn:uuid:d0de1d79-b8ac-4ae1-ac08-915b5fdc9fc1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adress: College of Nyíregyháza
Department of Mathematics
4400 Nyíregyháza, Sóstói út 31/b
Tel: (42) 599-460
Fax: (42) 402-485
Head of Department: Dr. Zoltán Kovács
Members of the Department
Please visit our electronic mathematical journal Acta Mathematica Nyíregyháziensis, the journal of College of Nyíregyháza, (Hungary) publishes a papers in any parts of mathematics and informatics.
The papers will be refered in the traditional manner, with one anonymus referee. The journal appears in one volume per annum only in electronic form.
IJMTL, International Journal for Mathematics Teaching and Learning was founded by the Centre for Innovation in Mathematics Teaching, CIMT ) and the Institute of Mathematics and Informatics of College
of Nyíregyháza in 2000.
The journal published only in electronic form aims to enhance mathematics teaching for all ages, through relevant articles, reviews and information from around the word.
Research in Mathematics:
Algebraic investigations focus on modular group algebras, representations of finite dimension algebras, embedding problems, as well as on solvability and Engel length of group algebras and of
associated Lie algebras. Besides, researches on fuzzy groups and relations are also carried out. In geometry we study the field of Finsler spaces,namely their geodetic and homology theory. In
combinatorics there are researches concerning optimal ordering algorithms. In analysis our main research topic is dyadic harmonic analysis.Further,there are investigations on linear recursive
series.In pedagogy and methodology we wrote textbook series for general schools (upper four classes),worked out methods for their usage.We compiled "local" and "frame" curricula for these
schools,too. There are researches in the history of mathematics, mainly in the history of Hungarian mathematics in XXth century.
- In algebra the close link between group commutators and associated Lie algebras was discovered.In certain cases the Engel length and nilpotent class of group algebras and its associated Lie
algebras were determined.We characterised those modular group algebras in which symmetric units form a subgroup.We dealt with the representations of algebras of finite dimension.We managed to give a
perfect answer to the question:when can the coronary product of two P-order groups be embedded into the unit group of the group algebra.In certain cases we determined the order of the unitary
subgroup and gave its generators.
- In geometry we reached results in differential geometry.Chern-Weil-type homomorphism was constructed on a manifold with l-form.We extended Masumoto investigations on Finsler spaces with conic
section geodetic to the ones with ellipse geodetic.
- In analysis: 1. some important problems of the theory of 2-adic integers were solved: Proof of Fejér-Lebesgue theorem on their characters solve a 25 year old problem. 2. The fundamental theorem of
dyadic derivate in two-variable case was studed in the Walsh (1995) and bounded Vilenkin case. 3. Verification of double Walsh-Fejér means in spaces wider than L log L. The problem is from 1938. 4.
Proof of Fejér-Lebesgue theorem on unbounded Vilenkin groups answering one of the most sought-after question of dyadic analysis. 5. Nowadays we investigate questions related to the Marcinkiewicz
means (with respect to Vilenkin and Walsh-Kaczmarz systems) and logarithmic means (with respect to Vilenkin and Vilenkin like systems). Research goes on the character systems of noncommutative groups
too. For more details see the homepage of Dyadic Harmonic Analysis Group.
- As for methodology, a textbook series of Muszaki Publisher House was completed that are now used in 95% of general schools. To these books helping materials were prepared including evaluation tests
that make possible a nation-wide unified measurement of pupils' mathematical performance. Some members of our department are co-authors in this series. "Local" and "frame" curricula were also
compiled. Since 1994 we have been involved in the Kassel/Exeter and the IPMA international projects
and we also help the MEP mathematics teaching experiment with teacher support and pupil's material in the U.K.
- By historical research we explored the life and work of some emigrated XXth century Hungarian mathematicians who were unknown in Hungary.We managed to solve some important problems of antique Greek
mathematics concerning incommensurability.
|
{"url":"http://www.nyf.hu/mattan/node/41","timestamp":"2014-04-16T19:49:28Z","content_type":null,"content_length":"16493","record_id":"<urn:uuid:9eee5754-f756-4d32-bfc4-0bc4c61f7cd7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Tutorial on Variable Neighborhood Search
A Tutorial on Variable Neighborhood Search (2003)
Download Links
by Pierre Hansen , Nenad Mladenović
Venue: LES CAHIERS DU GERAD, HEC MONTREAL AND GERAD
Citations: 16 - 2 self
author = {Pierre Hansen and Nenad Mladenović},
title = {A Tutorial on Variable Neighborhood Search},
institution = {LES CAHIERS DU GERAD, HEC MONTREAL AND GERAD},
year = {2003}
Variable Neighborhood Search (VNS) is a recent metaheuristic, or framework for building heuristics, which exploits systematically the idea of neighborhood change, both in the descent to local minima
and in the escape from the valleys which contain them. In this tutorial we first present the ingredients of VNS, i.e., Variable Neighborhood Descent (VND) and Reduced VNS (RVNS) followed by the basic
and then the general scheme of VNS itself which contain both of them. Extensions are presented, in particular Skewed VNS (SVNS) which enhances exploration of far away valleys and Variable
Neighborhood Decomposition Search (VNDS), a two-level scheme for solution of large instances of various problems. In each case, we present the scheme, some illustrative examples and questions to be
addressed in order to obtain an efficient implementation.
10923 Computers and Intractability: A Guide to the Theory of NP-Completeness - Garey, Johnson - 1979
3501 Optimization by simulated annealing - Kirkpatrick, Gelatt, et al. - 1983
287 Modern Heuristic Techniques for Combinatorial Problems - Reeves, editor - 1993
201 P.: Variable neighborhood search - Mladenovic, Hansen - 1997
108 A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics 123(1–3):75–202 - Ahuja, Ergun, et al. - 2002
97 An introduction to variable neighborhood search - Hansen, Mladenovíc - 1999
58 Local branching - Fischetti, Lodi - 2002
57 Heuristic methods for estimating generalized vertex median of a weighted graph - Teitz, Bart - 1968
37 N.: Variable Neighborhood Search for the p-median - Hansen, Mladenović - 1997
35 J-means: A new local search heuristic for minimum sum-of-squares clustering. 34(2):405–413, 2001 - Hansen, Mladenović
34 A new genetic algorithm for the quadratic assignment problem - Drezner
34 A hybrid GRASP with perturbations for the Steiner problem in graphs - Ribeiro, Uchoa, et al. - 2002
33 N.: A variable neighborhood search for graph coloring - Avanthay, Hertz, et al.
33 D.: Variable neighborhood decomposition search - Hansen, Mladenović, et al.
30 Variable neighborhood search for extremal graphs. 1 - Caporossi, Hansen
21 Accelerating Strategies in Column Generation Methods for Vehicle Routing and Crew Scheduling Problems - Desaulniers, Desrosiers, et al. - 2002
13 On the implementation of a swap-based local search procedure for the p-median problem - Werneck - 2003
7 Local search and variable neighborhood search algorithms for vehicle routing with time windows, Acta Wasaensia 87, Universitas Wasaenis - Braysy - 2001
7 A local branching heuristic for mixed-integer programs with 2-level variables, Research report - Fischetti, Polo, et al. - 2003
6 Effective local and guided Variable neighborhood search methods for the asymmetric traveling salesman problem - Burke, Cowling, et al. - 1999
6 Variable neighborhood search for weighted maximum satisfiability problem, Les Cahiers du GERAD G-2000-62 - Hansen, Jaumard, et al. - 2000
5 Heuristic Algorithms for the Solution of the Quadratic Assignment Problem - Drezner
3 Scheduling workover rigs for onshore oil production, Research Report Dpt - Aloise, Aloise, et al. - 2003
3 Variable neighborhood search,Computers and - Mladenovic, Hansen - 1997
2 Heuristics for routing-median problems - Rodriguez, Moreno-Vega, et al. - 1999
1 Cahiers du GERAD G–2003–46 22 - Les - 1990
1 Programmes mathématiques en variables 0-1 - Hansen - 1974
1 Les procédures d’optimization et d’exploration par séparation et évaluation, Combinatorial Programming - Hansen - 1975
1 Primal - dual variable neighborhood search for exact solution of the simple plant location problem (in preparation - Hansen, Brimberg, et al. - 2003
1 Mladenović (2001a). Variable neighborhood search: Principles and applications - Hansen, N
1 Mladenović (2001c). Developments of variable neighborhood search - Hansen, N
1 Cahiers du GERAD G–2003–46 23 P. Hansen and N. Mladenović (2002a). Variable neighborhood search - Les
1 2002b). Recherche à voisinage variable - Hansen, Mladenović
1 Moreno Pérez (2003). Búsqueda de Entorno Variable - Hansen, Mladenović, et al.
1 Survey and comparison of initialization methods for k-means clustering (in preparation - Hansen, Ngai, et al. - 2003
1 Lokalnii poisk s chereduyshimisy okrestnostyami (in Russian), Diskretnoi analiza (to appear - Kochetov, Mladenović, et al. - 2003
1 J.A.Moreno Pérez and J.M.Moreno Vega (2002). The parallel variable neighborhood search for the p-median problem - Lopez, Batista
|
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.2350","timestamp":"2014-04-18T02:11:28Z","content_type":null,"content_length":"30702","record_id":"<urn:uuid:88d004f2-afba-4fd8-bc77-45ccc3219757>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Running Head: Statistical Significance Bars
Statistical significance bars (SSB):
A way to make graphs more interpretable
Christian D. Schunn
University of Pittsburgh
Author notes: Christian D. Schunn, LRDC. Correspondence concerning this article should be addressed to: Christian Schunn, LRDC 821, 3939 O'Hara St, Pittsburgh, PA 15260 or by E-mail at:
Schunn, C. D. (1999). Statistical significance bars (SSB): A way to make graphs more interpretable. Unpublished manuscript.
While line and bar graphs are a very common medium for presenting data in psychology, currently they rarely present information that the reader would like to know: which means are statistically
significant from one another. I present a new kind of error bar, statistical significance bars (SSBs), that allows the reader to more directly and easily infer whether any two means in the graph are
statistical significant from one another. This method works for both with and between subjects designs and is based directly upon standard statistical tests that the authors would normally use for
pairwise means comparisons.
Tables, line graphs, and bar graphs are a very common way of displaying data in the sciences, especially in psychology. Line graphs and bar graphs are thought to have an advantage over tables in that
they make it easy for the reader to notice effects of variables-that is, they make salient the relative height of points or lines (Kosslyn, 1989; Loftus, 1993; Shah & Carpenter, 1995). However,
simply knowing which points are above or below other points is not typically sufficient in determining the effects of variables. Because the researcher has typically gathered only a small subset of
all possible data, there is a question of statistical significance: how certain are we that the true population means actually differ? Here again, line graphs and bar graphs have the potential of
being more informative than tables. By adding the appropriate error bars to the graph, simple visual comparisons may be used to determine how likely it is that the differences between sample means
can be attributed to noise. Unfortunately, the appropriate error bars are never used.
There are two senses in which appropriate error bars are not used. First, most line and bar graphs display no error bars at all.1 This lack of error bars can be readily seen by examining current
psychology journals. To estimate the recent use rates, I coded the 1998 issues of three high profile journals that span many areas of psychology-Journal of Experimental Psychology: General,
Psychological Review, and Psychological Science.2 Sixty-six percent of the articles included at least one line or table graph. Of these graphs, two thirds displayed no error bars at all. In a
historical analysis of the journal JEP:LMC, Estes (1997) found a similar dearth in the use of error bars in graphs (contrasting sharply with a recent rise in the presentation of variance information
in tables).
The second sense in which appropriate error bars are not used is that, even when some kind of error bar is presented, it is usually not an appropriate error bar for easily determining the statistical
significance between points. From the coding of the 1998 journal articles, when authors did use error bars, the most common type was standard error bars. Specifically, 22% of graphs contained
standard error bars, 2% contained standard deviation bars, 1% displayed confidence interval, and 6% did not describe which error bars were displayed. As I will shortly explain, none of these error
bars are appropriate for the task of easily judging the statistical significance of differences between means.
It is likely that these two aspects are related to one another: people may not use error bars because they do not know what is the appropriate error bar to use or because they know that all of the
common types of error bars are inappropriate. A survey given to faculty at my own psychology department found a variety of reasons for not presenting error bars in journal articles (in decreasing
order of frequency): not being required to include error bars by editors, not knowing how to add error bars in their graphing package, not knowing which type of error bar to use, believing that error
bars are irrelevant to their purposes, aesthetics, adding error bars requires too much effort, and worrying that the error bars they know how to add make the effects look less reliable than they
actually are. Moreover, the current APA publication manual (APA, 1994) provides no guidance on this issue of whether to use error bars in graphs, and if so, which ones are most appropriate. Other
style guides on graph construction provide little or no guidance on this issue as well (e.g., Cleveland, 1985; Gillan, Wickens, Hollands, & Carswell, 1998; Tufte, 1983).
What is the appropriate error bar to use? The answer to this question depends in part on the type of information being sought. Table 1 presents a summary of various information one might seek from an
error bar and how well each type of error bars conveys that information. The function of this paper is to introduce a new kind of error bar that is the best choice for one of the most common goals of
data analysis in psychology: assessing the statistical significance of pairwise comparisons between means.
Table 1. Comparison of standard deviation, boxplots, standard error, 95% confidence intervals, root mean square error, and statistical significance bars for assessing distribution information (range
and homogeneity of variance), effect size, absolute location of the population means, pairwise statistical significance of differences between means, and applicability to repeated measures designs.
│ │Distribution│Effect size│Location of true means│Differences between means │Repeated measures│
│SD │ ++ │ ++ │ + │ + │ + │
│Boxplot│ +++ │ + │ + │ + │ + │
│SE │ + │ + │ ++ │ ++ │ + │
│95% CI │ + │ + │ +++ │ ++ │ + │
│RMS │ + │ + │ + │ ++ │ ++ │
│SSB │ + │ + │ + │ +++ │ +++ │
Note. +=poor, ++=ok, +++=best
To illustrate this goal concretely, consider the following example. A psychologist runs a study with three conditions and displays the resulting condition means in a line graph.3 The readers of the
psychologist's article will want to know which condition means are statistically significantly different from one another. To determine this information, the reader would normally have to refer to
the text in which the results of an ANOVA and pairwise posthoc contrasts are described. In this case, the graph itself is in effect potentially misleading, because apparently significant differences
are not significant, or at least ambiguous. A much more desirable state of affairs would be for the reader to be able to determine more directly and easily from the graph which means are
significantly different from one another. If this were possible, then the reader would not need to navigate the clumsy text or appendices outlining the results of all pairwise contrasts.
The Existing Alternatives
While most graphs contain no information about variance, some graphs do contain error bars of some kind and the reader typically tries to assess statistical significance using these error bars. In
this section, I will step through the various existing types of error bars that can be used, demonstrating why they are inappropriate for the task of determining the statistical significance of
pairwise means comparisons. This overview will serve to enumerate concretely the desirable features of an appropriate type of error bar. Then I will introduce a new type of error bar that has all of
these desirable features.
Standard deviation bars.
Standard deviation is useful when the absolute magnitude of the within group variance is of interest (see Figure 1a). For example, theoretical models occasionally make predictions regarding variance.
As another example, when criteria are to be used to separate groups in clinical settings, the within group variance is also important to assess the reliability of the criterion. However, standard
deviations are not appropriate variance estimates for assessing statistical significance between means because they do not reflect the sample size. As the sample size increases, the variance (as
reflected in the standard deviation) will remain stable, whereas the variance in the estimate of the mean decreases. Thus, to begin to assess statistical significance using standard deviation bars,
the reader must know the ns of each point, and scale the standard deviations bars appropriately. This computation is no trivial task, especially in complex experimental designs with uneven ns.
Similar complaints could be made of boxplots and interquartile range bars. They are quite useful for examining group variance, but quite impractical for assessing statistical significance of means
Standard error bars.
In contrast to standard deviations, standard error bars (see Figure 1b) do make use of the sample size. Specifically, the standard error is equal to standard deviation divided by the square root of
n. This particular error information is highly relevant to statistical means comparisons. However, standard errors do not convey information regarding the criterion associated with an
a level. The statistical significance of a mean comparison is defined relative to a particular test statistic (e.g., t or F). Since the statistics are typically a complex function of the degrees of
freedom and an a level, a reader cannot easily estimate statistical significance with only an estimate of variance. For example, are two means significantly different when their difference is more
than 1.5, 2, 2.5, or 3 times the size of the standard error?
Second, the statistical tests are typically done with pooled estimates of variance (i.e., one estimate of mean variance across all condition means). Thus, to properly assess statistical differences,
the graphically displayed variance should also be a pooled variance estimate. Unfortunately, most graphing packages default to displaying condition specific standard error bars (which will vary in
size from condition to condition). Note: if the difference in variance across conditions is very large, then pooled variance is inappropriate both for the display and for the statistical tests.
Third, standard error bars are not the appropriate error information for within subjects designs (see Estes (1997) and Loftus and Masson (1994) for a detailed exposition of this issue). For within
subject designs, the variability within conditions (which standard error bars convey) is irrelevant. Instead it is the information about the variability in the difference between conditions that is
important. Estes (1997) recommends the use of root mean squared error. This measure is appropriate for within subjects designs and is a pooled measure of variance. However, it does not reflect a
criterion associated with an
a level threshold.
Figure 1. An example graph displaying A) standard deviation bars, B) standard error bars, C) 95% Confidence intervals, and D) .05 Tukey HSD statistical significance bars.
Confidence intervals.
In contrast to standard error bars, confidence intervals solve the first of the three problems with standard error bars (see Figure 1c). That is, confidence intervals do reflect a criterion associate
with an
a level. Specifically, the size of the confidence interval is simply the standard error multiplied by a criterion (e.g., t or F), which can be found in a statistical table using information about
degrees of freedom and a given a level. Unfortunately, the confidence intervals that authors typically use suffer from the other two problems with standard error bars. First, they reflect the
variance within each condition rather than the pooled variance that is used in the statistical tests. Second, they are also not appropriate for within subjects designs, for exactly the same reasons.
These last two problems can be addressed using modified confidence intervals that are based on pooled variance and are appropriate for within subjects designs (see Loftus and Masson (1994) for a
presentation of these modified confidence intervals). However, confidence intervals, including these modified confidence intervals, have an important additional problem: confidence intervals are not
for comparing pairs of condition means. Instead, confidence intervals are used for comparing observed means to theoretical population means (i.e., how likely is it that the observed mean could have
arisen from a population with a certain theoretical mean). Confidence intervals are not directly useful for comparisons across means. The problem is that the variance associated with the estimate of
a difference between two samples is larger by a factor of
A New Alternative: Statistical Significance Bars
The preceding discussion illuminated the problems with existing error bars and desirable features of an error bar. To that list of desirable features, I will add another important feature: a display
construct should be consistent with the usual habits of a reader (Larkin & Simon, 1987). In other words, the construct should have good visual affordances. To understand this feature in the current
context, one must think about the typical habits of an individual reading a line or bar graph. When confronted with a graph containing error bars, a likely activity is for the reader to look for
overlapping error bars. When the error bars for two means overlap, the reader will assume the means do not differ from one another. If the error bars do not overlap, the reader will assume the means
do differ from one another. Unfortunately for the reader, all existing error bars do not accurately support this inference procedure. Then what is desired of a good error bar is one that matches this
behavior. That is, for a good type of error bar, when any two bars overlap, the two means should never be statistically different from one another, and when any two bars do not overlap, the two means
should always be statistically different from one another.
Let us define a new display construct, statistical significance bars (SSB), in exactly these terms. A statistical significance bar is an error bar that displays pairwise statistical significance
between means using the visual overlap test. It turns out that such an error bar is quite easy to calculate using information readily available in an ANOVA table.
SSB also has a simple relationship to existing error bars. For between subjects designs, SSB is simply the (pooled) confidence interval divided by, or the (pooled) standard error multiplied by a
criterion value (usually a t-statistic with given degrees of freedom and
a level) and then divided by
Pooled standard errors or pooled confidence intervals are not usually given in standard statistical package outputs, and thus do not provide a practical definition. Instead, one can easily define an
SSB in terms of the output of an ANOVA table. Let MSEc be the mean squared error term associated with the contrast being graphed. Let
df[c] be the degrees of freedom associated with that error term. Let n be the number of data points contributing to each given mean being displayed.4 Then, the a priori SSB is defined as (a priori
SSB equation):
a, df[c])
t(a, df[c]) is the two-tailed t statistic criterion (i.e., a/2) with df[c] degrees of freedom. This one formula is applicable to both within and between subjects designs.
Truth in advertising: Post hoc contrasts. Since the statistical significance of differences between particular means is usually tested using post hoc contrasts, it usually more appropriate to use
SSBs that are also based on post hoc contrasts. Since we are generally interested in allowing the reader to make inferences about all pairwise comparisons between means in the graph, an appropriate
post hoc procedure is the Tukey Honestly Significant Different (HSD) test. To compute an SSB using the Tukey HSD test, the following formula is used (post hoc SSB equation):
a, k, n)
where k is the number of conditions (or number of cells in an interaction graph), n is the number of data points contributing to each given mean being displayed, MSEc is the mean squared error term
associated with the contrast being graphed, and
Q(a, k,n) is the Q statistic criterion taken from the Sudentized range table found in the appendices of many statistics books. Note that we are dividing by 2 in this equation, not by a level should
be indicated either on the graph or in the figure caption (e.g., .05 Tukey HSD SSB).
Tests other than Tukey's HSD test may be used. For example, for binary data, a corresponding SSB may be constructed from the logistic regression. In general, whatever test the author would use to
test the pairwise significance between points can be applied to create an SSB. The goal of including SSBs is to have the figure match the inferential statistics as closely as possible.
A simple example. Table 2 presents the results of an ANOVA applied to sample data taken from Loftus and Masson (1994). The dependent measure is the mean number of words recalled (out of 20). The
independent measure, manipulated within participants, is the exposure time to each word: 1, 2, or 3 seconds per word. The means for this data set are displayed in Figure 1. To compute the SSBs, we
first determine the MSE for the error term. In this case, the error term is the Block by Participant interaction, and so the MSE is 0.615. The values of n and k are 10 and 3. The Q statistic for a
a level, k=3, df=10 is 3.88. Substituting these values into the posthoc equation produces an SSB of length 0.48. The graph using this SSB is displayed in Figure 1d. The scales are held constant
across the figures 1a through 1d. This scale is unusually large for Figure 1d, but was selected to accommodate the large standard deviation bars. A replotting of Figure 1d is presented in Figure 2
make the group differences clearer.5 Templates demonstrating how this SSB was computed and how to create a graph with an SSB using Microsoft Excel(c) can be found by visiting the web address: http://
Table 2. ANOVA table used as an example of how to calculate SSBs.
│ Term │ Df │ Sum of Squares │ Mean Square │ F-value │ p-value │
│ Participant │ 9 │ 942.533 │ 104. 726 │ │ │
│ Block │ 2 │ 52.267 │ 26.133 │ 42.506 │ <.0001 │
│ Block X Participant │ 18 │ 11.067 │ 0.615 │ │ │
Figure 2. The example graph from Figure 1D (with .05 Tukey HSD statistical significance bars) with the scales expanded.
In this case, the SSBs are much smaller than the SE bars. The relationship between SSBs and SE bars will vary from situation to situation, which is exactly why one cannot rely on SE bars to determine
statistical significance. In a between subjects design, there is a fixed relationship between the size of the SE and the size of the .05 a priori SSB-assuming equal variance, the SSB is 1.39 times
the size of the SE (1.96 divided by
Interactions. SSBs are easily applied to graphs displaying interactions among variables. The same formulas described above are used, except that the value for k in the equation is the number of cells
in the interaction. For example, in a 2x3 interaction graph, there are 6 cell means, and so k is 6. MSEc is the mean squared error term used in testing the interaction. This is true for both within
and between subjects designs.
Occasionally, authors prefer not to use error bars for interaction line graphs because the error bars from one line obscure the data points from the other line, creating a cluttered display. In these
cases, interaction bar graphs with SSBs may be used instead.
As a third issue, occasionally the difference in slope rather than the pairwise means comparisons are of interest in an interaction graph. In this case, authors should plot mean differences between
conditions, with SSBs derived from the error term for the interaction effect. This difference score plot with SSBs will allow the reader to quickly determine whether the effect of one variable is the
statistically different across conditions of the other variable.
General Discussion
On the use of statistical significance cutoffs.
While the shorter term, significance bars, was considered, it was rejected because there is an important difference between statistical significance (i.e., how likely is the observed difference due
to chance) and semantic significance (i.e., how large or how important is the effect). Semantic significance is better displayed by displaying effect sizes (e.g., the difference between condition
means divided by a measure of within group variance) or simply by displaying within group variance (e.g., with standard deviation bars or boxplots).
As Table 1 indicates, SSBs are not appropriate for displaying some kinds of variance information. In psychology, our dependent measures are often continuous, our independent measures are often
discrete values, and our goal is determine the pairwise statistical significance of differences between means. For this situation and goal, SSBs are the most desirable error bar. For other goals,
other error bars should be used. For example, if the author wishes to convey information about the distribution of a variable, either within or across groups, then boxplots should be used. If effect
size is the more desirable information, standard deviation bars are more helpful.
As a related issue, there has been recent debate whether significance testing should be banned entirely because of the many logical errors that it produces (Cohen, 1994; Loftus, 1993). It might be
argued that the use of SSBs would further support poor use of reasoning about
a levels. However, there is nothing inherently illogical in the appropriate use of significance testing (Dixon, 1998). Moreover, there is nothing in particular about SSBs that would support less
logical thinking about significance testing. SSBs merely allow one to visual determine whether a given a level criterion has been met, which is what the text that accompanies figures currently
indicates. If anything, using SSBs may improve thinking about significance testing. For example, in the text it is easy to make sharp but logically questionable distinctions between minor differences
in p-values (p=.051 as non-significant and p=.049 as significant). Using SSBs in a graph makes it more difficult to make such sharp distinctions. Instead three qualitatively different situations are
possible: 1) error bars clearly overlap, indicating a p-value much above .05; 2) error bars clearly do not overlap, indicating a p-value much below .05; and 3) errors bars are close to overlapping
indicating a p-value of approximately .05. Thus, readers are discouraged from making sharp distinctions between p=.051 and p=.049.
The current proposal to use SSBs does not include removing in-text inferential statistics. Inferential statistics presented in text will continue to provide important information. For example, by
including information about degrees of freedom, the authors clarify the kinds of aggregation that was used in the statistics. Instead, the current proposal is to augment the in-text inferential
statistics by having the line and bar graphs clearly indicate the results of the inferential statistics.
Alternatives to SSBs.
An alternative scheme to using SSBs is to include markings (symbols or coloring) indicating which means are statistically different from other means. However, these markings are not always easily
interpreted. For example, one can have three groups A, B, C ordered A<B<C. Here, A and C can be significantly different from one another, and yet there are no statistically significant differences
between A and B or between B and C. How should the mean for group B be labeled? With more than three points, more complicated patterns of statistical significance are possible.
In considering various kinds of error bars, Estes (1997) argues against the use of confidence intervals and other constructs which incorporate statistical criteria tied to particular
a levels. Specifically, Estes raises the issue that lower a levels give wider bands, seemingly giving less confidence in the location of a point. As the better alternative, Estes argues that authors
should plot the appropriate pooled measure of variance (e.g., RMS) on the graphs, allowing the reader to multiply the bars mentally to produce the appropriate significance threshold test. However,
the mental multiplication by non-integers is not a simple perceptual task (e.g., by 1.39). Also, most readers do not have the appropriate post hoc criteria memorized.6 Finally, each paper implicitly
adopts a given a level for determining what is statistically significant and what is not. That same a level can be used in plotting SSBs. Thus, there will be no confusion about the precision of the
data from varying a levels because the a levels will not typically vary. Since most psychology papers use .05, it will probably be best to use this convention in graphing as well. Alternatively, one
might use two tiered error bars, which display both .05 and .01 error bars.
At the beginning of this paper, I listed several commonly mentioned reasons for not including error bars in graphs. The majority of these reasons have been addressed. Which error bar to use is now
clarified. How to compute this error bar has also been clarified. By now using appropriate error bars, statistically significant differences will continue to look significant. Excuses relating to the
abilities of the graphing package (e.g., it being difficult or impossible to add error bars) should never have been a good excuse. This excuse is equivalent to not reporting accurate reaction times
because of not knowing how to use data collection software (Gillan et al., 1998). Currently there are many readily available and inexpensive graphing packages that easily allow the addition of
use-specified error bars to line and bar graphs. Examples of how to calculate SSBs, including a Studentized Range table, and how to construct the appropriate graph in Microsoft Excel(c) can be found
on the world wide web at: hfac.gmu.edu/SSB. With these other excuses removed, perhaps editors of journals will also begin to require the inclusion of error bars in graphs.
1. By error bars, I mean all visual display constructs that are used to convey variance information, including standard deviation bars, standard error bars, confidence intervals, interquartile range
bars, and boxplots.
2. This included all issues of 1998 for each journal, producing 26 Psychological Review articles, 17 JEP:G articles, and 79 Psychological Science articles.
3. There is some debate regarding when to use line graphs versus bar graphs relating to issues of avoid unwarranted interpolation, ease of reading, and aesthetics. Since the same error bars can be
used in either case, this debate will not be discussed further here.
4.In the case of uneven ns, one can use the harmonic mean, k/(1/n1 + 1/n2 + ... + 1/nk), as long as the ratio between the largest n and the smallest n is no more than three to one (Rankin, 1974).
5. Of course, in some cases it is important to display the full range of the scale rather than magnify the effect.
6. For other arguments against the use of such error bars, see (Cleveland, 1985).
American Psychological Association (1994). Publication manual of the American Psychological Association. (4th ed.). Washington, DC.
Cleveland, W. S. (1985). The elements of graphing data. Monterey, CA: Wadsworth.
Cohen, J. (1994). The Earth is round (p<.05). American Psychologist, 49, 997-1003.
Dixon, P. (1998). Why scientists value p values. Psychonomic Bulletin & Review, 5(3), 390-396.
Estes, W. K. (1997). On the communication of information by displays of standard errors and confidence intervals. Psychonomic Bulletin & Review, 4(3), 330-341.
Gillan, D. J., Wickens, C. D., Hollands, J. G., & Carswell, C. M. (1998). Guidelines for presenting quantitative data in HFES publications. Human Factors, 40(1), 28-41.
Kosslyn, S. M. (1989). Understanding charts and graphs. Applied Cognitive Psychology, 3, 185-225.
Larkin, J. H., & Simon, H. A. (1987). Why a diagram is (sometimes) worth 10,000 words. Cognitive Science, 4, 317-345.
Loftus, G. R. (1993). A picture is worth a thousand p values: On the irrelevance of hypothesis testing in the microcomputer age. Behavioral Research Methods, Instruments, & Computers, 25, 250-256.
Loftus, G. R., & Masson, M. E. J. (1994). Using confidence intervals in within-subject designs. Psychological Bulletin & Review, 1(4), 476-490.
Rankin, N. O. (1974). The harmonic mean method for one-way and two-way analyses of variance. Biometrika, 61, 117-129.
Shah, P., & Carpenter, P. A. (1995). Conceptual limitations in comprehending line graphs. Journal of Experimental Psychology: General, 124, 43-61.
Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.
|
{"url":"http://www.lrdc.pitt.edu/Schunn/SSB/SSBpaper.html","timestamp":"2014-04-21T04:45:10Z","content_type":null,"content_length":"41429","record_id":"<urn:uuid:27ba5548-75a8-4ad7-ac65-3be52ec5939a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Annandale, VA Prealgebra Tutor
Find an Annandale, VA Prealgebra Tutor
...I can help you or your student be more confident and successful. As a life-long learner I am a PhD candidate at George Mason University in math education and have been working with the most
up-to-date methods for teaching mathematics to all kinds of learners. I believe any student can be successful with the right individualized plan for learning and guidance.
19 Subjects: including prealgebra, calculus, statistics, algebra 2
...Good organizational skills such as knowing how to sort and retrieve information is essential to excel in academia especially when it comes to writing scientific reports. One of my favorite
lessons to give to students at the beginning of each semester is how scientists use common themes and uniqu...
19 Subjects: including prealgebra, writing, biology, GRE
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including prealgebra, calculus, geometry, physics
...I have experience teaching in grades 1-8 and am fully certified as a K-6 instructor. Working as an English Language Learner kindergarten teacher, I frequently used phonemic awareness as a goal
in the lessons I taught. I teach letters, the sounds they make, and how to chunk them and blend them into comprehensible words.
19 Subjects: including prealgebra, reading, Spanish, English
...My dissertation topic concerns the effects of teaching about religion on religious tolerance among upper-elementary students. While living in Massachusetts, I worked in a summer position as an
enrichment teacher to prepare students for the ISEE (most of these students wanted to attend Boston Lat...
35 Subjects: including prealgebra, reading, writing, English
Related Annandale, VA Tutors
Annandale, VA Accounting Tutors
Annandale, VA ACT Tutors
Annandale, VA Algebra Tutors
Annandale, VA Algebra 2 Tutors
Annandale, VA Calculus Tutors
Annandale, VA Geometry Tutors
Annandale, VA Math Tutors
Annandale, VA Prealgebra Tutors
Annandale, VA Precalculus Tutors
Annandale, VA SAT Tutors
Annandale, VA SAT Math Tutors
Annandale, VA Science Tutors
Annandale, VA Statistics Tutors
Annandale, VA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Alexandria, VA prealgebra Tutors
Arlington, VA prealgebra Tutors
Burke, VA prealgebra Tutors
Centreville, VA prealgebra Tutors
Fairfax, VA prealgebra Tutors
Falls Church prealgebra Tutors
Fort Washington, MD prealgebra Tutors
Herndon, VA prealgebra Tutors
Mc Lean, VA prealgebra Tutors
Oakton prealgebra Tutors
Reston prealgebra Tutors
Springfield, VA prealgebra Tutors
Takoma Park prealgebra Tutors
Vienna, VA prealgebra Tutors
Washington, DC prealgebra Tutors
|
{"url":"http://www.purplemath.com/Annandale_VA_prealgebra_tutors.php","timestamp":"2014-04-16T10:29:53Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:5ac2f49a-465a-477f-9433-78eb33a2b851>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Readings, Notes and Slides
Note that we will not have slides from all the lectures. Some lectures will be given on the board, and some slides will be hand done.
Web page
• Introduction to Data Compression I (53 pages, ps.gz or pdf).
Web page
Error Correcting Codes
Web page
Linear and Integer Programming
Web page
These readings will be/have been given out in class. These are some other potentially usefull readings
• Linear and Integer Programming 1, (ps, ps 4up, ppt )
• Linear and Integer Programming 2, (These were overheads and not available electronically)
• Linear and Integer Programming 3, (ps, ps 4up, ppt )
• Linear and Integer Programming 4, (ps, ps 4up, ppt )
Computational Biology and Sequence Matching
Web page
Scribe Notes
Indexing and Searching
Web page
Handed out during class, and available outside Keith Ledonne's office (Wean 7116).
Scribe Notes
Back to the Algorithms in the Real World page (V. 2001). Guy Blelloch, guyb@cs.cmu.edu.
|
{"url":"http://www.cs.cmu.edu/afs/cs/project/pscico-guyb/realworld/www/slides01.html","timestamp":"2014-04-20T09:50:31Z","content_type":null,"content_length":"7494","record_id":"<urn:uuid:18dfb3da-c6f2-482a-8415-3dfe59476667>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
osmotic pressure of a polymer
Goal: The molecular weight of a polymer is determined by analyzing osmotic pressure data.
Prerequisites: An introductory knowledge of ideal versus non-ideal solution behavior and colligative properties.
Resources you will need: This exercise should be carried out within a software environment that can generate a best-fit line for an x-y data set. The software should also report uncertainties in the
determined slope and y-intercept. You will also be graphing the data along with the fitted function.
© Copyright 2008 by the Division of Chemical Education, Inc., American Chemical Society. All rights reserved. For classroom use by teachers, one copy per student in the class may be made free of
charge. Write to JCE Online, jceonline@chem.wisc.edu, for permission to place a document, free of charge, on a class Intranet.
Suppose you are given an unknown biomolecular substance, such as a protein, RNA strand, or polysaccharide, and you are then asked to determine the molecular weight of the substance. What experimental
methods are available for determining the molecular weight of a large molecule (or perhaps a synthetic polymer or biopolymer)?
One of the more precise techniques involves the measurement of osmotic pressure. The instrument for carrying out these measurements is called an osmometer. The device consists of a semipermeable
membrane that separates two solution compartments. The semipermeable membrane is made of a material that permeates the solvent (not the solute). If the membrane separates pure solvent from a
solution, an osmotic pressure exists across the membrane which, in turn, drives the flow of solvent through the membrane from the pure solvent compartment to the solution compartment. The flow of
solvent that occurs due to a concentration gradient across the membrane is called osmosis.
Osmotic pressure is a colligative property, which means that it is proportional to the concentration of solute. The van’t Hoff equation is often presented in introductory chemistry for calculating
osmotic pressure (Π) from the moles of solute (n[solute]) that occupy a given volume (V) and the absolute temperature (T) of the solution;
Note the similarity between equation (1) and the ideal gas equation (P=nRT/V). But just like the ideal gas equation, the van’t Hoff equation is only valid for an ideal system. In this case, equation
(1) is only valid for an ideal solution, which is a hypothetical solution in which the solute-solvent, solvent-solvent, and solute-solute interactions are all equivalent. Since all non-volatile,
non-electrolytic solutions approach ideal behavior in the dilute limit, equation (1) is actually a limiting law, and should be written in the form
When using an osmometer, it is more convenient to express concentration in terms of the 'grams' of solute per liter,
whereby we can make the following substitution in equation (2):
where M is the molecular weight of the solute. In this fashion, and with minor rearrangement, equation (2) can be written as
According to equation (3), the molecular weight of a solute can be obtained by plotting osmotic pressure divided by c versus concentration and extrapolating the data back to c = 0.
Since equation (3) is only exact in the dilute limit, we can recognize this relationship as the first term in a more general power series expansion in c,
where A[2] and A[3] are called the second and third virial coefficients, respectively. These coefficients are empirically determined constants for a given solute-solvent system, and also depend on
temperature. According to statistical mechanical solution theory, A[2] represents the interaction of a single solute particle with the solvent, and higher order virial coefficients are associated
with correspondingly larger number solute particle cluster interactions with the solvent.
The value of A[2] is of practical importance to those who are interested in working with a particular solute-solvent system. A negative value of A[2] is indicative of a ‘good solvent’ (i.e. the
solute will be highly soluble in the solvent due to favorable solute-solvent intermolecular interactions). A positive value of A[2] indicates the solute is insoluble in the solvent. Since A[2] is
temperature dependent, there will exist a special temperature for each solute-solvent system where A[2]= 0. This temperature is called the Flory Θ-temperature, and represents the theoretical
temperature at which an infinite molecular weight solute just precipitates from solution. When a solvent has a Flory Θ-temperature that lies somewhere between ambient to slightly elevated temperature
conditions, then that solvent is often referred to as a 'Θ-solvent' for that solute. Among other things, saying that a solvent is a 'Θ-solvent' implies that it is a great solvent for recrystallizing
the solute (for purification purposes).
In this exercise, you will be given the osmotic pressure of a polymer solution, cellulose tricaproate in dimethylformamide, over a range of concentrations and at three different temperatures. You
will determine the molecular weight (M) of the polymer, the second virial coefficient (A[2]) at each temperature, and the Flory Θ-temperature for this system.
Experimental Data:
Cellulose is a natural polymer that makes up nearly 80% of all the biomass on earth (cotton is nearly 100% cellulose). Structurally, cellulose is a linear chain of glucose monomers, linked together
at the 1,4 positions of the glucose ring. These glucose chains can be hundreds of units long, and the corresponding molecular weights of these polymers can be several hundred thousand grams per mole.
The following table contains osmotic pressure data at three different temperatures for a cellulose derivative called 'cellulose tricaproate' dissolved in dimethylformamide at a number of
concentrations. The data was obtained from Krigbaum, W.R. and Sperling, L.H. Journal of Physical Chemistry, 64, 99 (1960).
│ Concentration (c) - (g/L) │ Osmotic pressure at 30.0°C - (atm) │ Osmotic pressure at 41.6°C - (atm) │ Osmotic pressure at 53.5°C - (atm) │
│ 2.7 │ 0.00046 │ 0.00052 │ 0.00061 │
│ 12.5 │ 0.00210 │ 0.00248 │ 0.00293 │
│ 17.0 │ 0.00265 │ 0.00343 │ 0.00384 │
│ 22.0 │ 0.00323 │ 0.00442 │ 0.00543 │
1. Calculate Π/c for each of the three data sets. Using an appropriate software environment, plot Π/c versus c and determine the best-fit line for each data set. By fitting our data to a line, we are
assuming that terms higher than the second virial coefficient in equation (4) are negligible. Determine a value for the molecular weight (M) of the polymer and A[2] at each temperature (NOTE: report
the second virial coefficent in units of (mol·ml)/g^2, which are the units most commonly utilized in practice).
2. Of course, the molecular weight (M) should not depend upon temperature. Does your determined values of M vary significantly over the three temperatures investigated here (When answering, you
should consider the uncertainty in the y-intercept of your best-fit lines).
3. Based upon the values of A[2] that you obtained, is dimethylformamide a 'better' solvent for cellulose tricaproate at lower or higher temperatures? Plot your three A[2] values against temperature
and estimate the Flory Θ-temperature for this system.
|
{"url":"http://www2.stetson.edu/~wgrubbs/datadriven/polymermw/polymermwwtg.html","timestamp":"2014-04-16T21:55:43Z","content_type":null,"content_length":"20626","record_id":"<urn:uuid:478eeb84-2909-4d47-9612-97fd9861b4a7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factoring a quadratic using the perfect square method - Watch video (Algebra
In this part of lesson you'll learn how to factor a quadratic using the perfect square method. In such cases, not only can the quadratic can be factored into two expressions, but the expressions
are the same.If we try to explain it in text, here is the general rule -- if you have a quadratic equation in which first and last term are both perfect squares and middle term is two times the
square root of the first and last terms multiplied, it simplifies the quadratic to a binomial product or just one binomial raised to the second power. Reading this explanation in text is confusing
to many of you -- just click on the video of our instructor explaining it above, and you'll understand the concept much more easily. (More text below video...)
(Continued from above) Note that perfect square trinomials are often expressions of one of the following forms:
• (x^2 + 2ax + a^2), which is the same as (x + a)^2
• (x^2 - 2ax + a^2), which is the same as (x - a)^2
With x^2 + 6x + 9, or any perfect square of form ax^2 + bx + c, look at the 'a' term and the 'c' term to see if they are all squares. 1 and 9 both check out. Then set up an expression like this (x
+ 3)(x + 3). We need to see that the 'b' term comes out as 6, and therefore we should multiply again to check. The way we simplified the quadratic to (x + 3)^2 is that the first term is the square
route of 'a' and the later term is the square route of 'c'. A square route of a number is the exact number that can be squared to give the original. For example: the square route of 4, is 2 because
2 * 2 = 4 and 2 =2. Another type of perfect square is where 'a' (the x^2 coefficient) is not 1. For example: 4x^2 + 4x + 1 , the 'a' term is a square and 'c' is a square so this is a perfect square
quadratic. There is one more thing to check, though. FOIL the two expressions and find if the 'b' term comes out right. So the expression is (2x + 1)^2.
Winpossible's online math courses and tutorials have gained rapidly popularity since their launch in 2008. Over 100,000 students have benefited from Winpossible's courses... these courses in
conjunction with free unlimited homework help serve as a very effective math-tutor for our students.
- All of the Winpossible math tutorials have been designed by top-notch instructors and offer a comprehensive and rigorous math review of that topic.
- We guarantee that any student who studies with Winpossible, will get a firm grasp of the associated problem-solving techniques. Each course has our instructors providing step-by-step solutions
to a wide variety of problems, completely demystifying the problem-solving process!
- Winpossible courses have been used by students for help with homework and by homeschoolers.
- Several teachers use Winpossible courses at schools as a supplement for in-class instruction. They also use our course structure to develop course worksheets.
|
{"url":"http://winpossible.com/lessons/Algebra_I_Factoring_a_quadratic_using_the_perfect_square_method.html","timestamp":"2014-04-19T04:22:48Z","content_type":null,"content_length":"57923","record_id":"<urn:uuid:1c99edbf-cab2-4259-a639-63688a021218>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Annu. Rev. Astron. Astrophys. 1998. 36: 17-55
Copyright © 1998 by . All rights reserved
5. H[0] FROM PHYSICAL CONSIDERATIONS
To first order, there is a natural SN Ia peak luminosity - the instantaneous radioactivity luminosity, i.e. the rate at which energy is released by ^56Ni and ^56Co decay at the time of maximum light
(Arnett 1982, Arnett et al 1985, Branch 1992). With certain simplifying assumptions, the peak luminosity is predicted to be identical to the instantaneous radioactivity luminosity. The extent to
which they differ, for a hydrodynamical explosion model, can be determined only by means of detailed light curve calculations that take into account the dependence of the opacity on the composition
and the physical conditions. The state of the art is represented by the calculations of Höflich & Khokhlov (1996). The calculated peak luminosity of the models turns out to be proportional to M[Ni]
within uncertainties, and for models that can be considered to be in the running as representations of normal SNe Ia (carbon ignitors that take longer than 15 days to reach maximum light in the V
band), the characteristic ratio of the peak luminosity to the radioactivity luminosity is about 1.2 (Branch et al 1997). The physical reason that the ratio exceeds unity in such models was explained
by Khokhlov et al (1993) in terms of the dependence of the opacity on the temperature, which is falling around the time of maximum light.
Höflich & Khokhlov's (1996) light-curve calculations can be used to estimate H[0] in various ways. Höflich & Khokhlov (1996) themselves compared the observed light curves of 26 SNe Ia (9 in galaxies
having radial velocities greater than 3000 km s^-1) to their calculated light curves in two or more broadbands, to determine the acceptable model(s), the extinction, and the distance for each event.
From the distances and the parent-galaxy radial velocities, they obtained H[0] = 67 ± 9. Like the empirical MLCS method, this approach has the attractive feature of deriving individual extinctions.
But identifying the best model(s) for a SN while simultaneously extracting its extinction and distance, all from the shapes of its light curves, is a tall order. This requires not only accurate
calculations but also accurate light curves, and the photometry of some of the SNe Ia that were used by Höflich & Khokhlov (1996) has since been revised (Patat et al 1997). And because Höflich &
Khokhlov's (1996) models included many more underluminous than overluminous SNe (the former being of interest in connection with weak SN Ia like SN 1991bg) while the formal light-curve fitting
technique has a finite "model resolution," a bias towards deriving low luminosities and short distances for the observed SNe Ia is possible.
There are less ambitious but perhaps safer ways to use Höflich & Khokhlov's (1996) models to estimate H[0] that involve an appeal to the homogeneity of normal SNe Ia and rely only on the epoch of
maximum light when both the models and the data are at their best. The 10 Chandrasekhar-mass models having 0.49 M[Ni] M[Ni] within the acceptable range for normal SNe Ia - have a mean M[Ni] = 0.58 M
[] and a mean M[V] = -19.44. Alternatively, the five models (W7, N32, M35, M36, PDD3) that Höflich & Khokhlov (1996) found to be most often acceptable for observed SNe Ia have a mean M[Ni] = 0.58 and
a mean M[V] = -19.50. Using M[V] = -19.45 ± 0.2 in Equation 1 gives H[0] = 56 ± 5, neglecting extinction of the non-red Hubble-flow SNe Ia.
Höflich & Khokhlov's (1996) light-curve calculations were used in another way by van den Bergh (1995). Noting that the maximum light M[V] and B - V values of the models obey a relation that mimics
that which would be produced by extinction (Figure 15), he matched the model relation between M[V] and B - V to the relation of the observed Hubble-flow SN Ia and obtained values of H[0] in the range
of 55-60, depending on how the models were weighted. If helium-ignitor models had been excluded, the resulting H[0] would have been a bit lower because the models are underluminous for their B - V
colors. This procedure has the virtue of needing no estimates of extinction. The same result can be seen from figure 3 of Höflich et al (1996), who plotted M[V] versus B - V for Höflich & Khokhlov
(1996) models and for the Calán-Tololo SNe Ia. For the assumed value of H[0] = 65, the relations between M[V] and B - V for the models and the observed SNe Ia are offset, and H[0] would need to be
lowered to about 55 to bring them into agreement.
Figure 15. M[V] is plotted against B - V for the models of Höflich & Khokhlov (1996). Helium-ignitor models are indicated by vertical lines. The arrow has the conventional extinction slope of 3.1.
Adapted from van den Bergh (1996).
In their Figure 2, Höflich et al (1996) also plot M[V] versus a V-band light-curve decline parameter that is analogous to m[15], for the models and for the Calán-Tololo SNe Ia. Again with H[0] = 65,
the models and the observed SNe I are offset, with H[0] needing to be lowered to about 55 to bring them into agreement.
Distances to SNe Ia also can be derived by fitting detailed NLTE synthetic spectra to observed spectra. Nugent et al (1995b) used the fact that the peak luminosities inferred from
radioactivity-powered light curves and from spectrum fitting depend on the rise time in opposite ways, in order to simultaneously derive the characteristic rise time and luminosity of normal SNe Ia
and obtained H[0] = 60^+14[-11]. If SN Ia atmospheres were not powered by a time-dependent energy source, the spectrum-fitting technique could be independent of hydrodynamical models. The procedure
would be to look for a model atmosphere that accounts for the observed spectra without worrying about how that atmosphere was produced, estimate (or derive by fitting two or more phases) the time
since explosion, and obtain the luminosity of the model. But, owing to the time-dependent nature of the deposition of radioactivity energy, SNe Ia "remember" their history (Eastman 1997, Nugent et al
1997, Pinto 1997). Light-curve and spectrum calculations really are coupled, and more elaborate physical modeling needs to be done.
|
{"url":"http://ned.ipac.caltech.edu/level5/March03/Branch/Branch5.html","timestamp":"2014-04-18T14:11:31Z","content_type":null,"content_length":"10401","record_id":"<urn:uuid:8051bc79-75b5-4e2d-8c7d-424c8f5bbfff>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
For other uses, see (disambiguation).
International System of Units
, (symbol:
) (for the
Système International d'Unités
), is the most widely used
system of units
. It is used for everyday commerce in virtually every country of the world except the United States. SI was selected from the existing
system of units (MKS), with the addition of extra units, rather than the older
Centimetre-Gram-Second system of units (CGS)
. SI is sometimes referred to as the
metric system
in the United States
, which has not widely adopted it, and the
, where conversion is incomplete).
There are seven base units and several derived units, together with a set of prefixeses. Non-SI units can be converted to SI units (or vice versa) according to the conversion of units.
The units of the SI system are decided by international conferences organised by the Bureau International des Poids et Mesures (International Office of Weights and Measures). The SI system was first
given its name in 1960, and last added to in 1971.
SI is built on seven SI base units, such as the kilogram, metre and second. These are used to define various SI derived units.
SI also defines a number of SI prefixes to be used with the units: these combine with any unit name to give subdivisions and multiples. For example, the prefix kilo denotes a multiple of a thousand,
so the kilometre is 1 000 metres, the kilogram 1 000 grams, and so on. Note that a millionth of a kilogram is a milligram, not a microkilogram.
SI writing style
• Symbols are written in lower case except for in symbols where the unit is the same as the name of a person, or derived from the name of a person. This means that the symbol for the SI unit for
pressure, named after Blaise Pascal, is Pa, whereas the unit itself is written pascal. The official SI brochure lists the symbol for the litre as an allowed exception to the capitalization rules:
either capital or lowercase L is acceptable.
• Symbols are written in singular e.g 25 kg (not "25 kgs")
• It is preferable to keep the symbol in upright roman type (for example, kg for kilograms, m for metres), so as to differentiate from mathematical and physical variables (for example, m for mass,
l for length).
• A space is left between the numbers and the symbols: 2.21 kg, 7.3÷10^2 m^2
• SI uses spaces to separate decimal digits in sets of three. e.g. 1 000 000 or 342 142 (in contrast to the commas or dots used in other systems e.g. 1,000,000 or 1.000.000).
• SI used only a comma as the separator for decimal fractions until 1997. The number "twenty four and fifty one hundredths" would be written as "24,51". In 1997 the CIPM decided that the British
full stop (the "dot on the line", or period) would be the decimal separator in text whose main language is English ("24.51"); the comma remains the decimal separator in all other languages.
• The symbol (or prefix) can be used in place of the decimal seperator. 2.3k would be 2k3, 4.7A (Amps) would be 4A7, and 0.0047F would be 4m7F.
The system can legally be used in every country in the world, and in many countries its use is obligatory. Those countries that still give official recognition to non-SI units (e.g. the US and UK)
define them in terms of SI units; for example, the inch is defined to be exactly 0.0254 metres. It was adopted by the 11th General Conference on Weights and Measures (CGPM) in 1960. (See weights and
measures for a history of the development of units of measurement.)
Base Units
The following are the fundamental units from which all others are derived, they are dimensionally independent. The definitions stated below are widely accepted.
│ Name │ Unit │ Measure Of │ Definition │
│ │ Symbol │ │ │
│ metre │ m │ Length │ The unit of length is equal to the length of the path travelled by light in a vacuum during the time interval of 1/299 792 458 of a second │
│ kilogram │ kg │ Mass │ The unit of mass is equal to the mass of the international prototype kilogram (a platinum-iridium cylinder) kept at the Bureau International des Poids et │
│ │ │ │ Mesures (BIPM), Sèvres, Paris. │
│ second │ s │ Time │ The unit of time is the duration of exactly 9 192 931 770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground │
│ │ │ │ state of caesium-133 atom. │
│ ampere │ A │ Electrical │ The unit of electrical current is the constant current which, if maintained in two straight parallel conductors, of infinite length and negligible │
│ │ │ Current │ cross-section, placed 1 metre apart in a vacuum, would produce a force between these conductors equal to 2×10 ^−7 newton per metre of length. │
│ kelvin │ K │ Absolute │ The unit of thermodynamic temperature (or absolute temperature) is the fraction 1/273.16 (exactly) of the thermodynamic temperature at the triple point of │
│ │ │ Temperature │ water. │
│ mole │ mol │ Amount of │ The unit of amount of substance is the amount of substance which contains as many elementary entities as there are atoms in 0.012 kilogram of pure carbon-12. │
│ │ │ substance │ [elementary entities may be atoms, molecules, ions, electrons, or particles]. │
│ candela │ cd │ Luminous │ monochromatic radiation of frequency 540×10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. │
│ │ │ intensity │ │
Dimensionless derived units
The following SI units are derived from the base units and are dimensionless.
│ Name │ Unit │ Measure Of │ Definition │
│ │ Symbol │ │ │
│ radian │ rad │ Angle │ The unit of angle is the angle subtended at the centre of a circle by an arc of the circumference equal in length to the radius of the circle. There are │
│ │ │ │ radians in a circle. │
│ steradian │ sr │ Solid │ The unit of solid angle is the solid angle subtended at the centre of a sphere of radius r by a portion of the surface of the sphere having an area r^2. There │
│ │ │ Angle │ are steradians in a sphere. │
Derived units with special names
Base units can be put together to derive units of measurement for other quantities. Some have been given names.
│ Name │ Unit Symbol │ Measure of │ Expressed in base units │
│ hertz │ Hz │ Frequency │ s^-1 │
│ newton │ N │ Force │ kg m/s ^2 │
│ joule │ J │ Energy │ N m = kg m^2/s^2 │
│ watt │ W │ Power │ J/s = kg m^2/s^3 │
│ pascal │ Pa │ Pressure │ N/m^2 = kg/m s^2 │
│ lumen │ lm │ Luminous flux │ cd sr │
│ lux │ lx │ Illuminance │ cd sr/m^2 │
│ coulomb │ C │ Electric Charge │ A s │
│ volt │ V │ Electric Potential Difference │ J/C = kg m^2 A^-1 s^-3 │
│ ohm │ Ω │ Electric resistance │ V/A = kg m^2 A^-2 s^-3 │
│ farad │ F │ Electric capacitance │ A^2 s^4 kg^-1 m^-2 = ^-1 s │
│ Weber (Wb)>weber │ Wb │ Magnetic flux │ kg m^2/s^2 A │
│ tesla │ T │ Magnetic flux density │ Wb/m^2 = kg/s^2 A │
│ henry (inductance)>henry │ H │ Inductance │ kg m^2 A^-2 s^-2 = s │
│ Siemens_(unit)>siemens │ S │ Electric conductance │ ^-1 = kg^-1 m^-2 A^2 s^3 │
│ becquerel │ Bq │ Radioactivity (decays per unit time) │ s^-1 │
│ gray (unit)>gray │ Gy │ Absorbed dose (of ionising radiation) │ J/kg │
│ sievert │ Sv │ Dose equivalent (of ionising radiation) │ J/kg │
The unit of volume litre, abbreviated L or l and being equal to 0.001 m^3, is not an SI unit but is "accepted for use with the International System."
Spelling variations
Several nations, notably the United States, typically use the spellings 'meter' and 'liter' instead of 'metre' and 'litre', in keeping with standard American English spelling (for example, Americans
also use 'center' rather than 'centre'; see also American and British English differences). In addition, the official US spelling for the SI prefix 'deca' is 'deka' (again, a variation not recognized
by the BIPM).
The US government has approved these spellings for official use, but the BIPM only recognizes the British English spellings as official names for the units. In scientific contexts only the
abbreviations are used; since these are universally the same, the differences do not arise in practice in scientific use.
The unit 'gram' is also sometimes spelled 'gramme' in English-speaking countries, though that is an older spelling and is falling out of use.
Related topics
• Other measurement systems:
• UTC (Coordinated Universal Time)
• Binary Prefixes - used to quantify large amounts of computer data
External links
Further reading
• I. Mills, Tomislav Cvitas, Klaus Homann, Nikola Kallay, IUPAC: Quantities, Units and Symbols in Physical Chemistry, 2nd ed., Blackwell Science Inc 1993, ISBN 0632035838.
|
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/SI","timestamp":"2014-04-18T10:57:45Z","content_type":null,"content_length":"21551","record_id":"<urn:uuid:4b950dcb-ddf4-4d3f-acd7-61893dc3b1c6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Why is it that the mathematical world can contain an infinity of
infinities, but the physical world can't contain one of them?
Replies: 2 Last Post: Aug 4, 2013 2:34 PM
Messages: [ Previous | Next ]
Re: Why is it that the mathematical world can contain an infinity of
infinities, but the physical world can't contain one of them?
Posted: Aug 4, 2013 2:34 PM
On Sunday, August 4, 2013 8:42:32 AM UTC-5, Pfs...@aol.com wrote:
> On Sat, 3 Aug 2013 12:40:22 -0700 (PDT), donstockbauer@hotmail.com
> wrote:
> >Only de Shadow knows.
> That's not parfticularly unusual?
> Is there an "idea" floating arouund in the physical world?
> Or a "god" or a "heaven" or etc etc
As memes. That's it. Infinity is just a meme. In Luling, Texas there's a restaurant named "Meme."
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2584754&messageID=9187085","timestamp":"2014-04-17T11:19:06Z","content_type":null,"content_length":"19293","record_id":"<urn:uuid:164d4b9c-f12d-434f-8dc6-8912bd9e0a73>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex number
March 1st 2010, 03:00 AM #1
Dec 2008
Complex number
Hi, I have a cubic equation:
$z^3 + az^2 + bz + c = 0$
where a,b and c are real numbers. I have to show that if z is a solution then so is its complex conjugate, z*.
It is given that z = -2 -i is a solution to the equation. I must then write down a second complex solution.
If z = 2 is the thrid solution, what are a,b and c?
So far I have said that if z = x + iy, then the expansion of the first equation becomes
$(x^3 + 3x^2iy - 3xy^2 - iy^3) + a(x^2 + 2xiy - y^2) + b(x + iy) + c = 0$
when I do the same for the complex conjugate it just gets the x cubed term to become
$(x^3 + 3x^2iy + 3xy^2 + iy^3)$
but that isn't right because the both equations equate to zero and therefore the other, but they are not the same.
What do I do?
Any help appreciated.
(sorry moderators if this is in the wrong topic area)
Hi, I have a cubic equation:
$z^3 + az^2 + bz + c = 0$
where a,b and c are real numbers. I have to show that if z is a solution then so is its complex conjugate, z*.
This much is true for ANY real polynomial $p(x)$ : its complex non-real roots come as conjugate pairs, because $p(\overline{z})=\overline{p(z)}$ , by the basic properties of the complex
It is given that z = -2 -i is a solution to the equation. I must then write down a second complex solution.
If z = 2 is the thrid solution, what are a,b and c?
Then you know that the roots are $2\,,\,-2-i\,,\,-2+i$ , and the either you compare coefficients of corresponding powers of x in the equality $ax^2+bx^2+cx+d=a(x-2)(x-(-2-i))(x-(-2+i))$ , or what
is the same but more direct and simpler: you remember Viete's formulae
$a_1a_2a_3=-\frac{d}{a}\,,\,\,a_1a_2+a_1a_3+a_2a_3=\frac{c}{a} \,,\,\,a_1+a_2+a_3=-\frac{b}{a}$ , with $a_1,\,a_2,\,a_3$ the polynomial's roots.
So far I have said that if z = x + iy, then the expansion of the first equation becomes
$(x^3 + 3x^2iy - 3xy^2 - iy^3) + a(x^2 + 2xiy - y^2) + b(x + iy) + c = 0$
when I do the same for the complex conjugate it just gets the x cubed term to become
$(x^3 + 3x^2iy + 3xy^2 + iy^3)$
but that isn't right because the both equations equate to zero and therefore the other, but they are not the same.
What do I do?
Any help appreciated.
(sorry moderators if this is in the wrong topic area)
Hi, I have a cubic equation:
where a,b and c are real numbers. I have to show that if z is a solution then so is its complex conjugate, z*.
This much is true for ANY real polynomial : its complex non-real roots come as conjugate pairs, because , by the basic properties of the complex conjugation.
I have to show this though which is the problem because I don't know how.
(sorry if the answer is really obvious but my maths is poor)
Hi, I have a cubic equation:
where a,b and c are real numbers. I have to show that if z is a solution then so is its complex conjugate, z*.
This much is true for ANY real polynomial : its complex non-real roots come as conjugate pairs, because , by the basic properties of the complex conjugation.
I have to show this though which is the problem because I don't know how.
(sorry if the answer is really obvious but my maths is poor)
Check the very basics of complex numbers: this is really easy.
March 1st 2010, 03:46 AM #2
Oct 2009
March 1st 2010, 03:54 AM #3
Dec 2008
March 1st 2010, 06:08 AM #4
Oct 2009
|
{"url":"http://mathhelpforum.com/advanced-algebra/131377-complex-number.html","timestamp":"2014-04-18T00:31:10Z","content_type":null,"content_length":"45339","record_id":"<urn:uuid:0e25e4f6-52d1-4dde-8136-b7ff7c2d4122>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using Hard Problems to Create Pseudorandom Generators
Noam Nisan
EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-88-486
January 1989
This thesis describes two methods of constructing pseudorandom generators from hard problems.
We first give a simple and very general construction of pseudorandom generators. They stretch a short string of truly random bits into a long string that looks random to any algorithm from a
complexity class C (e.g. P, NC, PSPACE, ...) using an arbitrary function that is hard for C. This construction reveals an equivalence between the problems of proving certain lower bounds and of
constructing pseudorandom generators.
This construction has many consequences. The most direct one is that efficient deterministic simulation of randomized algorithms is possible under much weaker assumptions than previously known. The
efficiency of the simulations depends on the strength of the assumptions, and may achieve P=BPP. We believe that our results are very strong evidence that the gap between randomized and deterministic
complexity is not large.
Using the known lower bounds for constant depth circuits, our construction yields unconditionally proven pseudorandom generators for constant depth circuits. As an application we characterize the
power of NP with a random oracle.
Our second pseudorandom generator stretches short truly random strings to long strings that look random to all Logspace machines. This is proved without relying on any unproven assumptions. Instead,
we use lower bounds on the complexity of the following multiparty communication game:
Let f(x1, ... ,xk) be a Boolean function that k parties wish to collaboratively evaluate. The ith party knows each input argument except xi; and each party has unlimited computational power. They
share a blackboard, viewed by all parties, where they can exchange messages. The objective is to minimize the number of bits written on the board.
We prove lower bounds on the number of bits that need to be written on the board in order to compute a certain function. We then use these bounds to construct a pseudorandom generator for Logspace.
As an application we present an explicit construction of universal traversal sequences for general graphs.
Advisor: Richard M. Karp
BibTeX citation:
Author = {Nisan, Noam},
Title = {Using Hard Problems to Create Pseudorandom Generators},
School = {EECS Department, University of California, Berkeley},
Year = {1989},
Month = {Jan},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/6148.html},
Number = {UCB/CSD-88-486},
Abstract = {This thesis describes two methods of constructing pseudorandom generators from hard problems. <p>We first give a simple and very general construction of pseudorandom generators. They stretch a short string of truly random bits into a long string that looks random to any algorithm from a complexity class <i>C</i> (e.g. P, NC, PSPACE, ...) using an arbitrary function that is hard for <i>C</i>. This construction reveals an equivalence between the problems of proving certain lower bounds and of constructing pseudorandom generators. <p>This construction has many consequences. The most direct one is that efficient deterministic simulation of randomized algorithms is possible under much weaker assumptions than previously known. The efficiency of the simulations depends on the strength of the assumptions, and may achieve <i>P</i>=<i>BPP</i>. We believe that our results are very strong evidence that the gap between randomized and deterministic complexity is not large. <p>Using the known lower bounds for constant depth circuits, our construction yields unconditionally proven pseudorandom generators for constant depth circuits. As an application we characterize the power of NP with a random oracle. <p>Our second pseudorandom generator stretches short truly random strings to long strings that look random to all Logspace machines. This is proved without relying on any unproven assumptions. Instead, we use lower bounds on the complexity of the following multiparty communication game: <p>Let <i>f</i>(<i>x</i>1, ... ,<i>x</i>k) be a Boolean function that <i>k</i> parties wish to collaboratively evaluate. The <i>i</i>th party knows each input argument except <i>x</i>i; and each party has unlimited computational power. They share a blackboard, viewed by all parties, where they can exchange messages. The objective is to minimize the number of bits written on the board. <p>We prove lower bounds on the number of bits that need to be written on the board in order to compute a certain function. We then use these bounds to construct a pseudorandom generator for Logspace. As an application we present an explicit construction of universal traversal sequences for general graphs.}
EndNote citation:
%0 Thesis
%A Nisan, Noam
%T Using Hard Problems to Create Pseudorandom Generators
%I EECS Department, University of California, Berkeley
%D 1989
%@ UCB/CSD-88-486
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/6148.html
%F Nisan:CSD-88-486
|
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/1988/6148.html","timestamp":"2014-04-21T12:12:45Z","content_type":null,"content_length":"9127","record_id":"<urn:uuid:1036ef03-69ea-4960-885a-e2f3fd70948a>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
|
$C^1$ isometric embedding of flat torus into $\mathbb{R}^3$
up vote 32 down vote favorite
I read (in a paper by Emil Saucan) that the flat torus may be isometrically embedded in $\mathbb{R}^3$ with a $C^1$ map by the Kuiper extension of the Nash Embedding Theorem, a claim repeated in this
Wikipedia entry. I have been unsuccessful in finding a description of such a mapping, or an image of what the embedding looks like. I'd be grateful to any pointers on this topic. Thanks!
Addendum. It seems Benoît Kloeckner's answer below is definitive. What I asked for apparently does not yet exist, but is "in process" and will soon be available through the work of the Hévéa project.
[23Apr2012] This is taken from the link in DamienC's comment and Benoît's update in the latter's answer below:
mg.metric-geometry dg.differential-geometry reference-request
Is there any reason to expect there to be a closed form expression for this object? Nash-Kuiper embedding theorem is through a limiting process where the embedding converges in $C^1$ and not
higher. So in general I am somewhat doubtful of a formula. Now, on the other hand, from the statement of the Nash-Kuiper theorem there should be a $C^1$ embedding that is $\epsilon$-close to the
standard picture of a torus in $\mathbb{R}^3$. So that sort of tells you what one possible embedding looks like. – Willie Wong Jul 9 '10 at 18:00
jstor.org/stable/1969840 is also very, very readable. – Willie Wong Jul 9 '10 at 18:03
Great question, I've wondered this myself. The closest thing I've ever seen for this kind of thing is here, though the goal is the hyperbolic plane, and the method of construction seems ad-hoc and
is not that of the Nash-Kuiper theorem: xs4all.nl/~westy31/Geometry/Geometry.html#Embed – j.c. Jul 9 '10 at 18:03
Would it be feasible to generate by computer an image of an approximate embedding under a few iterations of the Nash-Kuiper technique? – j.c. Jul 9 '10 at 18:06
@jc: I was hoping someone did this. Even searched Google images, unsuccessfully. – Joseph O'Rourke Jul 9 '10 at 18:09
show 1 more comment
3 Answers
active oldest votes
A group of french mathematicians and computer scientists are currently working on this. The project is named Hévéa, and has already produced a few images. Edit: a few images and the PNAS
paper have been released, see http://math.univ-lyon1.fr/~borrelli/Hevea/Presse/index-en.html
Just a few word to explain what I understood of their method (which is by using h-principle) from the few image I saw in preview. Start with a revolution torus. The meridians are cool,
because they all have the same length, as expected from those of a flat torus. But the parallels are totally uncool, because their lengths differ greatly: they witness the non-flatness
of the revolution torus.
up vote 33 Now perturb your torus by adding waves in the direction of the meridians (like an accordion), with large amplitude on the inside and small amplitude on the outside. If you design this
down vote perturbation well, you can manage so that the parallels now all have the same length. Of course, the perturbed meridian have now varying lengths! So you do the same thing by adding small
accepted waves in another direction, getting all meridians to have the same length again. You can iterate this procedure in a way so that the embedding converges in the $C^1$ topology to a flat
embedded torus. But to prove that the precise perturbation you chose in order to get a nice image does converge, and that your maps are embeddings needs work (getting an immersion is
easier if I remember well).
Also, the Hévéa project plans to draw images of Nash spheres, that is $C^1$ isometric embeddings of spheres of radius $>1$ inside a ball of unit radius.
Cool! I'll track the project, and look forward to the release of images. – Joseph O'Rourke Jul 9 '10 at 18:42
4 The h-principle technique is really cool, and has some strange applications. One of my favourites is DeLellis and Szekelyhidi's demonstration that weak solutions to the Euler equation
is non-unique arxiv.org/abs/math/0702079 Essentially you start with a compactly supported (in space and in time) function and alternatingly add horizontal and vertical ripples to make
it converge to a solution in the weak sense. – Willie Wong Jul 10 '10 at 11:36
Follow-up: there are some news from the Hevea project. A paper in PNAS (pnas.org/content/early/2012/04/18/1118478109.abstract) and a broad-audience presentation (math.univ-lyon1.fr/
~borrelli/Hevea/Presse/index-en.html), with beautiful pictures!!!! – DamienC Apr 23 '12 at 7:22
@DamienC: I edited the answer to announce the release, thanks. – Benoît Kloeckner Apr 23 '12 at 11:37
add comment
On the other hand, if you are willing to settle for conformally flat, there is a beautiful theory of these. (The idea is to consider flat embeddings in the three-sphere, and then "project
them into $R^3$ using stereographic projection.) The classification of flat embeddings of the torus in the three-sphere goes back to Bianchi in the 1800's, and Ulrich Pinkall recently found
some particularly nice ones (the so-called "Bianchi-Pinkall Tori) by taking inverse images of a simple-closed curve under the Hopf fibration (so one set of circles are Hopf fibers). If you
up vote would like to see some example images, some applets to play with and morph them, and an explanatory pdf file, have a look here:
15 down
vote http://virtualmathmuseum.org/Surface/bianchi-pinkall_tori/bianchi-pinkall_tori.html
Wow, what beautiful tori at that link! I will investigate this notion of a conformally flat metric, of which I was only dimly aware. Thanks! – Joseph O'Rourke Jul 9 '10 at 20:43
Thanks for the link! – alvarezpaiva Apr 23 '12 at 19:27
add comment
I would like to mention something I learned from Igor Pak subsequent to posing this question: there is a piecewise-linear embedding of the flat torus! In the paper by V. A. Zalgaller, "
Some bendings of a long cylinder," Journal of Mathematical Sciences, 100(3):2228--2238, 2000 (translated from a 1997 article in the Russian journal Zapiski Nauchnykh Seminarov POMI), he
proves this theorem:
"Theorem 1. A direct flat torus can be isometrically embedded in $\mathbb{R}^3$ 'in the origami style' if its development is a rectangle sufficiently large compared to its altitude."
up vote 9
down vote He defines a direct flat torus as the result of identifying the opposite sides of a rectangle. (I have never seen the term "direct flat torus," and I don't know what role the modifier
"direct" plays.) "In the origami style" describes how he bends a triangular prism such that "its middle part is broken to a complicated form." The embedding is a triangular prism bent into
the shape of a regular hexagon. The "bendings" are piecewise-linear crinklings of the surface.
add comment
Not the answer you're looking for? Browse other questions tagged mg.metric-geometry dg.differential-geometry reference-request or ask your own question.
|
{"url":"http://mathoverflow.net/questions/31222/c1-isometric-embedding-of-flat-torus-into-mathbbr3/31244","timestamp":"2014-04-19T15:11:50Z","content_type":null,"content_length":"74784","record_id":"<urn:uuid:6f167a27-56bb-4816-94d3-5c747017fc5a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What's the only place in the world whose temperature in Fahrenheit and Celsius degrees can be equal? - Homework Help - eNotes.com
What's the only place in the world whose temperature in Fahrenheit and Celsius degrees can be equal?
There is more than one place on Earth where the temperature can potentially be the same in Fahrenheit and Celsius. There is only one temperature that is the same in both Fahrenheit and Celsius, but
that temperature can occur at more than one spot on Earth’s surface.
The Fahrenheit and Celsius scales only match up at one temperature. Each degree Celsius is equal to 1.8 degrees Fahrenheit so the two scales do not match at most points. However, -40 degrees
Celsius is the same temperature as -40 Fahrenheit. We can see this because the formula to convert from Fahrenheit to Celsius is
Degrees Celsius = (1.8 x Degrees Fahrenheit) + 32.
Now, let us plug in -40 as the temperature in Fahrenheit. We would get
Degrees Celsius = (1.8 x -40) + 32 which becomes -72 + 32 which is -40.
So, when the temperature is -40 Fahrenheit, it is also -40 Celsius.
But there are many places on Earth that have had temperatures that low. As can be seen in this link, temperatures lower than that have been recorded in Antarctica, Asia, North America, and Europe.
This means that at some point, the temperature must have been -40 degrees on all of those continents.
Thus, there is more than one place on Earth where it is possible to have the temperature be the same in Fahrenheit and Celsius.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes
|
{"url":"http://www.enotes.com/homework-help/whats-only-place-world-whos-fahrenheit-celsius-442066","timestamp":"2014-04-18T18:41:56Z","content_type":null,"content_length":"27166","record_id":"<urn:uuid:bfce2cc4-f7bc-4162-b30b-1c27e089f614>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IntroductionRelated WorkLimitation in Evaluation IndexExistence of Evaluation BiasResearches on Compression EvaluationEvaluation Principle and New CriterionBackground and Basic ConceptsCompression OverviewEvaluation PrincipleExperimental SetupRaw DataTest PlatformMethodology and Relevant AssumptionEvaluation ResultsCompression RatioCompression ComplexityESB of CompressionAdaptive Compression Arbitration SystemSystem DescriptionExperimental ResultsConclusions and Design ConsiderationsReferencesFigures and Tables
Wireless sensor networks (WSNs), a new network structure, have received continuous attention in recent ten years. Since the 1990s, when sensor networks emerged as a fundamentally new tool for
military monitoring, nowadays they are widely used in many application fields such as agriculture, ecosystems, medical care and smart homes, especially for regions which are inaccessible or
unattended. By right of the essential function in data collection, WSNs connect the physical environment with human beings [1].
Generally, each sensor node transmits monitoring data over its corresponding path to the sink. Since the nodes are battery-operated and no fixed infrastructure exists, energy becomes the primary
concern in such networks. Moreover, the number of nodes in WSNs can be extremely large. It is prohibitively difficult to replace or recharge them to extend the operational lifetime of network. Thus,
energy efficiency is considered as the major metric which impacts network performance significantly. Many advances have been made with the purpose of enhancing network lifetime [2,3].
Among different applications, continuous data collection for environmental monitoring is relatively popular [4]. In this scenario, sensor nodes continuously sample surrounding physical phenomena and
return them to the sink. The ubiquity of redundancies in the data inspires researchers to introduce compression technology for reducing data volume and saving communication energy costs. Recent
developments propose many challenges for data processing and the related technologies.
Lots of compression methods are designed specifically for sensor networks. However, it seems to be difficult to get proper advice about which one is more suitable for a certain application. The lack
of research on data compression evaluation and the corresponding criteria make it hard to provide efficient guidelines for both algorithm design and application. Besides, various kinds of evaluation
bias tend to lead to inaccurate conclusions, which then leads to wrong choices.
In this paper, we study current compression algorithms for WSNs, and propose a novel evaluation criterion which is more applicable for them. The main contributions of our work are threefold:
First, a new evaluation criterion is presented to give attention to the energy efficiency of compression implemented in the sensor nodes. Since energy consumption is one of the most important design
metrics in WSNs, this criterion will do well in such compression evaluation to provide useful suggestions during both design and application.
Second, current tunable compression algorithms aimed at WSNs are reevaluated in depth at the node level and the network level. Various kinds of real datasets are adopted, which cover almost all types
of environmental data. Evaluation results based on our criterion and several traditional indices are compared avoiding different evaluation bias.
Third, based on the results, a novel compression arbitration system is proposed to enhance the performance of compression algorithms by avoiding unnecessary energy losses. Furthermore, several design
considerations of compression are discussed. We suggest that design concept of compression algorithms should be changed due to the particularity of WSNs.
The remainder of the paper is structured as follows. Section 2 discusses the related work on both compression algorithms and evaluation methods. Several aspects that impact evaluation results are
analyzed. Section 3 presents the principle of evaluation and defines the new criterion. Experiment setup and the methodology are described in Section 4 with the results and corresponding discussions
given in Section 5. A new compression arbitration system is presented in Section 6 and Section 7 offers a summary to conclude the paper.
Data compression is regarded as a traditional technology used in digital communication, broadcasting, storage, and multimedia systems. Being applied to WSNs, compression faces more new challenges.
Although there have been a number of algorithms proposed for WSNs up to the present [5–25], problems exist in them at the same time. One is how can we select a proper algorithm for a given
application among several methods on hand. Reasons that lead to this confusion are discussed.
Traditional compression is used with the purpose of improving the performances in communication time, transmission bandwidth and storage space. Various evaluation indices are defined. Some of them
are extended for use in WSNs:
(1) Compression ratio
Compression ratio is one of the most important design indices in data compression. It visually describes the compression effect of algorithm, and is formulated as a ratio between the volume of the
compressed data and the raw one. Based on it, the improvements in communication time, transmission bandwidth and storage space can be quantitatively measured.
In WSNs, compression ratio is also considered as one of the major evaluation criteria. Since it can indicate the reduction of communication energy costs, researchers prefer to show the exciting
results produced by their new algorithms [8,9,14,16,17,19,21–24]. Likewise, users would like to choose the algorithm with better compression results in view of lower energy consumptions in data
(2) Compression error
Compression error is another important criterion with various expression forms, such as RMS (Root Mean Square) error, peak error, SNR (Signal to Noise Ratio), and so on. It describes the degree of
information loss after compressing.
In WSNs, lossy compression is much more popular due to the better compression ratio. Thus, compression error is unavoidable [14,17,23]. Based on this index, users can assess which compression will
get less data distortion or whether it meets the application requirements.
(3) Compression complexity
Compression complexity includes space complexity and time complexity, which represent the costs of hardware resources and execution time in data compressing, respectively. Lower space complexity
means less memory occupation required; lower time complexity incurs shorter delays.
Nevertheless, compression complexity has not been seriously considered in WSNs. It is accepted that algorithms with high complexity are unsuitable for sensor nodes with restricted capabilities.
Therefore, complexity seems more like a qualitative criterion. Users pay more attention to the feasibility of the algorithm, rather than the real costs of storage and time. Only in some specific
applications, has compression complexity been quantitatively investigated [7,22].
In a word, researchers still prefer to use traditional standards for data compression evaluation in WSNs. Compression ratio seems the main criterion for choosing a more satisfied algorithm. However,
as mentioned above, saving energy is the fundamental purpose in sensor networks. Each criterion listed above only partially reflects energy information. Thus, a new criterion is urgently desired for
WSNs, though the existing ones are doing well in traditional compression evaluation.
During compression evaluating, several kinds of bias will directly influence the results. Among them, data bias and execution bias are two main aspects.
Data bias appears when non-uniform experimental datasets are used for the comparison of algorithms. It is well known that datasets with different characteristics will produce greatly different test
results. For instance, data with higher redundancy trend to a lower compression ratio. So, it is difficult to distinguish which one improves the observed compression performance: the test data or the
algorithm itself.
Unfortunately, data bias is ubiquitous in compression evaluation. Designers use their own datasets [5,7,10–13,19–21,24,25], most of which are unpublished. This causes confusion during algorithm
selection. Hence, removing data bias in evaluation is important. We consider using uniform and open datasets as the most direct and simple way. On the other hand, execution bias has a similar impact
on evaluation. It will probably happen that different time overhead is taken even if the same algorithm is realized. Many factors affect the result such as coding style, test platform, compilation
tool, and so on [7,9–13,16,18–23,25]. Compared to data bias, the execution one is difficult to avoid. For all that, it is also wished to remove it as much as possible.
We list a series of compression algorithms and their related information in Table 1. All of them are specifically designed for WSNs. It is clear that not only criteria but also bias restricts an
objective evaluation. Although energy cost of compression has been noticed in some researches [10–13,16,18,20,23], a proper evaluation of data compression is still desired.
In the literature which is most closely related to our own [26], several off-the-shelf lossless compression methods for mobile devices were reevaluated and tested on a StrongARM SA-110 processor.
According to this work, the most popular coding algorithms were objectively compared under a uniform platform. Moreover, a more energy-efficient coding scheme was proposed based on the comparison
results of energy costs. This work enlightened us on a fair and comprehensive evaluation of data compression; however, most of the algorithms in [26] were infeasible for sensor nodes.
Five different types of compression methods were summarized in [27], which were all designed for sensor nodes. The mentioned algorithms covered a wide range of characteristics, including lossy and
lossless, one-dimensional and multi-dimensional, temporal and spatial. Whereas each method in [27] was focused on a given application, no comparison between them was reported owing to their different
Our work is aimed to establish a relatively objective environment for data compression evaluation in WSNs. Thus, compression algorithms, in special for sensor nodes, are selected; and the performance
is assessed which is focused on the energy consumption. To our best knowledge, it is the first time data compression has been evaluated systematically and objectively from the point of view of energy
efficiency in WSNs. The introduction of energy information in the evaluation represents the biggest difference between our work and the previous ones. It should be advisable to pay attention to our
evaluation results before new designing algorithms or choosing existing ones.
In this section, the selected compression algorithms are introduced briefly and the new evaluation criterion is proposed.
Two basic concepts are mentioned in this paper: compression ratio and peak error. Compression ratio, denoted by R[c], is defined as a ratio of two data volumes: R c = Volume of Compressed data
Volume of Raw data
It is obvious that the smaller R[c], the better compression effect. Peak error (e[P]) is one form of compression error, which is formulated as: e P = Max | y ( n ) − x ( n ) |
It indicates the maximum difference between raw data (x(n)) and the reconstructed one (y(n)), where n is sample number.
As mentioned in Section 2, there are several forms of compression error representation. Although RMS error and SNR seems more common in traditional compression methods, we think that peak error will
be more appropriate for use in WSNs. Due to nodes’ limited computational capability, compression error seems inapplicable if it is defined as RMS or SNR. Besides the high complexity and large energy
losses in error computation, compressed data need to be reconstructed at first, which will incur tremendous energy waste too. Since error requirement is generally given as an upper-bound beforehand
by applications, more and more algorithms [8–14,17,18,21] use peak error owing to its simplicity and being able to avoid data reconstruction for verification of requirements. Thus, we consider peak
error as the only error representation in this paper.
We introduce off-the-shelf compression algorithms designed for sensor nodes in this subsection. Their characteristics are all threefold:
First, peak error is defined as the maximum data deviation accepted by each application. It is predetermined and informed to the sensor nodes via communication links.
Second, compression methods are tunable with respect to data accuracy. Changing e[P], compression can be either lossless or lossy.
Third, algorithms belong to online compression with no training is needed.
(1) Predictive compression
In WSNs, environmental data show strong inter-relationships with each other in both temporal and spatial domains. Thus, various prediction models are established which predict current sample values
in terms of the previous ones. An actual sample which is close to the predicted one will be removed from the raw data stream. Only the rest need to be transmitted. That becomes the basic principle of
predictive compression.
Prediction based data compression was proposed well in [18], which covered almost all kinds of predictive compression suited for sensor nodes. To ensure the exhaustiveness, we choose them all in our
evaluation. According to the diverse predictive models, the algorithms can be categorized into three groups, as shown in Table 2.
(2) Wavelet transformation
Wavelet transformation based on lifting scheme is popular used in WSNs, owing to its low complexity in implementation. A 5/3 wavelet presented in [10–13] was designed for compressing data in spatial
domain; however, it also can be used in the temporal case conveniently. Originated from the Lazy wavelet, 5/3 wavelet introduces lifting scheme, an alternative method, to compute its coefficients.
The whole process is divided into three steps: split, predict, and update. More details were provided in [11].
(3) Data fitting
By right of the continuity in variation, it is proper to replace a data stream with a form of line to decrease the total bits needed in representation. In WSNs applications, several algorithms are
put forward based on this idea. We merge them into one group, and call it data fitting. Methods we select in this paper are LAA (Linear Approximation Algorithm) [17], PMC-MR (Poor Man’s
Compression-Midrange), PMC-MEAN (Poor Man’s Compression—MEAN) [8], and LTC (Lightweight Temporal Compression) [9].
To make an objective compression evaluation in WSNs, a proper criterion is needed, which focuses on the energy efficiency of each algorithm. We name it ESB (Energy-Saving Benefit) and denote it by η.
ESB shows the energy savings introduced by compression algorithms. The expression is formulated as: η = E uncomp − E comp E uncomp * 100 %
According to the various topologies, we describe ESB with two levels: node level and network level. The biggest difference between them is the consideration of energy costs in data receiving. At the
node level, ESB is formulated as: E uncomp = P TX ( d ) * L * T tran E comp = P MCU * L * T MCU ( e P ) + P TX ( d ) * L * R c ( e P ) * T tran
So, η = P TX ( d ) * [ 1 − R c ( e P ) ] * T tran − P MCU * T MCU ( e P ) P TX ( d ) * T tran
At the network level, ESB is expressed as: E uncomp = ∑ i = 1 h P TX ( d i ) * L * T tran + ∑ i = 1 h − 1 P RX * L * T tran E comp = P MCU * L * T MCU ( e P ) + ∑ i = 1 h P TX ( d i ) * L * R c ( e P
) * T tran + ∑ i = 1 h − 1 P RX * L * R c ( e P ) * T tran
So, η = 1 − R c ( e P ) − P MCU * T MCU ( e P ) ∑ i = 1 h P TX ( d i ) * T tran + ∑ i = 1 h − 1 P RX * T tran
Meanings of the symbols mentioned are listed in Table 3. As shown in (3), η is related with the energy consumptions of two cases that one is transmitting the raw data directly, and the other is
compressing data before transmitting. In the former case, almost all energy is spent on communication; while in the latter one, the total energy costs should include both computational and
communication part.
In the communication part, P[TX] is intimately related to d. It is common that transmit power is configurable according to the distance. It is notable that, at the node level, we remove energy cost
during data receiving from the communication part, which is reconsidered at the network level.
In the computational part, P[MCU] shows the power consumption when a microprocessor is in the active mode. T[MCU] and R[c] are highly dependent on the compression algorithm itself. Since the
compression algorithms we selected are error-tunable, different values of e[P], which are determined by applications, will affect both T[MCU] and R[c] directly and significantly.
From (6) and (9), we can see ESB includes the information of both compression ratio and time complexity explicitly. It is evident that neither compression ratio nor time complexity is competent for
estimating compression algorithms fairly from the energy point of view.
In addition, compression error is also included by ESB. Its effect works on compression ratio and time overhead, which impacts η indirectly. To avoid unnecessary data transmission, data precision is
usually pre-determined by each application. In other words, before sending compressed data to the sink, source nodes would know application demand in advance. In this case, compression error acts a
role of adjudicator that evaluates whether requirement is satisfied.
Thus, the new evaluation criterion includes almost all the main metrics for evaluating compression, and reveals their internal relations by the way of energy evaluation. Besides, ESB additionally
provides important information on whether data compression can bring energy savings or not. Just like our research presented in [28], compressions are not always energy efficient if the additional
computational costs introduced by compression cannot be compensated by the communication energy savings. Ensuring energy-saving effect of compression is crucial in WSNs. Therefore, we add the energy
costs in uncompressed case (E[uncomp]) to ESB.
To demonstrate the difference between the new criterion and the traditional ones, we show the evaluation results of all of them. For clearness, we summarize compression algorithms in Table 6. They
are classified into three groups with different parameters. N denotes the number of historical data used for prediction modeling; smoothing coefficient α is selected based on the trends in data.
(1) Preferences in predictive compression
In Groups 1 and 2, N and α are set to three different values, respectively. For the sake of conciseness, we show the test results under ambient temperature in Figures 1 and 2. Similar results can be
obtained with the other datasets. In the figures, error bound (e[P]) describes application requirements of data precision. With its increase, all algorithms achieve lower compression ratioa owing to
the improvement in forecast accuracy.
In Figure 1, a better compression effect is obtained when N is equal to 3. This can be attributed to the data characteristics and its short training period. As mentioned in [18], models need to be
established before predicting. Parameter N determines the accumulated number of data for modeling. By right of the strong correlation in data, only a few historical samples are needed for a
successful prediction. Since the amount of raw data is identical in the three conditions, the larger N is set, the more compressed data is left, which evidently worsens the compression effect.
In addition, compression ratio differences will be enlarged as the error bound increases. In large error bounds, more data can be eliminated from the raw data stream. N has more effects on
compression ratio.
In Figure 2, the optimal α are different in the methods. In single exponential smoothing, the compression ratio is slightly lower if α is 0.8. The smoothing coefficient α reflects the influence
degree of previous data in a prediction. Larger α indicates strong correlation in the data. Thus, a higher forecast accuracy is obtained.
The biggest difference between single exponential smoothing and the other two is that trend variation cannot be shown in the single one. As a result, in the other two methods, higher forecast
accuracy is obtained due to the additional information. Meanwhile, α is decreased with the contribution brought by this improvement.
(2) Compression ratio comparison
Compression ratios (R[c]) of all algorithms are shown in Figure 3. We test them under different data types and error bounds. The statistic information is developed by using quartile analyses. The
mean is marked as a solid diamond. Each algorithm in Groups 1 and 2 is presented in the best case of the three.
In the figure, PMC-MR obtains the best compression effect of them all, while wavelet transformation is slightly worse than the others. In Group 1, autoregressive forecasting is better than the other
three; in Group 2, single exponential smoothing is the best. It means simple model is competent for the test data. Wavelet transformation we use is one-level 5/3 wavelet. In this case, only half of
the data (namely high frequency part) is compressed, which evidently limits its compression effect.
(1) Preferences in predictive compression
Figures 4 and 5 show the time overheads on compressing per byte (T[MCU]) of Groups 1 and 2. It is derived from the total time spent on compressing.
In the figures, T[MCU] has a similar trend as R[c] when the error bound increases. In the predictive compression, real sample should be added into a transmit queue once the deviation between the real
and predicted one is larger than error bound. More operations are needed before transmitting. Thus, a less compressed data means a smaller T[MCU].
In Figure 4, time overhead is lower if N is larger. In the small N, more data need to be predicted. The operation time correspondingly increases. In Figure 5, the lowest costs is obtained when α is
equal to 0.5. In this case, division is replaced with shift operation, which requires less time consumption.
In Figure 5, T[MCU] is on the order of milliseconds. That is far longer than Group 1. It shows that algorithms consume a lot of time on division operations. As mentioned above, transmission of one
byte needs 32 μs; however, the algorithms in Group 2 need several milliseconds of compression per byte. It is no doubt that compression is superfluous in this situation, because no energy savings
will be obtained in any α.
(2) Compression complexity comparison
Because of the high time overheads in Group 2, we eliminate them from the time comparison. In Figure 6, LAA has the shortest time overhead due to its low computational complexity with no division.
Similar results are obtained in wavelet and PMC-MR, where shift operation is used instead of division.
(1) Preferences in predictive compression
Due to the high time overhead in Group 2, it is hard to save energy by compression in common cases. Thus, we eliminate them from the ESB evaluation. ESB at the node and network level in Group 1 is
presented in Figures 7 and 8. At the network level, the hop count (h) is 2. As shown in the figures, with the improvement of both compression ratio and execution time, ESB rise sharply when error
bound increases. Either at the node level or network level, ESB is a little bit better when N is equal to 3. Although compression ratio is obviously superior in this case, the advantage is weakened
by the drawback in the time overhead. Especially at the node level, the computational energy cost has a great impact on ESB. Moreover, it is noteworthy that compression saves the total energy only in
the large error bounds.
(2) ESB comparison
Except the three exponential smoothing forecasting ones, the remaining algorithms are evaluation based on ESB at the node and network level. The results are shown in Figures 9 and 10. The parameter N
of Group 1 is set to 3. At the node level, the comparison result is shown using quartile analyses; at the network level, the average values of ESB under the different hop counts (h) are recorded.
It is clear that we obtain new comparison results which are different from the compression ratio and time complexity. Mainly owing to the excellent compression ratio and relatively low computational
complexity, PMC-MR achieves the best energy-saving benefit among all algorithms listed. At the node level, it provides an average energy savings of 30% and the highest savings is as high as 70%. The
probability that PMC-MR saves the total energy is higher than 75%. At the network level, ESB raises to 50% with the increase of hop counts.
It is worth mentioning that ESB of LAA is second only to PMC-MR at the node level. According to Figure 3, the compression ratio of LAA is not as good as that of the other algorithms. However, owing
to its short execution time, LAA obtains a higher energy-saving benefit even than the algorithms with a lower compression ratio. It indicates that, viewed from the energy efficiency of compression, a
low computational complexity could make up for the lack of the compression ratio. Nevertheless, LAA loses its advantage at the network level, as shown in Figure 10. In that case, the compression
ratio has more effects on energy costs and more energy savings in communication benefit from it. With the increase of hop counts, the proportion of communication energy consumptions becomes large,
while the influence of computational complexity on energy savings is smaller.
On the other hand, the algorithms show possibilities of introducing additional energy consumptions, especially at the node level. It mainly appears in the small error bounds, because at those
moments, compressing data cannot save enough energy to offset the additional costs in computation, which makes compression unnecessary.
As shown in Section 5, ESB is not always positive. In other words, data compression in WSNs is not always beneficial to energy conservation due to the additional computational energy dissipations.
Thus, a low overhead method is needed as an assistant mechanism to avoid unnecessary losses in compression.
An adaptive compression arbitration system is proposed with its framework shown in Figure 11. This system predicts the probable energy savings of compression to make a decision on whether to compress
data before transmitting. The whole procedure is divided into three steps:
Prediction modeling
Before the arbitration, two models are established on-line to predict the compression ratio and the compression time. Information about the compression ratio and execution time for various datasets
and application requirements is recorded for each prediction model. Since it is an on-line modeling, only a few samples are used allowing for saving energy.
Compression evaluation
After the modeling, the compression arbitration calculates a probable compression ratio for the given accuracy requirement and the corresponding time overhead based on the models. Then, the balance
point between loss and benefit is estimated in the form of a compression ratio. Comparing the two kinds of compression ratio, the system draws a conclusion about whether compression will produce
energy savings or not in the “comparison and judgment” sub-module. The feedback result is subsequentially applied to control the behavior of data processing (compression before transmission or direct
Adaptive modification
In this step, several samples are randomly selected for the verification of judgment accuracy. Once the target sample is given, its actual compression ratio and time overhead are measured for
evaluating whether data compression is beneficial for energy savings. If the evaluation result is different from that of arbitration system, parameter modification is realized via remodeling with the
new data accumulated.
The adaptive compression arbitration system is evaluated in a single-hop network with LTC as the test algorithm. Since the ultimate purpose of the arbitration system is reducing the total energy
costs, we test the final energy savings provided by the new system under the different error bound levels and RF power levels. To show the efficiency of the system, two reference objects are used,
which are the total energy costs for directly transmitting the raw data and the costs of compressing the data all along and then transmitting.
Energy consumptions for all three cases are presented in Figure 12. It is obvious that combining the new arbitration system with data compression, considerable energy savings can be obtained in most
cases. The greatest saving is 33.4% of the cost of transmitting the data directly, which happens when both the error bound and the RF power are set to their maximums. In that case, the lowest
compression ratio is achieved and the corresponding energy saving in communication has a significant influence on the total energy savings. Similarly, comparing to the case of always compressing the
data, the highest percent savings is up to 39.2% when both the error bound and the RF power are set to their minimums. It is clear that, in that case, compression is no longer energy efficient,
because it cannot save enough communication energy, while the additional cost in computation leads an unexpected energy waste.
In the paper, many of the current tunable compression algorithms designed for WSNs are reevaluated based on the a criterion. Since all algorithms are aimed to be used in WSNs, which consider energy
consumption as the first design element, the new criterion ESB reveals the performances of algorithms more objectively.
Although several indices proposed before are do well in the traditional compression evaluation, they are probably unable to be felicitously applied to WSNs. According to the comparison results,
compression ratio and time complexity cannot express well the energy performance of the compression algorithms. Compression ratio only indicates the reduction in the data amount, which is numerically
expressed in communication savings; time complexity only affects the additional computational energy consumptions for compression. That is to say, neither of these two indices can reveal the complete
energy information about compression.
Besides the impartiality in algorithm evaluation, ESB can also be used to detect the case when compression wastes energy. It will probably happen if increased computational energy cannot be
compensated by the decreased communication energy consumption. This information is much more important in both design and application. However, it seems hard to obtain from the other criteria.
Therefore, several design considerations are discussed based on the evaluation results:
First, computational energy brought by data compression is not always negligible. It may occur that compression costs much more energy, even if it has a satisfactory compression ratio. So,
compression algorithm with a lower compression ratio does not mean it is the proper one for WSNs.
Second, different types of instructions have greatly different effects on the performance of algorithm. Especially in the division instruction, more execution time is needed, which deteriorates the
energy efficiency of compression rapidly. It is obviously shown in exponential smoothing forecasting. So, the division instruction should be avoided in sensor nodes. We suggest using shift operation
instead of it as much as possible.
Last but not least, an adaptive compression arbitration system is proposed with the enlightenment provided by the evaluation results. The system enhances the performance of compression algorithms by
avoiding unnecessary energy losses. With this arbitration system, the greatest energy savings are 33.4% when directly transmitting the data and 39.2% when compressing all the data.
|
{"url":"http://www.mdpi.com/1424-8220/10/4/3195/xml","timestamp":"2014-04-19T05:00:58Z","content_type":null,"content_length":"97810","record_id":"<urn:uuid:6f2fd8f7-cea6-4f01-b0ea-5daee394d778>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Counterexamples in algebraic topology?
up vote 68 down vote favorite
In this thread
Books you would like to read (if somebody would just write them...),
I expressed my desire for a book with the title "(Counter)examples in Algebraic Topology". My reason for doing so was that while the abstract formalism of algebraic topology is very well-explained in
many textbooks and while most graduate students are fond of the general machinery, the study of examples is somehow neglected. I am looking for examples that explain why certain hypotheses are
necessary for theorems to hold. The books by Hatcher and Bredon contain some interesting stuff in this direction, and there is Neil Strickland's bestiary, which is mainly focused on positive
To convey an idea of what I am after, here are a few examples from my private ''counterexamples in algebraic topology'' list. Some are surprising, some less so.
1. The abelianization of $SL_2 (Z)$ is $Z/12$, the map $BSL_2(Z) \to BZ/12$ is a homology equivalence to a simple space. But it is not a Quillen plus construction, since the the homotopy fibre is
$BF_2$ (free group on $2$ generators), hence not acyclic. See The free group $F_2$ has index 12 in SL(2,$\mathbb{Z}$) .
2. Maps $f:X \to Y$ which are homology equivalences, the homotopy groups are abstractly isomorphic, but though, $f$ is not a homotopy equivalence (a number of examples has been given in the answers
to these questions: Spaces with same homotopy and homology groups that are not homotopy equivalent?, Are there pairs of highly connected finite CW-complexes with the same homotopy groups?).
3. Self-maps of simply-connected spaces $X$ which are the identity on homotopy, but not on homology (let $X=K(Z;2) \times K(Z;4)$, $u:K(Z;2) \to K(Z;4)$ be the cup square, and $f:X\to X$ is given by
$f(x,y):= (x,y + u(x))$, using that EM-spaces are abelian groups). There are also self-maps of finite simply connected complexes that are the identity on homology, but not on homotopy, see
Diarmuid Crowleys answer to Cohomology of fibrations over the circle: how to compute the ring structure?
4. The stabilization map $B \Sigma_{\infty} \to B \Sigma_{\infty}$ induces a bijection on free homotopy classes $[X, B \Sigma_{\infty}]$ for each finite CW space $X$. However, it is not a homotopy
equivalence (not a $\pi_1$-isomorphism).
5. The fibration $S^1 \to B \mathbb{Q} \to B \mathbb{Q}/\mathbb{Z}$ is classified by a map $f:B \mathbb{Q}/\mathbb{Z} \to CP^{\infty}$, which can be assumed to be a fibration with fibre $B \mathbb
{Q}$. Now let $X_n$ be the preimage of the n-skeleton of $CP^{\infty}$. Using the Leray-Serre spectral sequence, we can compute the integral homology of $X_n$ and, by the universal coefficient
theorem, the homology of field coefficients. It turns out that this is finitely generated for any field, and so we can define the Euler characteristic in dependence of the field. It is not
independent of the field in this case (the reason is of course that the integral homology of $X_n$ is not finitely generated).
6. The compact Lie groups $U(n)$ and $S^1 \times SU(n)$ are diffeomorphic, their classifying spaces have isomorphic cohomology rings and homotopy groups, but the classifying spaces are not homotopy
equivalent (look at Steenrod operations).
Question: Which examples of spaces and maps of a similar flavour do you know and want to share with the other MO users?
To focus this question, I suggest to stay in the realm of algebraic topology proper. In other words:
1. The properties in question should be homotopy invariant properties of spaces/maps. This includes of course fibre bundles, viewed as maps to certain classifying spaces.
2. Let us talk about spaces of the homotopy type of CW complexes, to avoid that a certain property fails for point-set topological reasons.
3. This excludes the kind of examples from the famous book "Counterexamples in Topology".
4. The examples should not be "counterexamples in group theory" in disguise. Any ugly example of a discrete group $G$ gives an equally ugly example of a space $BG$. Same applies to rings via
Eilenberg-Mac Lane spectra.
5. I prefer examples from unstable homotopy theory.
To get started, here are some questions whose answer I do not know:
1. Construct two simply-connected CW complexes $X$ and $Y$ such that $H^* (X;F) \cong H^* (Y;F)$ for any field, as rings and modules over the Steenrod algebras, but which are not homotopy
equivalent. EDIT: Appropriate Moore spaces do the job, see Eric Wofsey's answer.
2. Let $f: X \to Y$ be a map of CW-complexes. Assume that $[T,X] \to [T;Y]$ is a bijection for each finite CW complex $T$ ($[T,X]$ denotes free homotopy classes). What assumptions are sufficient to
conclude that $f$ is a weak homotopy equivalence? EDIT: the answer has been given by Tyler Lawson, see below.
3. Do there exist spaces $X,Y,Z$ and a homotopy equivalence $X \times Y \to X \times Z$, without $Y$ and $Z$ being homotopy equivalent? Can I require these spaces to be finite CWs? EDIT: without the
finiteness assumptions, this question was ridiculously simple.
4. Do you know examples of fibrations $F \to E \to X$, such that the integral homology of all three spaces is finitely generated (so that the Euler numbers are defined) and such that the Euler
number is not multiplicative, i.e. $\chi(E) \neq \chi(F) \chi(X)$? Remark: is $X$ is assumed to be simply-connected, then the Euler number is multiplicative (absolutely standard). Likewise, if
$X$ is a finite CW complex and $F$ is of finite homological type (less standard, but a not so hard exercise). So any counterexample would have to be of infinite type. The above fibration $BSL_2
(Z) \to BZ/12$ is a counterexample away from the primes $2,3$, but do you know one that does the job in all characteristics?. Of course, the ordinary Euler number is the wrong concept here.
I am looking forward for your answers.
EDIT: so far, I have gotten great answers, but mostly for the specific questions I asked. My intention was to create a larger list of counterexamples. So, feel free to mention your favorite strange
spaces and maps.
at.algebraic-topology examples
2 For #1, what about a suspension of a torus versus $S^2 \vee S^2 \vee S^3$? For #3, it's pretty easy to find examples where $X$ is not a finite CW complex (say, an infinite product of $Y \times
Z$). – John Palmieri Feb 14 '11 at 0:24
5 This should probably be community wiki. – Andy Putman Feb 14 '11 at 1:09
2 For #1, what about a $\mathbf{Z}/4$ Moore space versus a $\mathbf{Z}/8$ Moore space? – Sam Isaacson Feb 14 '11 at 1:52
2 For #3: $X$ could be empty. Also, slightly less silly remark, without the finiteness assumption there is the Eilenberg swindle: Let $X$ be the product of infinitely many copies of $Y$ and let $Z$
be a point. – Tom Goodwillie Feb 14 '11 at 3:46
1 @John: that could be right, Hatcher is using maps $S^{n+m} \to S^n$ which are in the image of the J-homomorphism. If choosen correctlx, these cannot be detected by Steenrod operations. – Johannes
Ebert Feb 14 '11 at 10:15
show 6 more comments
6 Answers
active oldest votes
This is a great question. Here are two of my favorite counterexamples:
1. Rector proved in 1971 that there are uncountably many complexes $X$ (distinct in the homotopy category) such that $\Omega X \simeq S^3$.
up vote 17 2. It's possible to construct "ghost" maps $f:X\to Y$ that are zero on $\pi_\ast$, but nonetheless essential (e.g., maps of Eilenberg-Mac Lane spaces representing cohomology operations).
down vote Even wilder, there are "phantom" maps $f:X\to Y$ so that if $K$ is any finite complex and $i:K\to X$ a map, $fi \simeq \ast$, but $f \not\simeq \ast$. Dan Christensen has written
extensively about phantom maps in the stable homotopy category; I think the first example of a phantom map is due to Adams and Walker.
i guess $if$ should be $fi$. – HenrikRüping Feb 14 '11 at 9:52
And these phantoms are then zero on homology, cohomology and homotopy. – Johannes Ebert Feb 14 '11 at 10:23
2 Regarding ghosts: for each $n$, there are maps $f$ of finite complexes such that $\Sigma ^{2k} f$ induces $0$ on $\pi_*$ for $2k\leq n$. – Jeff Strom Feb 14 '11 at 15:51
5 ad 1: it should be noted that this is really a global phenomenon. I think, at each prime the delooping is unique. – Lennart Meier Feb 14 '11 at 17:00
1 @Henrik, thanks for the correction. @Lennart, you're right; $p$-locally, the only delooping of $S^3$ is $\mathbf{H}\mathrm{P}^\infty$. – Sam Isaacson Feb 14 '11 at 18:24
show 1 more comment
Poicare's dodecahedral sphere is a counterexample to the original homological version of Poincare conjecture.
up vote 10 down vote
add comment
Problem #3 is about noncancellation phenomena. Peter Hilton (often with G. Mislin and/or J. Roitberg) wrote about ten papers on the subject. There are plenty of examples, generally
related to localization.
Probably a good start:
up vote 9 down
vote Hilton, P. Non-cancellation phenomena in topology. Topics in topology (Proc. Colloq., Keszthely, 1972), pp. 405–416. Colloq. Math. Soc. Janos Bolyai, Vol. 8, North-Holland,
Amsterdam, 1974.
add comment
For (2), a sufficient condition is that the fundamental group $\pi_1(Y)$ be finitely generated. We prove this by showing that the map $X \to Y$ is an isomorphism on homotopy groups.
First, we recall that the set of unbased homotopy classes of maps $[S^n;Z]$ is always isomorphic to the quotient $\pi_n(Z)/\pi_1(Z)$ of the set of based maps by the action of the
fundamental group. The identity element $e \in \pi_n(Z)$ is always a singleton orbit. Therefore, taking $T = S^n$ we find that the bijection $[S^n;X] \to [S^n;Y]$ implies that the kernel of
the map $\pi_n(X) \to \pi_n(Y)$ is trivial. Therefore, the map is automatically injective on homotopy groups.
Even further, this allows us to reduce to showing that the map $\pi_1(X) \to \pi_1(Y)$ is surjective, as follows. For any $y \in \pi_n(Y)$, the conjugacy class $[y]$ is in the image, so
there exists an element $x \in \pi_n(X)$ whose image is ${}^g y$ for some $g \in \pi_1(Y)$. Surjectivity on $\pi_1$ would imply that we can lift $g^{-1}$ to a class in $h \in \pi_1(X)$, and
up vote 9 then the image of ${}^h x$ is $y$.
down vote
In order to prove surjectivity, we recall a similar result about maps from wedges of circles. A based map $\vee^k S^1 \to Z$ determines an ordered k-tuple of elements of $\pi_1 Z$, and the
set of unbased maps $[\vee^k S^1;Z]$ is the set of simultaneous conjugacy classes of such k-tuples.
If $\pi_1(Y)$ is finitely generated, construct a based map $\vee^k S^1 \to Y$ corresponding to a k-tuple $(y_1, \ldots, y_k)$ of elements generating the group. Since the map $[\vee^k S^1;
X] \to [\vee^k S^1; Y]$ is a bijection, the simultaneous conjugacy class is in the image; i.e. there exists an element $g \in \pi_1(Y)$ such that the k-tuple $(gy_1 g^{-1}, gy_2 g^{-1}, \
ldots, gy_k g^{-1})$ is in the image. However, this is another set of generators for $\pi_1(Y)$, and so the map $\pi_1(X) \to \pi_1(Y)$ is surjective, as desired.
1 Great! So, by the first two paragraphs, $\pi_1$-surjectivity is sufficient. This is also guaranteed if $\pi_1 (Y)$ is abelian. – Johannes Ebert Feb 14 '11 at 17:59
Yes, that's right. So counterexamples must fall into the "infinitely generated nonabelian fundamental group" category, which is a little more pathological than "non-simply-connected". –
Tyler Lawson Feb 14 '11 at 20:22
Also, it appears that I was lazy and assumed that X and Y are path-connected throughout, which is a case that you can reduce to using $\pi_0$. – Tyler Lawson Feb 14 '11 at 20:22
add comment
For #1, an example is given by two Moore spaces $M({\mathbb Z}/p^2,k)$ and $M({\mathbb Z}/p^3,k)$; the only cohomology is in degree $k$ and $k+1$ in characteristic $p$, and the ring and
up vote 8 Steenrod module structures are trivial. This works with $p^2$ and $p^3$ replaced by any two powers of $p$ that are not $p$ itself (since for $p$ you get a Bockstein in the cohomology).
down vote
The Bockstein is nonzero on $M(Z/p,k)$; that is why you insist on powers larger than one, right? – Johannes Ebert Feb 14 '11 at 11:40
1 Yes; these spaces can be distinguished by higher cohomology operations. You can produce stable counterexamples to #1 quite generally by looking at the cofibers of maps $S^n \to $S^0$
with Adams filtration greater than 1. – Sam Isaacson Feb 14 '11 at 18:28
add comment
What follows is merely a reference to the excellent answer and comment by Karol Szumiło in this mathoverflow question asked by Mike Shulman. There, Karol provides arguments and bibliographic
sources which prove the failure of Brown representability in the homotopy category of unpointed spaces, and in the homotopy category of not necessarily connected pointed spaces. Please upvote
Karol's comment and answer, together with the question. For convenience, I review below the main points of that discussion.$\newcommand{\Ho}{\mathrm{Ho}} \newcommand{\op}[1]{#1^{\mathrm{op}}}
In the article "Splitting homotopy idempotents II", Peter Freyd and Alex Heller construct a very special homotopy idempotent $Bf:BF\to BF$, i.e. $Bf$ is an idempotent in $\Ho$, the homotopy
category of spaces with the homotopy type of CW-complexes. Here, $F$ denotes Thompson's group, and $BF$ is its classifying space. Importantly, the homotopy idempotent $Bf:BF\to BF$ does not
split, i.e. does not admit a retract in the homotopy category. This example was also constructed independently by Dydak in his 1977 article "A simple proof that pointed connected FANR-spaces
are regular fundamental retracts of ANR's".
up vote The idempotent $Bf:BF\to BF$ provides a retract $R$ of the representable functor $[-,BF]:\op{\Ho}\to\Set$, since idempotents do split in the category $\Set$ of sets. Then $R$ is necessarily
5 down half-exact, i.e. preserves small products and weak pullbacks; these are the conditions in Brown's representability theorem. However, $R$ is not representable since a representing object for
vote $R$ would be a retract for $Bf:BF\to BF$ in $\Ho$. We can also apply the same argument to the pointed map $(Bf)_+ : (BF)_+ \to (BF)_+$ obtained by adding disjoint basepoints, giving a
half-exact, non-representable functor $\op{(\Ho_\ast)}\to\Set$ on the pointed homotopy category.
More examples of the failure of Brown representability are described by Alex Heller in the article "On the representability of homotopy functors", which provides some more fascinating
insights into the phenomenon of Brown representability. In particular, that article gives a functor $N:\op{\Ho}\to\Set$ which is half-exact, but is not even a retract of any representable
functor. The functor $N$ is defined for each space $X$ by $$ N(X)=\prod_{[x]\in\pi_0(X)} S\bigl(\pi_1(X,x)\bigr) $$ where $S(G)$ is the set of normal subgroups of $G$, for each group $G$.
Observe that the choice of $x\in X$ representing $[x]\in\pi_0 X$ is inconsequential. For a homotopy class of maps $[f]:X\to Y$ in $\Ho$, the function $N([f])$ is given by taking preimages of
normal subgroups by $f_\ast:\pi_1(X,x)\to\pi_1(Y,f(x))$. Then $N$ is not a retract of a representable functor because normal subgroups of $\pi_1(X,x)$ can have arbitrarily large index if one
varies the space $X$.
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology examples or ask your own question.
|
{"url":"http://mathoverflow.net/questions/55365/counterexamples-in-algebraic-topology?answertab=votes","timestamp":"2014-04-19T22:45:06Z","content_type":null,"content_length":"99888","record_id":"<urn:uuid:fec75218-1774-4c4f-9289-2c3a8f2c33d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hyperbolic Inverse Equation
January 22nd 2011, 01:44 AM
Hyperbolic Inverse Equation
Hi, I'm trying to work out the following:
Find non-trivial values of A and B for which u=Asech^2xBx is a solution of (d^2u/dx^2)-u+3u^2.
Any help would be really appreciated!
January 22nd 2011, 02:02 AM
That's not an equation. Where does the equals sign go?
January 22nd 2011, 02:20 AM
January 22nd 2011, 02:36 AM
Prove It
If $\displaystyle u = AB\,x\,\textrm{sech}^2\,{x}$ evaluate $\displaystyle \frac{d^2u}{dx^2}$.
Then substitute this all into $\displaystyle \frac{d^2u}{dx^2} - u + 3u^2 = 0$.
You should be able to solve for $\displaystyle A,B$.
January 22nd 2011, 02:38 AM
I would also point out that you might not be able to uniquely solve for A and B. But you just need to find nonzero values that work. Hence, if you find an equation for A in terms of B, just pick
a value for B and determine the corresponding A. You're not asked to find unique nontrivial values, just values.
January 22nd 2011, 02:48 AM
Sorry again, I made a stupid mistake when entering the question. u=Asech^2(Bx). Would I just follow the same method as you outlined?
January 22nd 2011, 03:04 AM
Prove It
January 23rd 2011, 02:02 PM
Thank you so much for all your help so far but I'm still a little stuck.
So, f(x)=Asech^2(Bx), then f'(x)=-2ABsech^2(Bx)tanh(Bx), so f''(x)=4AB^2sech^2(Bx)tanh^2(Bx)-2AB^2sech^4(Bx)
And 3u^2=3A^2sech^4(Bx)
So the whole equation is 4AB^2sech^2(Bx)tanh^2(Bx)-2AB^2sech^4(Bx)-Asech^2(Bx)+3A^2sech^4(Bx)
But I'm stuck on where to go, I've tried various relationships but can't get anything to work. Any more help would be extremely appreciated!
January 23rd 2011, 02:23 PM
Nevermind, I looked at it and realised I could take out Asech^2(Bx) as a common factor, then use 4B^2tanh^2(Bx)=1-4B^2sech^2(Bx) which allowed me to take out a further sech^2(Bx) common factor to
get: Asech^4(Bx)(-6B^2+3A) and so -6b^2+3A=0 so A=2, B=1 is a solution. Thanks for your help.
|
{"url":"http://mathhelpforum.com/calculus/169013-hyperbolic-inverse-equation-print.html","timestamp":"2014-04-20T07:15:15Z","content_type":null,"content_length":"10802","record_id":"<urn:uuid:6682fbb7-3d07-4f91-a0d8-f7855b0c5a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arnold Ross Lecture by Curtis McMullen and
Arnold Ross Lecture by Curtis McMullen and Who Wants To Be A Mathematician at the Museum of Science, Boston
The 2002 Arnold Ross Lecture was sponsored by the American Mathematical Society and the Museum of Science, Boston, on April 11. This was the thirteenth in a series of lectures for talented high
school mathematics students.
Curtis McMullen, Harvard mathematics professor and 1998 Fields Medalist, spoke on "From Triangles to Infinity." McMullen motivated the talk by asking the audience what path a lion should take to
capture a human, if both are in an enclosed ring. A little later in the talk, he asked students in the audience to assemble polyhedra using interlocking triangles, given the constraint that a fixed
number of triangles have to meet at each vertex. As the title of the talk suggests, there were many different areas of mathematics touched on by McMullen, including: Fermat's Last Theorem, Zeno's
Paradoxes, hyperbolic and spherical geometry, the harmonic series, and tiling. Near the end of his talk, McMullen showed a path that a human could take to elude the lion and used results about
infinite series to demonstrate the path's effectiveness. The teachers and students who filled the Boston Museum of Science auditorium thoroughly enjoyed the subject of the talk and the manner in
which it was delivered. Many students sought out McMullen after his talk to ask questions, and some even asked for his autograph.
After a break for refreshments, the AMS Public Awareness Office ran the Who Wants To Be A Mathematician game for 10 Boston-area high school students who pre-qualified to play by taking a short test
that had been sent to mathematics teachers a couple of weeks earlier. In this version of the game, the contestants were chosen by lottery before each round: Michelle Sonia (Notre Dame Academy),
Andrew Brandt (Swampscott High School), Kevin Hausherr (North Middlesex Township), and Kristen Grauer-Gray (Framingham High School) sat in the "hot seat" opposite the entertaining host Mike Breen and
were cheered by students and teachers from more than 18 schools. No one won the $2,000 grand prize, but Kristen Grauer-Gray succeeded in making it up to the last and hardest of 15 questions covering
algebra, geometry, trigonometry, logic and history of mathematics.
The other students who pre-qualified to play the game were Mike Bottom (Belmont High School), Mark Ferreira (Old Rochester Regional), Hankun Huang (Quincy High School), Sean Maran (Roxbury Latin),
Israel Molina (Chelsea High School), and Julie Proulx (Natick High School). All the contestants received a bag of mathematical gifts. Prizes awarded included AMS t-shirts and The College Mathematics
Journal published by the Mathematical Association of America. Other prize donors were The Museum of Science, Boston, Waterloo Maple software company, the Mathematical Association of America, John
Wiley & Sons, publisher, Texas Instruments, and the AMS. A student in the audience won a Maple Student Edition donated by Waterloo Maple in the raffle.
The AMS thanks speaker Curtis McMullen; Robert Devaney, Chair of the Arnold Ross Lecture Committee; Lynn Baum, High School Programs Manager, and staff at the Museum of Science; and Robin Aguiar, AMS
Conference Coordinator; and all the students and teachers who traveled from near and far for making this such a special and successful celebration of challenging mathematics.
The Committee for Arnold Ross Lectures is: Arthur T. Benjamin, Harvey Mudd College, Robert Devaney, Chair, Boston University, Deborah Tepper Haimo (Past Chair), University of California, San Diego
Victoria A. Powers, Emory University, and Judy L. Walker, University of Nebraska.
The 2003 Arnold Ross Lecture will take place at the Museum of Science and Industry in Chicago on October 28.
|
{"url":"http://ams.org/programs/students/wwtbam/arl-wwtbam2002","timestamp":"2014-04-21T00:50:52Z","content_type":null,"content_length":"12827","record_id":"<urn:uuid:c8c8807e-1a07-49ca-b1d5-ff85f37f9a75>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematica Slows Down with Repeated Evaluation
I'm trying to analyze a fairly large (order 10^3) collection of PDB files to look for cation-pi interactions for a class. This requires me to parse each PDB file into a table which gives the position
of each atom in the file. To do this, I've written the following code:
Timing[Open[AllFiles[[2]]]; (*AllFiles is a list of the filenames in the directory. I intend to replace the 2 with an iteration index when the code is working*)
Lines = Import[AllFiles[[2]], "Lines"];
FullTable = {};
LineStream = StringToStream[Lines[[j]]];
QATOM = Read[LineStream, Word];
If[QATOM == "ATOM", (*This condition looks for lines that actually describe atoms, instead of other information*)
ThisLine =
Read[LineStream, {Number, Word, Word, Word,
Number, {Number, Number, Number}}];
If[Or[StringLength[ThisLine[[3]]] == 3, StringTake[ThisLine[[3]], 1] == "A" (*This condition eliminates duplicate listings*)],
FullTable = Append[FullTable, ThisLine]]
, {j, 1, Length[Lines]}];
The code does what it's supposed to, but it slows down significantly each time I run it. The first run takes less than .2 seconds, but by the fifth run, it's already above 25 seconds to parse the
same file. Quitting the kernel session solves the speed problem, but of course this deletes all my data. CleanSlate, ClearAll, and adjusting $HistoryLength all had no effect. I haven't come across a
solution on this forum yet, so I would appreciate any suggestions.
|
{"url":"http://www.physicsforums.com/showthread.php?p=3776338","timestamp":"2014-04-21T04:37:45Z","content_type":null,"content_length":"31858","record_id":"<urn:uuid:6be00eac-b608-4383-a75e-2d2c15485b08>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Plugin for displaying LaTeX equations using MathJax. (Discontinued: MathJax plugin recommended instead.)
Similar to asciimath, asciimathml, format, latex, masciimath, math, math2, mathjax, mathpublish, mimetex, svgedit
This plugin, even though it should still work, has been abandoned. It is recommended to use the excellent plugin mathjax instead.
Install this plugin in the plugin manager and use LaTeX code right away!
For your convenience, the plugin is available in two different archive formats.
LaTeX-code is beautifully rendered by the client's browser using a JavaScript library.
Once this plugin is installed, you can embed LaTeX code into your wiki pages.
Each of the following will cause jsmath to display the formula inline:
Each of the following will display an equation in block format, on its own line:
On the DokuWiki Configuration Settings page (available in the admin menu) you can
The jsmath plugin for DokuWiki does not render the LaTeX-code itself. Instead it uses a JavaScript backend that renders the code in the client's browser. Two different JavaScript backends are
currently supported:
The default behaviour of the jsmath plugin is to use the MathJax CDN at http://cdn.mathjax.org/mathjax/latest, but you can also download and use a local copy. You only need to install one of those
libraries. If unsure, choose MathJax.
To get the cutting edge version of MathJax, get the most recent version from the developer's page. In short:
Now move the mathjax directory to your webspace. You can test your installation by going to URL/mathjax/test/
If you want to contribute to this plugin, go to the project page of this plugin or contact Holger.
I've just switched from jsplugin backend to MathJax one: it's awesome ! This plugin together MathJax makes publishing math in dokuwiki a real pleasure. And it is so easy to install both. Thank you
very much ! Now that MathJax is in version 1.0.1 I think you should recommend it over jsmath now.
Answer: Please report details to the issue tracker. – Holger.
How to set the JsMath backend_url in the plugin setting with the absolute/relative PATH on my server?
If I set it with current IP, it doesn't work when the IP changed.
Answer: Try dyndns.org. I'm not sure whether relative paths work. – Holger.
Thanks for the excellent plugin! I needed the AMSmath and AMSsymbols extensions so I manually added them to install_js.php as follows.
document.write('<script type="text/javascript" src="<?php echo($backend_url); ?>/MathJax.js">MathJax.Hub.Config({extensions: ["jsMath2jax.js","TeX/AMSmath.js","TeX/AMSsymbols.js"], jax: ["input/TeX",
"output/HTML-CSS"] });</script>');
It would be great if there was an option on the configuration page to list extensions that should automatically be loaded. Did I miss it? Thanks.
Anyone got an idea or workaround for how to label formulas and refer to or even link to them from the text?
It's not pretty but I use a combination of bookmark and wrap to label equations. It's tedious because they don't autonumber. For example:
where \(\delta_{kl}\) is the Kronecker delta function. Hence [[#Eq. 3.3]] reduces to <div right><BOOKMARK:eq_34>(Eq. 3.4)</div> \[ \text{Covar}[a_j,a_k] = C_{jk} \] and we find that \([C]\) is the
covariance matrix. The variance of a single parameter \(a_j\) is simply defined as <div right><BOOKMARK:eq_35>(Eq. 3.5)</div> \[ \text{Var}[a_j]=\text{Covar}[a_j,a_j]=C_{jj}. \]
The Mathjax people work on adding support for \label and \ref capabilities. https://github.com/mathjax/MathJax/issues/71 – Holger.
Some webspaces set an inode limit for the users (maximum number of files and folders) and, if you unzip the MathJax package directly on the server, it take about 36.000 inodes; this number is really
high for a lot of free webspaces, then to resolve the issue you can use the cdn network kindly offered by MathJax, by putting the following url in the field “plugin»jsmath»backend_url”:
|
{"url":"https://www.dokuwiki.org/plugin:jsmath","timestamp":"2014-04-16T22:17:23Z","content_type":null,"content_length":"38431","record_id":"<urn:uuid:1b2b9e2d-4287-453a-a418-9cc9af2a174f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
linear transformation ad standard basis
Let T: R^3-->R^3 be a linear transformation defined by T(x, y, z)=(2x+y, x+2z, x+y+z). Find the matrix A of T relative to the standard basis of R^3.
I know what the standard matrix is of R^3 I'm just confused by the 'A of T'. Does that mean that I take the coefficient matrix of T combined with the standard basis of R^3 and row reduce it as if
I was finding change-of-coordinate vectors?
Thank you to anyone who can set me on the right path.
|
{"url":"http://mathhelpforum.com/advanced-algebra/109753-linear-transformation-ad-standard-basis.html","timestamp":"2014-04-17T04:54:55Z","content_type":null,"content_length":"40644","record_id":"<urn:uuid:32d8a7d5-ec1e-46db-9e2f-369c11b4f211>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Categorical observed variables and growth curve modeling
Categorical observed variables and gr...
Scott C. Roesch posted on Monday, May 16, 2005 - 12:22 pm
We have run a latent growth curve model with covariates using 5 binary observed variables. The model is fairly straight-forward in that we are specifying an intercept factor and a linear growth slope
factor. After the unconditional model was fit, we added gender as a covariate to predict both the intercept and slope factors. This model converged and makes sense. Now we removed gender as a
covariate and add a binary predictor representing ethnicity (0,1). This model does not converge in the 1000 iterations that MPlus used. This ethnicity variable has roughly an equal split between
Caucasian and minority levels of this binary covariate. I was surprised that the model did not converge in light of how the previous analysis with gender converged nicely. Could this just be a case
of having to submit better starting values for the relation between the ethnicity covariate and the intercept and slope factors, respectively.
Linda K. Muthen posted on Monday, May 16, 2005 - 1:33 pm
Starting values are usually not needed so I doubt that this is the case here. Is the ethnicity variable also binary? It sounds like it may not be. If there are more than two categories of ethnicity,
you need to create a set of dummy variables to represent the categories. You need k-1 dummy variables to represent k categories. If you can't solve the problem, you should send your input/output,
data, and license number to support@statmodel.com.
Back to top
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=23&page=654","timestamp":"2014-04-20T05:46:17Z","content_type":null,"content_length":"18221","record_id":"<urn:uuid:f3136138-3316-418c-b15f-b50ac5e2fc12>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What are the Different Disciplines of Mathematics?
@kentuckycat - That's funny, because I am the complete opposite. I love doing math, but I'm not very good at it. It sort of depends on the problems, though.
When I was in high school, I took calculus and was very good at it. I was even at the point where I could do a lot of problems in my head. Once I got to college, though, a lot of my math classes were
in statistics.
There is something inherent about statistics that I don't get. I guess it is just the way my brain works. The way experiments are set up and what analyses they need always makes me have to stop and
|
{"url":"http://www.wisegeek.org/what-are-the-different-disciplines-of-mathematics.htm","timestamp":"2014-04-20T01:03:13Z","content_type":null,"content_length":"82572","record_id":"<urn:uuid:a53906d0-a8b9-4374-8140-3e593e1649a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Great Neck Geometry Tutor
Find a Great Neck Geometry Tutor
...I think I'm pretty personable and down to earth, but also knowledgeable and structured. I believe that everybody learns at his or her own pace and with his or her own style and that one's
instructor needs to be cognizant of this. Although I have taken several upper level engineering courses, I ...
16 Subjects: including geometry, reading, English, calculus
...I currently volunteer as a chess instructor at PS 39 in Brooklyn. I excel at opening, middle, and end-game study, and during my lessons I use a mix of students' games, chess puzzles, online
videos and computer analysis to keep students engaged. Some quick facts about me: -Graduated Yale College...
13 Subjects: including geometry, Spanish, English, algebra 2
...I also would run a homework help program with the kids to further explain the material and many of them started improving their homework and test grades. I myself aced the class while I was in
middle school. I am currently tutoring a student in precalculus and her understanding has increased incredibly.
9 Subjects: including geometry, algebra 1, algebra 2, SAT math
...I can help improve test scores in the Regents, particularly the Math A, Math B, Integrated Algebra, Algebra 2/Trigonometry, Geometry, and Sequential Mathematics exams. For example, I helped a
high school student in a New York school get prepared for the exam after only a few sessions and pass to...
52 Subjects: including geometry, English, chemistry, physics
...Many times students are asked to memorize and apply formulas, and this is not only difficult but prompting for kids to ask "when will I use this in life?" And the truth is that they will never
be asked to recite a memorized formula. The purpose of math is to engage in logical thinking. Througho...
5 Subjects: including geometry, algebra 1, algebra 2, precalculus
|
{"url":"http://www.purplemath.com/great_neck_ny_geometry_tutors.php","timestamp":"2014-04-21T02:28:23Z","content_type":null,"content_length":"24187","record_id":"<urn:uuid:5a9ca57f-9a13-4194-a552-c7bd27954ff9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SailNet Community - View Single Post - Interesting Sailboats
Paulo -
In a recent post, you gave two different definitions of prismatic coefficient - one was not correct.
This one is incorrect - this one is the definition of block coefficient.
Prismatic Coefficient is a mathematical measurement of the relative shape of the bow and stern of the boat. It displays the ratio of the underwater volume of the hull relative to a rectangular block.
And this is the correct one:
We express the "full hull" property by the prismatic coefficient, which is the ratio of volume displaced to the product of waterline length and maximum cross-sectional area.
|
{"url":"http://www.sailnet.com/forums/788307-post1749.html","timestamp":"2014-04-20T01:28:46Z","content_type":null,"content_length":"33624","record_id":"<urn:uuid:58226f2a-15d9-4d6c-a885-4a3e4c4a02b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Carrier and Symbol Timing
Recovering symbol and carrier timing
in digital QAM.
Needless to say proper recovery of symbol and carrier timing in QAM is vital. Without it you won't be able to receive a thing.
There are many ways to accomplish proper symbol and carrier timing. in my first implementation of QAM I avoided altogether the recovery of the symbol timing and concentrated only on carrier recovery.
Once I had attained proper carrier timing the blind complex equalizer would correct for not knowing the symbol timing by over sampling the signal by two times. This meant all I had to do was
associate the symbol timing frequency with the carrier frequency. This was all very well but it was kind of hard to obtain proper carrier frequency recovery without first having proper symbol timing
recovery. this was due to the fact that if you don't have proper symbol timing the constellation will tend to oscillate between showing you nice points to showing you a whole lot of meaningless means
(because the symbol timing frequency of the receiver does not quite match the symbol timing frequency of the transmitter), sort of a Catch-22 problem. So in later implementations I separated the
association between carrier frequency and symbol frequency.
symbol timing recovery:
It took me awhile to figure out which timing to go after first, the carrier, or the symbol. With a bit of thinking I believe symbol timing is a far better thing to go after first. To get an idea of
what's going on look at what comes out of one of the RX RRC Fir Filters. figure 2 shows an example of what could come out in a small period of time. figure three shows what happens when you trace the
output of one of the RX RRC Fir Filters over the same graph over and over again in time. figure 3 is called an eye diagram.
All these constellations, timing and eye diagrams made more sense when I thought about it as a long rectangle spinning in space. when you look in one direction you see the constellation, when you
look at the rectangle on a side view you see the eye diagram. with proper carrier frequency recovery you see the rectangle stops spinning, and symbol recovery tells you how to cut up the rectangle.
well that's just how I think about it anyway, I know it sounds a little silly but we all need ways of visualizing these things.
Although I say the algorithm I used to obtain symbol timing is a modified Gardener algorithm I do wonder if it's so different I could call it the Jonti algorithm, patent pending :-). but honestly it
is very different, using a gardener algorithm with QAM64 yielded very poor results indeed, whilst my algorithm was very quick.
The data being sent in figure 2 is 1,-1,1,-1,1 and so on. so the best time to sample (sample timing) is at the peak and again in the trough of the wave. like Gardiner I over sample the RX RRC filters
by two times. Figure 4 shows the three possible symbols timing situations, sampling to early, too late or just right. Gardenner's algorithm says E[g]=y[2](y[3]-y[1]). if E[g ]is negative then you are
sampling to early, if positive you are sampling too late and if zero you are sampling at the right time.
for gardener to work properly, binary phase shift keying (BPSK is a constellation of only two points) is about all that it does well. for it to know if it is sampling to early or too late it must get
a transition from positive to negative or vice versa, and the magnitude of the 2 points must be equal. QAM has many more points than two with all sorts of magnitudes, so I had to think of something
from figure 4 as transitions tend to be somewhat "sine" like, this means the height of point 2 should be roughly equal to the average of point 1 and 3. from this I created my error function E[J]=0.5
(y[3]+y[1])-y[2]. My error function says if it's negative you are sampling to early and if it's positive you are sampling to late. the advantage mine is it does not need a transition from positive to
negative or vice versa, and the number of constellation points is not a big factor. all it needs is a symbol transition. we have two RX RRC FIR filters so what does it look like if we use both of
them? figure 5 shows an example of late timing. using the same idea as in figure 4 my formula for using both RX RRC filters becomes E[J]=Real( 0.5(z[3]+z[1])-z[2 ])+Imaginary( 0.5(z[3]+z[1])-z[2 ]).
Here the z points are complex numbers, the real part is equal to the output of one of the RRC filters and the imaginary part is equal to the output of the other RRC filter. figure 6 shows this with
the over sampling that is also needed for my algorithm.
I say that at this point the carrier frequency and phase need not be recovered yet. For my algorithm to work all I need is that the receivers carrier frequency be relatively close to the
transmitters. clearly my algorithm is carrier phase invariant, as a change in carrier phase would just rotate all three points in figure 5 around the point (0,0) not changing E[J]. in addition as a
frequency offset is just a changing phase with respect time all three points in figure 5 will tend to move together leaving their relative positions unchanged and hence not affecting E[J ]much, as
long as the receiver's carrier frequency is relatively close to the transmitters.
E[J] directly controls symbol timer clock moving the clock in what direction that minimizes E[J]. if the constellation has been acquired then directed decisions can be made on points z[1] and z[3]
this produces wonderful results, but it does require the carrier frequency and the volume of a constellation to be acquired first. well that's it for symbol timing recovery. One incredibly simple
carrier frequency recovery:
now we have symbol timing, plotting the complex numbers that come out on a graph will show you a constellation that will be rotating due to the frequency difference between the receivers carrier
frequency and the transmitter's carrier frequency, but at leased they look like dots moving around rather than a whole lot of mess.
initially my recovery of carrier frequency was obtained by taking the complex numbers that come out and putting each one to the fourth power. Because they are complex numbers this causes all the
angles of the points to be multiplied by four times. figure 7 shows QAM16 before the points have been raised to the fourth power whilst figure 8 shows what happens to these same points after being
raised to the fourth power.
as you can see if you were to add up all the points in figure 8 you would get a point that sits somewhere to the left of the y-axis on the x-axis. this can be used to determine what rotation the
constellation is presently at, and then you can adjust the carrier frequency of the receiver so that the average point raised the fourth power is somewhere on the x-axis to the left of the y-axis. it
works but it is a little tricky to implement as you need a lot of points before taking an average, and the graph in figure 8 moves at four times the angular velocity as the graph in figure 7 so
slowing it down can be tricky.
in later implementations of QAM I used a very simple carrier frequency recovery scheme. Initially I was a bit surprised that it worked so well. all it is, is a directed decision algorithm. when it
receives a point, it finds the closest constellation point to the received point and slightly adjusts the phase of the carrier frequency to move the received point closer to the constellation point,
it also adjusts the amplitude of the received signal to move to receive point closer to the constellation point. figure 9 shows this.
as can be seen from figure 9 the algorithm will reduce the amplitude by a bit and change the phase of the receiver's carrier frequency to move the green point anticlockwise. at this point you might
ask we have not changed the receiver's carrier frequency at all. this is true, so what I did here was to add up all the little shifts in the carrier frequencies phase over a period of thousands of
received symbols, calculated the average phase shift per symbol and from this worked out the frequency offset from between the receiver and transmitter, then adjusted the receiver's carrier frequency
to match. at this point the receiver does not have to adjust the phase of the carrier frequency very much at all.
the advantage of this algorithm is it's incredibly simple and it works well. it automatically corrects for any amplitude variation in the transmission line. Technically it is possible for this
algorithm to place the received points at 45° to where they should be, however this is very unlikely in real life as any slight error will cause the points to shift to where they should be. but even
if they were to do so it's not a big deal, it is only a phase offset, a complex equalizer with over sampling can correct for this and rotate them the right way without a problem.
Jonti Olds
6 April 2008
|
{"url":"http://homepages.paradise.net.nz/peterfr2/QAMtiming/QAMtiming.htm","timestamp":"2014-04-16T10:17:30Z","content_type":null,"content_length":"12909","record_id":"<urn:uuid:18ed34b5-8ed2-42ce-a03d-e50f504b09b2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Back to home page Last revision: Mar 12, 2014.
Background on my Research Area
I work in the subject of non-commutative algebra. Broadly speaking, this subject is about solving systems of "polynomial" equations where the solutions are functions (typically differential operators
or matrices, etc). This means that we cannot assume that the variables in the equations commute with each other. Such equations arise in the theory of quantum mechanics, statistical mechanics,
physics, etc.
The problem of solving a system of equations in non-commutative algebra may be translated to one involving an algebra over a field, and the representation theory (or module theory) of that algebra.
My research is in the subarea of non-commutative algebraic geometry, which is about using geometric methods to understand the algebra and its representation theory that arise in this way.
The originators of this kind of non-commutative algebraic geometry are Michael Artin, John Tate and Michel Van den Bergh through work they did in the late 1980's. The subject has grown through the
work of these people and of S. Paul Smith, Toby Stafford, Thierry Levasseur, Lieven Le Bruyn and James Zhang to name a few. New ideas and theories are continually being presented, and the research in
this subject has grown considerably since the late 1980's. My publication list is below. Click on the preceding names to find other publications or go to http://www.math.washington.edu/~smith/
Research/research.html for a more complete list.
Noncommutative Algebraic Geometry, Representation Theory and their Interactions
I am the Director of the UTA research group Noncommutative Algebraic Geometry, Representation Theory and their Interactions, which consists of myself, Dr. Dimitar Grantcharov (co-director) and our
Ph.D. students. Currently, the Ph.D. students in the group are: Justin Ahrens, Thomas Ferguson, Richard Chandler, John Griffis, Andrew Cavaness and Derek Tomlin. The group's focus is the study of
modules (representations) over an algebra studied from the viewpoint of algebraic geometry, and seeing how these 2 topics feed off each other. Many of these ideas are discussed in the AGANT Seminar
organized by myself, and in the local UTA seminar, Representations and Geometry Seminar, organized by D. Grantcharov and co-organized by myself, with schedule available from here.
Brief Biography
I obtained my Mathematics Ph.D. in 1993 from the University of Washington (Mathematics) under the supervision of S. Paul Smith. The University of Washington is in Seattle, WA, U.S.A.
I obtained my Mathematics bachelor degree in 1986 from the University of Warwick (Mathematics), which is in the Midlands in England.
I spent 6 months of my last academic year of my PhD in the Department of Mathematics of the University of Auckland, in Auckland, New Zealand.
After graduating from Warwick, I was a high school teacher in greater London for one academic year, after which I began my Ph.D.
After getting my Ph.D, I worked for 2 years at the University of Southern California (Mathematics) in Los Angeles, CA, U.S.A.; and then for one year at the University of Antwerp in Antwerp, Belgium;
and then for 2 years at the University of Oregon ( Mathematics ) in Eugene, OR, U.S.A. In August 1998, I began working in the Mathematics Department of the University of Texas at Arlington in
Arlington, Texas, where I am now a (full) professor.
For those wishing to use W. Schelter's Affine program......
Having received several questions from many different parts of the world in the past 12 months (Oct 2012 – Oct 2013) regarding Affine, I thought I would post online some (hopefully) helpful comments
about it. Readers should note, however, that I am only a user, not a developer. I believe these comments are accurate as of October 2013.
• The original program (binary?) file from the 1990s can be obtained here. It used to be the case that one could simply download the file, enter the file's name at the command line (in linux) and
it would work (i.e., no fancy installation etc). However, these days, it is rarely compatible with current-day computer architecture. Depending on the PC, it should run okay using Red Hat
Enterprise Linux 5.* , but not 6.* . There also appears to be a repository of the 1990s Affine located at ftp://ftp.ma.utexas.edu/pub/maxima/affine.tgz .
• A new version of Affine is available as a loadable package in the free Maxima program. Maxima can be obtained from http://maxima.sourceforge.net/ . If one is using Fedora Linux, it is available
via Fedora Linux' distribution site, and can be downloaded and installed using the ``yum'' command. Similarly for some other flavors of Linux such as Arch Linux using the appropriate commands. If
the user wishes to use Affine and the VI editor together, one must download the Clisp version.
• There are some differences between the 1990s version and the new version. The 1990s version uses upper-case characters for the commands, but the new version uses lower case. The new version needs
non-commutative multiplication to be defined; see the example file below for details. The new version of Affine for non-Linux use sometimes requires a ``;'' at the end of a line (even after y or
n answers) to operate correctly (albeit with an error message) in places where the Linux version does not require a ``;''. The new version has problems with exponents; sometimes an expression
that needs to be simplified needs to be multiplied by the user first, before entering into Affine for reduction subject to the defining relations.
• Documentation on Affine can be found here and here. A simple example is available here.
• Another program that does some, but not all, of the activities of Affine is Bergman.
My talk at MSRI Feb 23, 2000: pdf file Video of talk
My talk on regular algebras and graded skew Clifford algebras given at various venues in 2009 & 2010: pdf file. Note the corrigendum below.
My talk on graded skew Clifford algebras given at AMS meeting held at UC Riverside in Nov 2009: pdf file. Note the corrigendum below.
My talk on classifying quadratic regular algebras of global dimension three (quantum planes) using graded skew Clifford algebras given at AMS meeting held at the University of Hawaii in Mar 2012: pdf
My talk at MSRI Jan 25, 2013: video of talk pdf file with pauses pdf file without pauses (2nd file takes up more disk space).
Note the corrigendum below, and the talk written formally in this pdf file.
(Return to contents list)
Publications 4-7 were funded in part by NSF grant DMS-9622765; 8-11 by NSF grant DMS-9996056; 12-13 by NSF grant DMS-0200757, 13-14 by NSF grant DMS-0457022, 15-21 by NSF grant DMS-0900239 and 18-19
by DMS-1302050.
Cool Quadrics
copied from
Back to home page
|
{"url":"http://www.uta.edu/math/vancliff/R/","timestamp":"2014-04-17T04:00:19Z","content_type":null,"content_length":"30683","record_id":"<urn:uuid:4a186b74-6b9f-48c3-88ca-019e42b512f0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
find the area of the parallelogram spanned by vectors given, pls alo verify answer
6i+3j and 6i-3j - Homework Help - eNotes.com
find the area of the parallelogram spanned by vectors given, pls alo verify answer
6i+3j and 6i-3j
Where i,j, and k are mutually orthogonal triod.
area of parallelogram spanned by vectors u and v is magnitude of vector u x v .
Thus magnitude of u x v is
`=36` sq.unit
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes
|
{"url":"http://www.enotes.com/homework-help/find-area-parallelogram-spanned-by-vectors-given-443539","timestamp":"2014-04-18T09:33:01Z","content_type":null,"content_length":"25705","record_id":"<urn:uuid:9a769e62-88d6-4e56-8aa7-a369da83882d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
American Mathematical Society
Bulletin Notices
AMS Sectional Meeting Program by Day
Current as of Sunday, October 14, 2007 00:27:36
Program | Deadlines | Registration/Housing/Etc. | Special Event or Lecture | Inquiries: meet@ams.org
2007 Fall Eastern Section Meeting
New Brunswick, NJ, October 6-7, 2007 (Saturday - Sunday)
Meeting #1031
Associate secretaries:
Lesley M Sibner
, AMS
Sunday October 7, 2007
• Sunday October 7, 2007, 8:00 a.m.-12:00 p.m.
Exhibits and Book Sale
Room 105, Scott Hall
• Sunday October 7, 2007, 8:00 a.m.-12:00 p.m.
Meetings Registration
• Sunday October 7, 2007, 8:00 a.m.-10:40 a.m.
Special Session on Noncommutative Geometry and Arithmetic Geometry, III
Room 119, Scott Hall
Caterina Consani, Johns Hopkins University kc@math.jhu.edu
Li Guo, Rutgers University liguo@andromeda.rutgers.edu
• Sunday October 7, 2007, 8:00 a.m.-10:50 a.m.
Special Session on Commutative Algebra, III
Room 101, Scott Hall
Jooyoun Hong, University of California Riverside hongucr@gmail.com
Wolmer V. Vasconcelos, Rutgers University vasconce@math.rutgers.edu
• Sunday October 7, 2007, 8:00 a.m.-10:50 a.m.
Special Session on Toric Varieties, III
Room 104, Scott Hall
Milena S. Hering, Institute for Mathematics and Its Applications hering@ima.umn.edu
Diane Maclagan, Rutgers University maclagan@math.rutgers.edu
• Sunday October 7, 2007, 8:00 a.m.-10:50 a.m.
Special Session on Partial Differential Equations of Mathematical Physics, III (in Honor of Shelly Goldstein's 60th Birthday)
Room 204, Scott Hall
Sagun Chanillo, Rutgers University
Michael K.-H. Kiessling, Rutgers University miki@math.rutgers.edu
Avy Soffer, Rutgers University
• Sunday October 7, 2007, 8:20 a.m.-10:50 a.m.
Special Session on Geometric Analysis of Complex Laplacians, III
Room 205, Scott Hall
Siqi Fu, Rutgers University, Camden sfu@camden.rutgers.edu
Xiaojun Huang, Rutgers University, New Brunswick huangx@math.rutgers.edu
Howard J. Jacobowitz, Rutgers University, Camden jacobowi@camden.rutgers.edu
• Sunday October 7, 2007, 9:00 a.m.-10:20 a.m.
Special Session on Probability and Combinatorics, III
Room 206, Scott Hall
Jeffry N. Kahn, Rutgers University jkahn@math.rutgers.edu
Van Ha Vu, Rutgers University vanvu@math.rutgers.edu
• Sunday October 7, 2007, 9:00 a.m.-10:50 a.m.
Special Session on Set Theory of the Continuum, III
Room 214, Scott Hall
Simon R. Thomas, Rutgers University sthomas@math.rutgers.edu
• Sunday October 7, 2007, 9:00 a.m.-10:50 a.m.
Special Session on Invariants of Lie Group Actions and Their Quotients, III
Room 116, Scott Hall
Tara S. Holm, Cornell University tsh@math.cornell.edu
Rebecca F. Goldin, George Mason University rgoldin@math.gmu.edu
• Sunday October 7, 2007, 9:00 a.m.-10:40 a.m.
Session for Contributed Papers, II
Room 201, Scott Hall
• Sunday October 7, 2007, 11:10 a.m.-12:00 p.m.
Invited Address
Random metrics and geometries in two dimensions
Room 123, Scott Hall
Scott Sheffield*, Courant Institute and Institute for Advanced Study
• Sunday October 7, 2007, 2:00 p.m.-2:50 p.m.
Invited Address
The topology of particle collisions.
Room 123, Scott Hall
Satyan L. Devadoss*, Williams College
• Sunday October 7, 2007, 3:00 p.m.-5:40 p.m.
Special Session on Noncommutative Geometry and Arithmetic Geometry, IV
Room 119, Scott Hall
Caterina Consani, Johns Hopkins University kc@math.jhu.edu
Li Guo, Rutgers University liguo@andromeda.rutgers.edu
• Sunday October 7, 2007, 3:00 p.m.-4:20 p.m.
Special Session on Probability and Combinatorics, IV
Room 206, Scott Hall
Jeffry N. Kahn, Rutgers University jkahn@math.rutgers.edu
Van Ha Vu, Rutgers University vanvu@math.rutgers.edu
• Sunday October 7, 2007, 3:00 p.m.-6:20 p.m.
Special Session on Mathematical and Physical Problems in the Foundations of Quantum Mechanics (in honor of Shelly Goldstein's 60th birthday)
Room 203, Scott Hall
Roderich Tumulka, Rutgers University tumulka@math.rutgers.edu
Detlef D\"urr, M\"unchen University duerr@mathematik.uni-muenchen.de
Nino Zanghi, University of Genova zanghi@ge.infn.it
• Sunday October 7, 2007, 3:00 p.m.-4:20 p.m.
Special Session on Toric Varieties, IV
Room 104, Scott Hall
Milena S. Hering, Institute for Mathematics and Its Applications hering@ima.umn.edu
Diane Maclagan, Rutgers University maclagan@math.rutgers.edu
• Sunday October 7, 2007, 3:00 p.m.-5:30 p.m.
Special Session on Geometric Analysis of Complex Laplacians, IV
Room 205, Scott Hall
Siqi Fu, Rutgers University, Camden sfu@camden.rutgers.edu
Xiaojun Huang, Rutgers University, New Brunswick huangx@math.rutgers.edu
Howard J. Jacobowitz, Rutgers University, Camden jacobowi@camden.rutgers.edu
• Sunday October 7, 2007, 3:00 p.m.-4:20 p.m.
Special Session on Invariants of Lie Group Actions and Their Quotients, IV
Room 116, Scott Hall
Tara S. Holm, Cornell University tsh@math.cornell.edu
Rebecca F. Goldin, George Mason University rgoldin@math.gmu.edu
Inquiries: meet@ams.org
|
{"url":"http://ams.org/meetings/sectional/2151_program_sunday.html","timestamp":"2014-04-18T15:52:06Z","content_type":null,"content_length":"63398","record_id":"<urn:uuid:031ae557-f921-4b69-a145-242c7c3c8174>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Carmel, CA Math Tutor
Find a Carmel, CA Math Tutor
...Practice makes for continuous improvement and, the more you improve your math skills the more you enjoy it. The same goes for any other subject. Good, better, best, never let it rest until your
good is better and your better is...well, better.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I earned a score of 5 on both the AP Biology and AP Chemistry exams when I was in high school, so I would also be a valuable tutor for a student enrolled in either course. I would be
particularly helpful with students looking to increase their scores on AP Biology essays. Other related science ...
10 Subjects: including algebra 1, algebra 2, biology, chemistry
Hello & thank you for viewing my profile! If you are looking for a patient, experienced, efficient, and fun tutor, then I am an excellent choice for you! I excel in making learning fun, and always
make my students feel comfortable and empowered during tutoring and coaching sessions.
31 Subjects: including algebra 1, geometry, reading, English
...I've passed the AP tests for English literature and composition (4), European history (5), US history (4), macroeconomics (4), Calculus AB (5), computer science A (5), environmental science
(5), and received an 800 for the SAT math II subject test and a 700 for the US history subject test. Combi...
11 Subjects: including algebra 1, biology, chemistry, geometry
...My graduate work focused on how people understand what they hear. This colors my teaching practice. I can teach mathematics, foundational work and then on to higher levels.
13 Subjects: including SAT math, discrete math, logic, probability
|
{"url":"http://www.purplemath.com/Carmel_CA_Math_tutors.php","timestamp":"2014-04-20T23:46:51Z","content_type":null,"content_length":"23574","record_id":"<urn:uuid:7b1e60e5-7993-46e0-a172-dc5a3b7b277c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Categorified algebra and quantum mechanics
Jeffrey Morton
The process some call `categorification' consists of interpreting set-theoretic structures in mathematics as derived from category-theoretic structures. Examples include the interpretation of N as
the Burnside rig of the category of finite sets with product and coproduct, and of N[x] in terms the category of combinatorial species. This has interesting applications to quantum mechanics, and in
particular the quantum harmonic oscillator, via Joyal's `combinatorial species', and a new generalization called `stuff types' described by Baez and Dolan, which are a special case of Kelly's
`clubs'. Operators between stuff types be represented as rudimentary Feynman diagrams for the oscillator. In quantum mechanics, we want to represent states in an algebra over the complex numbers, and
also want our Feynman diagrams to carry more structure than these `stuff operators' can do, and these turn out to be closely related. We will describe a categorification of the quantum harmonic
oscillator in which the group of `phases' - that is, U(1), the circle group - plays a special role. We describe a general notion of `M-stuff types' for any monoid M, and see that the case M = U(1)
provides an interpretation of time evolution in the combinatorial setting, as well as recovering the usual Feynman rules for the quantum harmonic oscillator.
2000 MSC: 81P05, 05A15, 18D10, 18B40
Theory and Applications of Categories, Vol. 16, 2006, No. 29, pp 785-854.
TAC Home
|
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/TAC/volumes/16/29/16-29abs.html","timestamp":"2014-04-21T12:09:47Z","content_type":null,"content_length":"2991","record_id":"<urn:uuid:ee3104cf-087a-448a-98a2-38ecec22c423>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: November 2007 [00287]
[Date Index] [Thread Index] [Author Index]
WeightingFunction and Showing Graphs with wieghts as Edge Lable
• To: mathgroup at smc.vnet.net
• Subject: [mg83111] WeightingFunction and Showing Graphs with wieghts as Edge Lable
• From: mumat <csarami at gmail.com>
• Date: Sun, 11 Nov 2007 03:00:17 -0500 (EST)
I couldn't find this in documentation Mathematica 6.
First Question: I construct a graph with edge weights selected from a
given list say, {2,3,5,7}. In the documentaion says:
SetEdgeWeights[g, e, w]=assigns the weights in the weight list w to
the edges in edge list \e. Here's one example
In[1]=t = Wheel[8];k = SetEdgeWeights[t, WeightingFunction ->
RandomInteger, WeightRange -> {0, 4}]
In[2]=Edges[k, EdgeWeight]
But this is not clearly what I want. I want to weights to be selected
from {2,3,5,7}.
Question 2: In the other part of the documentation says:
WeightingFunction can take values Random, RandomInteger, Euclidean, or
LNorm[n] for nonnegative n, or any pure function that takes two
arguments, each argument having the form {Integer,{Number,Number}}.
How can I construct one weightingFunction? Can anyone make one?
I don't know why the first argument of the first argument show be an
integer! Let's say the vertices of a given graph are labled {1,...n}.
How can we set weights of the edges to be the average of the vertices
incident on the edges!
Question 3. How can I graph a weighted Graph show the edgeweights show
up on the graph? for instance for this graph?
In[1]=t = Wheel[8];k = SetEdgeWeights[t, WeightingFunction ->
RandomInteger, WeightRange -> {0, 4}]
Any help would be greatly appreciated.
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00287.html","timestamp":"2014-04-21T04:49:33Z","content_type":null,"content_length":"26372","record_id":"<urn:uuid:2cabe662-0067-4047-8b96-c2d3510de2e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5.1 Mechanical Models
Next: 5.1.1 Elastic Model Up: 5. Modeling of Oxidation Previous: 5. Modeling of Oxidation
There are several possibilities for stress analysis coupled with oxidation processes. Unfortunately, it is a complex problem to model the evolving stress simultaneously with a moving interface
boundary, which is normally handled by readaptation algorithms of the numerical simulation mesh. The usual procedure is to adopt an incremental approach solving the boundary displacements and mesh
adaptation for a very small time step of the oxidation taking the existing stresses into account. Incremental stress analysis is then performed using the oxide interface displacements as boundary
condition for the strain. Thereby, the stress analysis is cumulative: the total stress is the superposition of each incremental stress field depending on the detailed mechanical model.
Next: 5.1.1 Elastic Model Up: 5. Modeling of Oxidation Previous: 5. Modeling of Oxidation Mustafa Radi
|
{"url":"http://www.iue.tuwien.ac.at/diss/radi/diss/node70.html","timestamp":"2014-04-16T16:45:26Z","content_type":null,"content_length":"4019","record_id":"<urn:uuid:07bf8971-3274-45fc-a25f-2fa400481883>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A note on insensitivity in stochastic networks
A note on insensitivity in stochastic networks
ABSTRACT We give a simple and direct treatment of insensitivity in stochastic networks which is quite general and which provides probabilistic insight into the phenomenon. In the case of multi-class
networks, the results generalise those of Bonald and Proutiere (2002, 2003).
[show abstract] [hide abstract]
ABSTRACT: We address a conjecture introduced by Massouli\'e (2007), concerning the large deviations of the stationary measure of bandwidth-sharing networks functioning under the Proportional fair
allocation. For Markovian networks, we prove that Proportional fair and an associated reversible allocation are geometrically ergodic and have the same large deviations characteristics using
Lyapunov functions and martingale arguments. For monotone networks, we give a more direct proof of the same result relying on stochastic comparisons that hold for general service requirement
distribution. These results comfort the intuition that Proportional fairness is 'close' to allocations of service being insensitive to the service time requirement.
[show abstract] [hide abstract]
ABSTRACT: This chapter reviews the theory of loss networks, in which calls of various types are accepted for service provided that this can commence immediately; otherwise they are rejected. An
accepted call remains in the network for some holding time, which is generally independent of the state of the network, and throughout this time requires capacity simultaneously from various
network resources. Both equilibrium and dynamical behaviour are studied; for the former a new approach is taken to the theory of uncontrolled loss networks, while the latter is the key to the
understanding of stability issues in such networks.
11/2010: pages 701-728;
[show abstract] [hide abstract]
ABSTRACT: Polynomial convergence rates in total variation are established in Erlang--Sevastyanov's type problem with an infinite number of servers and a general distribution of service under
assumptions on the intensity of serving.
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
0 Downloads
Available from
|
{"url":"http://www.researchgate.net/publication/2133132_A_note_on_insensitivity_in_stochastic_networks","timestamp":"2014-04-16T08:37:59Z","content_type":null,"content_length":"177742","record_id":"<urn:uuid:ac7643a9-9bbb-4216-a157-c42faa82be00>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Levels of Routines
Next: Data Types and Precision Up: Structure of LAPACK95 Previous: Structure of LAPACK95   Contents   Index
The subroutines are classified as follows:
• driver routines, each of which solves a complete problem, for example solving a system of linear equations, or computing the eigenvalues of a real symmetric matrix. Users are recommended to use a
driver routine if there is one that meets their requirements. The driver routines are listed in Section 2.2 and documented in Part II.
• computational routines, each of which performs a distinct computational task, for example an LU factorization, or the reduction of a real symmetric matrix to tridiagonal form. Each driver routine
calls a sequence of computational routines. Users (especially software developers) may need to call computational routines directly to perform tasks, or sequences of tasks, that cannot
conveniently be performed by the driver routines. The computational routines are documented in Part III.
Susan Blackford 2001-08-19
|
{"url":"http://www.netlib.org/lapack95/lug95/node25.html","timestamp":"2014-04-20T18:26:25Z","content_type":null,"content_length":"3810","record_id":"<urn:uuid:78e1b480-c803-49d9-a381-d4f3a4e11478>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
if any one can help me out
November 11th 2009, 08:58 AM #1
Jul 2009
if any one can help me out
I'm doing this lesson for pre algebra and I've gotten this question wrong 3 times already. the question is. Use six unit multipliers to convert 24,000,000 cubic inches to cubic yards? If anyone
can help get me aheading in the right direction I'd gladly appreciate it.
Hi kuttluse,
I don't know why you need 6 unit mutipliers. The simplest way is:
$24000000 \; in^3 \times \frac{(1 \; yd)^3}{(36 \; in)^3}\approx 514.4 \; yd^3$
You could muddy the waters by
$24000000 \; in^3 \times \frac{1 \; ft}{12 \; in}\times \frac{1 \;ft}{12 \; in}\times \frac{1\; ft}{12 \; in}\times \frac{1 \; yd}{3 \;ft} \times \frac{1 \; yd}{3 \;ft} \times \frac{1 \; yd}{3 \;
ft} \approx 514.4 \; yd^3$
thanks I did that one time b4 but I didn't get that as my answer I actionly tried that way first thanks
November 11th 2009, 09:25 AM #2
A riddle wrapped in an enigma
Jan 2008
Big Stone Gap, Virginia
November 11th 2009, 09:40 AM #3
Jul 2009
|
{"url":"http://mathhelpforum.com/algebra/113883-if-any-one-can-help-me-out.html","timestamp":"2014-04-21T08:11:24Z","content_type":null,"content_length":"32334","record_id":"<urn:uuid:101d804a-3a5a-4a77-856d-3ba77da6ea00>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
|
General Maths
What’s new in general maths?
General maths being a track for those pursuing business, humanities and paramedical disciplines introduces taxation, depreciation, annuities and loan repayments. Correlation is also an addition to
help with probability studies when dealing with data sets. Spherical geometry builds on students’ earlier knowledge of geometry of solids. Relative frequency is also a new introduction for
statistical studies. Lastly, there is modelling linear and non-linear relationships.
Main content of general maths
General maths is divided into two parts: a preliminary course and an HSC track. The latter course expands and extends topics learnt in the first one. Financial maths which is the core of general
maths includes calculation involving earning and investing money as well as taxation. This is extended in the HSC course with calculations involving credit, borrowing, annuities, loan repayments and
depreciations. Data analyses cover topics from data collection and interpretation to statistical concepts of normal distribution curves and correlation. Measurements of area and volume and their
applications will be learnt alongside trigonometry of 2-dimensional figures and right-angled triangles as well as geometry of spheres. Probability studies will be introduced with the concept of
chance events and then relative frequency. Algebraic skills and techniques will involve modelling linear and non-linear relationships.
Common challenges for students in general Maths
The high concepts of geometry and trigonometry are common challenges for students studying general maths. Other common difficulties for students include correlation under data analysis and modelling
non-linear relationships.
Many students in general maths have trouble with understanding and interpreting questions. Questions in general maths tend to be more “wordy” and challenging to interpret in a meaningful way.
Main outcomes for general maths
At the end of the course, students are expected to be able to undertake financial calculations related to investment, spending, borrowing, repayment, taxation, depreciation and annuities. They should
also show proficiency in data collection and interpretation as well as applying the basic principles of probability. Other areas where students are expected to be well tutored are basic algebra
(including linear and non-linear relationships), trigonometry (measurements of area and volume) and geometry (2-dimensional figures and spheres).
Most important concepts to ensure your child understands in year General Maths
It is important that your child understand the topics under financial mathematics, data analysis and probability. An understanding of basic algebra is crucial as it is the bedrock of the other
aspects of this year’s maths.
If hiring a tutor, what study habits and content to focus on in general maths?
Since this year is largely about financial mathematics, a real world application of concepts is the way to learn general maths. Get a tutor who can provide ready examples of real financial problems
to use in calculations. Exposure to such exercises will ground your child in the understanding of financial terms. Try to encourage seeking more applications in your child’s immediate environment.
Main challenges involved in tutoring a general maths student
Many general maths students believe they will only learn financial mathematics this year. Getting a general maths student to learn the geometry and trigonometry as well may be challenging. This is
largely because most of these other topics have been studied to some detail in previous years. Each portion of the course will be examined, so it is important to focus on all the parts.
Some good ideas on how to help your child in general maths
Provide adequate real-world problems requiring the concepts learnt in financial mathematics. You can even let your child use the family’s finances as exercises to help him/her get a good grasp of the
course. Hiring a tutor will, however, afford him/her more time and more opportunities to learn and apply their knowledge.
It should also be noted that general maths is far less technical than other maths courses. As such many questions focus on the student providing written explanations of answers. It is important that
your child is comfortable with “math language”. A great way to help with this is to encourage verbal discussion of content and asking the student to explain how they arrived at an answer.
What they say about our tutoring:
“Yes very happy with Katie, she is lovely and very thorough, Jessica is doing well”
“It is all going well. Alon is very dedicated and my daughter is really enjoying the sessions. His personality is perfect for her.”
“Tanmoy is fantastic. Delaney is connecting with his teaching technique. Although it is early stage we are very glad Tanmoy is available to help Delaney.”
“Thanks so much for the follow up!! Daniel is fantastic and Tom is really enjoying his time with him!! He said he understood more after 2 hours than his first semester of Chemistry!! Really happy
and so is Tom!”
“Just wanted to let you know that i am very impressed with Angus. If you could have all your staff as good as hime you would be close to retirement. He is extremely thorough, knowledgeable and he
is very engaging. Hopefully Amy can take advantage of his knowledge. Thanks”
“Thank you for following up the tuition, actually we are very pleased with Joyce. Matilda says Joyce is very well organised ,uses proper techniques and tries to covers all problem areas. So far
all is good & we are looking forward to a good outcome.”
“We came to Ezymaths because our son has problems understanding some of the year 8 topics. When I contacted the company, the customer service staff was very understanding and spent time listening
to my problems with my son's attitude towards Maths subject. He answered all my questions and removed any doubt I had about the success of a tutoring service to my son in our own home. My son was
very happy with the first meeting with Amy and the second meeting he started to warm up. The 3rd meeting, he was already asking questions and was able to do his homework independently because he
understood the concept. He said that the tutor's explanation about the topic was very clear and he now have better understanding of the concepts. Since he is in the top class, he found that they
move very fast to the next topic in class and appreciated the one on one lesson from his tutor and the convenience that he doesn't have to go anywhere. The company is very easy to deal with. No
pressure nor hassle from the company. I would recommend parents to talk to Ezymaths if their children have problems in Math or if the children want to improve their grades. I will definitely call
Ezymaths again when my son encounters difficulty with certain topics in class.”
|
{"url":"http://www.ezymathtutoring.com.au/high-school-maths/general-maths/","timestamp":"2014-04-18T00:57:44Z","content_type":null,"content_length":"44945","record_id":"<urn:uuid:30683b6a-0dac-4b1c-9c85-1e0e95d4c4b4>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Math Images
Hello World
About Me
I am rising sophomore at Swarthmore College and I am working on the Math Images Project in the summer of 2012. I am going to be a math major, and maybe a physics major or minor. I'll get to math in
the next paragraph. My other interests include strategy games, long-distance running, and music. I listen to classical, heavy/progressive metal, electronic/ambient, and like five alt rock bands. U2
is the best ever! And I try to play piano. Other than that, I can relate every situation to a Seinfeld episode, and my other favorite shows are Arrested Development and Star Trek: DS9.
In terms of math, I suppose I've always loved math. Earlier in my youth, when I was a young wippersnapper, I wanted to do physics. But college has taught me the amazingness of pure math. As a result
of a linear algebra class, I am really interested in algebra and related topics. I really want to make a page helping people with proving trig identities. I think understanding how to manipulate
those expressions algebraically would be beneficial to a student's mathematical thinking as a whole. But I really love all math and the articles that I write will be about varied topics.
Dont' look at this thing here. It's top secret!
Bad Jokes that I like
• Why did the scarecrow get a promotion? Because he was outstanding in his field.
• Why did the chicken go to the seafood store? For the halibut.
• I knew this alcohol-drinking chicken who liked magic. He could do some ginny hen tricks.
• Who wants to go to the orchestra? Let's have a Chopins. (Curtesy of my dad).
• Does February march? No but April may.
This is a work in progress.
My Pages
Contact Me
You can always email me at jschug1@swarthmore.edu. I'll probably get back to you the same day.
Jorin Jorin 10:06, 15 May 2012 (EDT) I hope this sentence is blue. Yay it worked!
Final Message
Peace, Love, and Understanding,
- Jorin
|
{"url":"http://mathforum.org/mathimages/index.php/User:Jschug1","timestamp":"2014-04-17T07:41:09Z","content_type":null,"content_length":"20747","record_id":"<urn:uuid:ab43779e-637d-4fd2-acd4-f9e6b72e615f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Holbrook, MA ACT Tutor
Find a Holbrook, MA ACT Tutor
...I taught math at an elite private school in St Louis, Mo for four years, including the entire Algebra sequence. I taught math at an elite private school in St Louis, Mo for four years,
including Geometry and Trig. I have 4 years of full-time class room experience as a math teacher.
29 Subjects: including ACT Math, reading, calculus, geometry
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including ACT Math, English, reading, chemistry
I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students
understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well.
16 Subjects: including ACT Math, French, elementary math, algebra 1
I'm a semi-retired lawyer, with years of trial experience. As you might expect from a lawyer, I teach primarily by the Socratic method, leading students to find the right answers themselves. I
have excelled in every standardized test I have taken: SAT 786M/740V, LSAT 794, National Merit Finalist.
20 Subjects: including ACT Math, reading, English, writing
...Teaching has been a long-time passion of mine and I hope to continue teaching throughout my life time. I have tutored SAT Math, SAT II and various science subjects since 2006. During my time
at MIT I was a teaching assistant for four semesters in Biology, Genetics, Biochemistry and Cell Biology classes, and I continuously served as a tutor in the department.
28 Subjects: including ACT Math, chemistry, calculus, algebra 1
Related Holbrook, MA Tutors
Holbrook, MA Accounting Tutors
Holbrook, MA ACT Tutors
Holbrook, MA Algebra Tutors
Holbrook, MA Algebra 2 Tutors
Holbrook, MA Calculus Tutors
Holbrook, MA Geometry Tutors
Holbrook, MA Math Tutors
Holbrook, MA Prealgebra Tutors
Holbrook, MA Precalculus Tutors
Holbrook, MA SAT Tutors
Holbrook, MA SAT Math Tutors
Holbrook, MA Science Tutors
Holbrook, MA Statistics Tutors
Holbrook, MA Trigonometry Tutors
Nearby Cities With ACT Tutor
Abington, MA ACT Tutors
Avon, MA ACT Tutors
Braintree ACT Tutors
Braintree Phantom, MA ACT Tutors
East Weymouth ACT Tutors
Hanover, MA ACT Tutors
Hanson, MA ACT Tutors
Hull, MA ACT Tutors
North Weymouth ACT Tutors
Norwell ACT Tutors
Randolph, MA ACT Tutors
Rockland, MA ACT Tutors
South Weymouth ACT Tutors
Stoughton, MA ACT Tutors
Whitman, MA ACT Tutors
|
{"url":"http://www.purplemath.com/Holbrook_MA_ACT_tutors.php","timestamp":"2014-04-20T04:27:21Z","content_type":null,"content_length":"23788","record_id":"<urn:uuid:155200e1-c791-4553-9bc8-0d8caaffb088>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SparkNotes: Data Analysis, Statistics & Probability: Probability
Probability deals with the randomness of the universe, and this inexact science vainly tries to apply numerical understanding to events that appear to happen in a noncontiguous, nonlinear
What the . . . ?
Oh wait, probability on the SAT. That’s easy. Probability on the SAT is expressed as a fraction. The numerator is the number of times a certain event might occur, and the denominator is the total
number of events that might occur. Here is the probability formula you need to know:
Let’s say you’re playing a game of High Card with your strange friend Rasputin. High Card is a silly game in which the two of you randomly pick cards from a 52-card deck, and the highest card wins
(now that’s a Friday night of fun!). You are going to choose from the deck first, and you really want an ace. What’s the probability that you will draw an ace?
And what’s the probability that you’ll end up with a non-ace?
Here’s a more complicated example involving a more exciting Saturday night with toy cars:
9. A certain bag contains five purple toy cars, eight orange toy cars, and seven yellow toy cars. If all the toy cars are placed into a bag, what is the probability that the first car picked will be
There are five ways to pick a purple toy car because there are five different purple toy cars. That’s the top number. To find the bottom number, you need to add up all the different cars, regardless
of color:
5 + 8 + 7 = 20
There are a total of 20 different toy cars to choose from in all. Therefore, the probability of picking a purple toy car is:
That’s choice A.
When calculating probability, always be sure to divide by the total number of chances. It would have been tempting to leave out the number of purple toy cars in the denominator, because we used them
in the numerator, which would have resulted in , or an incorrect answer waiting for you among the distractors.
Backward Probability
Backward probability is the basic probability item asked in reverse order. Instead of finding the probability, you are looking for the total, or real number the probability fraction represents. For
6. If there are five green toy cars in a bag, and the probability of choosing a green toy car is
(A) 5
(B) 10
(C) 15
(D) 20
(E) 25
All you need to do is set up the proper proportional equation. If 1 of 5 toy cars is green and there is a total of 5 green toy cars, then:
Now we cross multiply the equation to come up with x = 25, choice E.
Probability an Event Won’t Occur
Certain SAT items will ask about the probability of an event not occurring. No sweat. Figure out the probability of the event occurring, and subtract that number from 1:
Probability an event will not occur = 1 – probability an event will occur.
If there is a chance of rain, then the chance of no rain is , or .
Multiple or Unrelated Probabilities
More difficult probability items deal with multiple related and unrelated events. For these items, the probability of both events occurring is the product of the outcomes of each event:
A good example of two unrelated events would be: (1) getting heads on a coin toss and (2) rolling a 5 with a six-sided die. Just find the probability of each individual event, and multiply them
together. There’s a 1 in 2 chance of getting heads on a coin and a 1 in 6 chance of rolling a 5. Combining the two gets you:
The same principle can be applied to find the probability of a series of events. Let’s keep it simple and stick to toy cars:
. 14Don has a bag of toy cars divided into 8 blue, 9 green, 4 yellow, and 14 red. Sue bets Don a dollar that she can draw 3 green toy cars in a row. What is the probability that Sue will win the bet?
To find the probability of Sue drawing 3 green cars in a row, we need to find the probability of each individual event. The probability of her drawing a green car on her first try is , because there
are 9 green cars and 35 total cars (8 blue + 9 green + 4 yellow + 14 red = 35 total).
The probability of her drawing a green car on the second try is slightly different. With one car already removed from the bag, there are only 34 left, and assuming her first try was successful, there
are only 8 green cars left. So the probability of drawing a green car the second time is . Follow the same procedure for the probability of choosing a green car on the third try, and we come up with
. So the odds of Sue drawing three green cars in a row is:
That’s choice C.
The important point to remember here is that when solving for the probability of a series of events, always assume that each prior event was successful, just as we did in the example above.
Geometric Probability
Another difficult concept the SAT might present is geometric probability. The same basic concept behind probability still applies, but instead of dealing with total outcomes and particular outcomes,
you will be dealing with total area and particular area of a geometric figure. There’s nothing fancy here. Just remember this formula and you will be fine:
|
{"url":"http://www.sparknotes.com/testprep/books/newsat/powertactics/datastatsprob/chapter2section3.rhtml","timestamp":"2014-04-19T19:48:53Z","content_type":null,"content_length":"57881","record_id":"<urn:uuid:683d1ae2-1221-4d34-a0dc-055312560906>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1.2.2 Problems with the Standard Model
Next: 1.2.3 Supersymmetry Up: 1.2 Physics Overview Previous: 1.2.1 The Standard Model
Once the CP invariance broken? And many others. Among them, the far most important question is: Why is the electroweak symmetry broken, and why at the scale
The Standard Model cannot answer any of these questions. This is exactly why we believe that there lies a more fundamental physics at a higher energy scale which leads to the unanswered
characteristics of the Standard Model. Then all the parameters and quantum numbers in the Standard Model can be derived from the more fundamental description of Nature, leading to the Standard Model
as an effective low-energy theory. In particular, the weak scale itself prediction of the deeper theory. The scale of the fundamental physics can be regarded as a cutoff to the Standard Model. Above
this cutoff scale, the Standard Model ceases to be valid and the new physics takes over.
The mass term of the Higgs field is of the order of the weak scale, whereas the natural scale for the mass term is, however, the cutoff scale of the theory, since the quantum correction to the mass
term is proportional to the cutoff scale squared because of the quadratic divergence. This problem, so-called the naturalness problem, is one of the main obstacles we encounter, when we wish to
construct realistic models of the ``fundamental physics'' beyond the Standard Model. If the cutoff scale of the Standard Model is near the Planck scale, one needs to fine-tune the bare mass term of
the Higgs potential to many orders of magnitude to keep the weak scale very tiny compared to the Planck scale. There are only two known possibilities to solve this problem. One is to assume that the
cutoff scale of the Standard Model lies just above the weak scale and the other is to introduce a new symmetry to eliminate the quadratic divergence: supersymmetry. In the former scenario the Higgs
boson mass tends to be heavy, if any, while in the latter it is expected to be light.
Next: 1.2.3 Supersymmetry Up: 1.2 Physics Overview Previous: 1.2.1 The Standard Model ACFA Linear Collider Working Group
|
{"url":"http://acfahep.kek.jp/acfareport/node9.html","timestamp":"2014-04-17T21:22:44Z","content_type":null,"content_length":"6187","record_id":"<urn:uuid:8208baee-c93c-40d6-b346-36c4cc558847>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 691.52001
Autor: Soifer, Alexander (Engel, Philip L.; Erdös, Paul; Grünbaum, Branko; Rousseau, Cecil)
Title: How does one cut a triangle? With 80 illustr. and introductions by Philip L. Engel, Paul Erdös, Branko Grünbaum and Cecil Rousseau. (In English)
Source: Colorado Springs, CO: Center for Excellence in Mathematical Education. xiii, 140 p. (1990).
Review: This booklet considers and solves problems in dividing triangles into congruent and into similar pieces; it further studies extremal problems on placing points in convex figures. The booklet
is mainly written for students interested in geometry and it is written with much enthusiasm.
Reviewer: J.M.Wills
Classif.: * 52-01 Textbooks (convex and discrete geometry)
52A10 Convex sets in 2 dimensions (including convex curves)
05-01 Textbooks (combinatorics)
52C17 Packing and covering in n dimensions (discrete geometry)
Keywords: convex polygons; tilings; dividing triangles
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
|
{"url":"http://www.emis.de/classics/Erdos/cit/69152001.htm","timestamp":"2014-04-21T09:55:54Z","content_type":null,"content_length":"3610","record_id":"<urn:uuid:0ffeb52c-0b16-4ca7-bcaf-2fc2158d8cd6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
Nakamura, Katsuhiro (ed.) et al., Quantum mechanics and chaos. Proceedings of the international conference on quantum mechanics and chaos, Osaka, Japan, September 19–21, 2006. Kyoto: Progress of
Theoretical Physics. Prog. Theor. Phys. Suppl. 166, 19-36 (2007).
From the text: A brief review of recent developments in the theory of the Riemann zeta-function inspired by ideas and methods of quantum chaos is given.
At the first glance the Riemann zeta-function and quantum chaos are completely disjoint fields. The Riemann zeta-function is a part of pure number theory but quantum chaos is a branch of theoretical
physics devoted to the investigation of non-integrable quantum problems like the hydrogen atom in external fields.
Nevertheless for a long time it was understood that there exist multiple interrelations between these two subjects. In Sections 2 and 3 the Riemann and the Selberg zeta-functions and their trace
formulae are informally compared. From the comparison it appears that in many aspects zeros of the Riemann zeta function resemble eigenvalues of an unknown quantum chaotic Hamiltonian.
One of the principal tools in quantum chaos is the investigation of statistical properties of deterministic energy levels of a given Hamiltonian. In such approach one stresses not precise values of
physical quantities but their statistics by considering them as different realizations of a statistical ensemble. According to the BGS conjecture energy levels of chaotic quantum systems have the
same statistical properties as eigenvalues of standard random matrix ensembles depended only on the exact symmetries. In Section 4 it is argued that is quite natural to conjecture that statistical
properties of the Riemann zeros are the same as of eigenvalues of the Gaussian unitary ensemble of random matrices (GUE). This conjecture is very well confirmed by numerics but only partial rigorous
results are available.
In Section 5 a semiclassical method which permits, in principle, to calculate correlation functions is shortly discussed. The main problem here is to control correlations between periodic orbits with
almost the same lengths.
In Sections 6 and 7 it is demonstrated how the Hardy-Littlewood conjecture about distribution of near-by primes leads to explicit formula for the two-point correlation function of the Riemann zeros.
The resulting formula describes non-universal approach to the GUE result in excellent agreement with numerical results.
In Section 8 it is demonstrated how to calculate non-universal corrections to the nearest-neighbor distribution for the Riemann zeros.
Spectral statistics is not the only interesting statistical characteristics of zeta functions. The mean moments of the Riemann zeta-function along the critical line is another important subject that
attracts wide attention in number theory during a long time.
In Section 9 it is explained how random matrix theory permit Keating and Snaith to propose the breakthrough conjecture about mean moments. This conjecture now is widely accepted and is generalized
for different zeta and $L$-functions and different quantities as well.
81Q50 Quantum chaos
11M50 Relations with random matrices
11M06 $\zeta \left(s\right)$ and $L\left(s,\chi \right)$
81-02 Research monographs (quantum theory)
|
{"url":"http://zbmath.org/?q=an:1167.81017","timestamp":"2014-04-19T22:13:38Z","content_type":null,"content_length":"24198","record_id":"<urn:uuid:f0afe1c0-d4db-4b6e-84f5-c6f5abf911c0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Should You Plot Your Data? | Science Blogs | WIRED
• By Rhett Allain
• 03.21.13 |
• 10:10 am |
Let’s pick a lab. Maybe it is a lab that looks at masses oscillating on a spring. In this lab, students could put different masses on the end of a spring and let it oscillate up and down.
Theoretically, the period should have the following model.
Typically, students would change the mass on the spring and measure the period of oscillation. By changing the mass several times, they can get a value for the spring constant (or maybe they are
trying to measure π). Here is some sample data that I made up. I tried to add some errors to simulate actual student data.
Actually, I made these number is a google spreadsheet. Here they are if you want them.
And how do you find the spring constant? I always recommend that students make a graph of some type of linear function and find the slope of that line. In this case, they could plot T^2 vs. the mass.
This should be a straight line and the slope of this line should be 4π^2/k. So, you make the graph, you find the slope (maybe this is on graph paper with a best fit line) and then you use that slope
to find k. Simple. Here is a plot of that same data from the google spreadsheet.
I’m not sure how to add a best fit line here, but I know that I can find the slope with the SLOPE function (details here). Using this method with the above data, I get a spring constant of 11.65 N/m.
This is not what the students do. Instead, the students take each mass and period data points and then use that to find k. After they have calculated k for each data pair, they average the values for
k. With this data, you would get 13.63 N/m.
I tell students that this average value method isn’t as good since it treats all data points equally. In the case above, the average data point method gives a value of k closer to the expected value
(I used a value of k = 13.5 N/m plus random noise to generate the values).
Why didn’t my example work? I’m not sure. There is only one thing to do. Blow this sucker out of proportion. Yes. I am going to generate 1000 different sets of fake data and then use both methods to
get a value for k. We will see what happens then.
How will I do this 1000 times? No, 10,000 times. I will use python of course. Actually, I think I just figured out what the above problem might be. I used a flat random number generator to get
variation in the values. This isn’t very realistic – well, maybe it realistically represents the numbers students would get. Instead, I will use a normal distribution for the values of the masses and
the periods.
Here are the values of k from both methods for all of these experiments.
And that is the complete opposite of what I expected. I expected that the k values determined from the slope of the least squares fit would give a better value that the k from the all the k’s
calculated from each data point. I don’t have anything to say except that I was wrong. From this, it looks like the slope is NOT better than what the students do. Maybe I can say that by using the
slope to calculate the spring constant, it is less work. Maybe.
I’m not going to give up. Let me try something. Maybe there is something crazy going on since I am squaring the period before I plot it. Maybe my plotting method is better for cases where the
y-intercept isn’t near zero. Let me try something else. Suppose I just make up data that should fit the function:
I will put some error in the y-values and repeat the experiment. So, in one case I will find the slope with a least squares fit. In the other case, I will take each x-y data pair and solve for m like
Then I can average the values of m. Wait. I just found the issue. In this case, I couldn’t solve for m unless I know b. Just from one x-y data pair, you don’t get the y-intercept. Ok, so I am going
back to recommending the graphing method without even doing the experiment. How do you even know the intercept should be zero if you don’t plot the data.
Ah ha! Maybe this is the same reason that the graphical method is off. When I plot T^2 vs. m, I did a normal linear regression. This takes all the data and find the linear function that best fits the
data. That means that the y-intercept doesn’t HAVE to be zero. Instead, the y-intercept is whatever it needs to be in order to get the best fit. For the averaging method, there is assumed to be no
y-intercept (since it isn’t in the equation for the period).
What if I redo the linear fit and force the intercept to be zero? Would this give better results? Here is a sample plot showing both kinds of linear fits.
The first method gives a slope of 2.571 with an intercept of 0.05755 and the method that is forced to go through the origin gives a slope of 2.8954. So, different. Now let’s do this 10,000 times.
It might be hard to see, but the zero intercept graphical method and the averaging data points methods give essentially the same results.
What can we learn from this? First, if you know the function should pass through the origin, maybe you should plot it that way. In Excel, there is an option ot force the fitting equation to go
through the origin. In python, how do you do this? I don’t really know what I am doing here, but I found this snippet to work.
As far as I can tell, the first line takes the array of x-values (the mass in this case) and make it a column array instead of a row. I guess this is needed for the next step. The second line is the
least squares fit with the requirement that the line goes through the point (0,0) where a is the slope. However, it returns as an array. If you want just a number value for the slope, you would use a
[0]. Yes, I have no idea what I am doing – but this works.
The second thing to remember is that if there is indeed a y-intercept in your data, you really either have to know what this intercept should be or you need to make a graph. Either way, I am going to
still tell my students to make a graph. It’s just a good habit.
|
{"url":"http://www.wired.com/2013/03/why-should-you-plot-your-data/","timestamp":"2014-04-20T18:33:47Z","content_type":null,"content_length":"107563","record_id":"<urn:uuid:d1f2bf1b-b79a-49f4-8d0e-8eb3b1345f84>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Post a New Question | Current Questions
college resume
other than that do you think it could improved at all
Tuesday, November 5, 2013 at 2:52pm
college Algebra
Tuesday, November 5, 2013 at 2:49pm
college Algebra
Ln y = -0.35 Exponential form: e^(-0.35) = Y Y = 0.7047
Tuesday, November 5, 2013 at 2:46pm
college Algebra
Ln y = Log 9 = 0.95424 Exponential form: e^(0.95424) = y Y = 2.5967
Tuesday, November 5, 2013 at 2:38pm
college resume
This is one: Psychology Won first place in final Psychology project competition. The winning 11 Project project included writing a 5 pg essay on the cognitive affects of Award genetic algorithm on
intellectual performance, producing a well developed poster board and ...
Tuesday, November 5, 2013 at 2:32pm
college resume
where is it wordy to you
Tuesday, November 5, 2013 at 2:25pm
College Algebra
x/10000 = sin 14°
Tuesday, November 5, 2013 at 2:07pm
college resume
I think it's wordy in places. Capitalization and some punctuation need work, too. I have no experience with electronically submitted resumes, though, so the missing capital letters and punctuation
may not matter.
Tuesday, November 5, 2013 at 1:57pm
college resume
its for a college and will be sent electronically. Is it developed enough?Is there anything that sounds wrong or that needs to be elaborated on?
Tuesday, November 5, 2013 at 1:32pm
College Algebra
A plane takes off from the ground at a 14-degree angle. After flying straight for 10,000 feet, what is the altitude of the plane?
Tuesday, November 5, 2013 at 1:28pm
college resume
Here are many websites that have good ideas in them: http://www.bing.com/search?setmkt=en-US&q=how+to+write+resume Be sure to read through many of these to see what format you want to use, etc. One
thing you NEED to do, however, is not use any abbreviations or acronyms...
Tuesday, November 5, 2013 at 1:27pm
college resume
Is this resume for getting into college? or for finding a job after college graduation? Is it to be in a printable format so that it all fits on one page? Or is it to be acceptable as an electronic
resume? Or both?
Tuesday, November 5, 2013 at 1:25pm
college resume
Is there any way to edit or improve this? Community Service Description Grades(s) Hrs/wk wk/yr BuildOn Raised $2,000 through active fundraising and community service, donated all funds to Malawi,
lived with a host 10 56 4 family for a week to fully ascertain their lifestyle, ...
Tuesday, November 5, 2013 at 12:49pm
college algebra
thank you
Monday, November 4, 2013 at 10:44pm
college algebra
ln y = log 9 e^(ln y) = e^(log 9) y = e^(0.9542) y = 2.5967
Monday, November 4, 2013 at 9:10pm
college algebra
ln y = -0.35 e^(ln y) = e^(-0.35) y = e^(-0.35) y = 0.7047
Monday, November 4, 2013 at 9:09pm
college algebra
Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = log 9
Monday, November 4, 2013 at 8:18pm
college algebra
Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = −0.35
Monday, November 4, 2013 at 8:18pm
Why is it impossible to combine the expression radical 3x + 3^ radical 3x into a single term? Explain. (I'm having a real tough time with this one, can anyone here explain this to me? Thank you!)
Monday, November 4, 2013 at 6:26pm
college Algebra
Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = −0.35
Monday, November 4, 2013 at 5:40pm
college Algebra
Use a calculator to find y to four decimal places, if possible. (If an answer is undefined, enter UNDEFINED.) ln y = log 9
Monday, November 4, 2013 at 5:40pm
Poem: Annabel Lee by Edgar Allen Poe
MONDAY November 4, 2013 SCHOOL SUBJECTS Art Business Computers English Foreign Languages Health Home Economics Math Music Physical Education Science Social Studies GRADE LEVELS Preschool Kindergarten
Elementary School 1st Grade 2nd Grade 3rd Grade 4th Grade 5th Grade 6th ...
Monday, November 4, 2013 at 4:52pm
college Algebra
Reiny is right, this formula can't be right. Raising it to the power of x makes more sense.
Monday, November 4, 2013 at 4:46pm
college Algebra
your formula makes no sense at all according to your formula the life expectancy of this woman would be 78.5(1.001)(50) = 1946 years was it L = 78.5(1.001)^x ?? then L = 78.5(1.001)^50 = 82.5 that
makes more sense
Monday, November 4, 2013 at 4:44pm
college Algebra
Thank you, big help!
Monday, November 4, 2013 at 4:43pm
college algebra
really ?? look at your answer, how can it even make sense? .6 is a bit more than half so you are taking about half of 16(12) = half of 192 or appr 96 I get 115.2
Monday, November 4, 2013 at 4:40pm
college Algebra
Replace x with 50 and then multiply 78.5 x 1.001 x 50 to get your answer.
Monday, November 4, 2013 at 4:39pm
college Algebra
Use a calculator to help solve the problem. The life expectancy of white females can be estimated by using the function = 78.5(1.001)x, where x is the current age. Find the life expectancy of a white
female who is currently 50 years old. Give the answer to the nearest tenth. L...
Monday, November 4, 2013 at 4:32pm
College Algebra
thank you Reiny and Bob
Monday, November 4, 2013 at 4:29pm
college algebra
i did that and it is not working. i get 0.0348
Monday, November 4, 2013 at 4:24pm
College Algebra
amount162/amount164= 100/100 * e^(-.692*60/22)/e^(-.692*60/37) e^-.692*60(1/22-1/37)=.465 check my work.
Monday, November 4, 2013 at 4:18pm
college algebra
Just plug in the given values and evaluate I = 16(.6)(12) = .....
Monday, November 4, 2013 at 4:18pm
College Algebra
Assume we start with 1 unit of each after t = 2 hrs = 120 minutes amount of 162ho = 1(1/2)^(120/22) amount of 164ho = 1(1/2)^(120/37) ratio of the two = 1(1/2)^(120/22) / 1(1/2)^(120/37) = (1/2)^(120
/22 - 120/37) = (1/2)^(900/407) = .2159 = .216 to 3 decimals = 216 : 1000 =27...
Monday, November 4, 2013 at 4:15pm
college algebra
Use a calculator to help solve the problem. The intensity I of light (in lumens) at a distance x meters below the surface is given by I = I0kx, where I0 is the intensity at the surface and k
depends on the clarity of the water. At one location in the Atlantic ...
Monday, November 4, 2013 at 4:13pm
College Algebra
Use a calculator to help solve the problem. One isotope of holmium, 162Ho, has a half life of 22 minutes. The half-life of a second isotope, 164Ho, is 37 minutes. Starting with a sample containing
equal amounts, find the ratio of the amounts of 162Ho to 164Ho after two hours...
Monday, November 4, 2013 at 4:07pm
The business college computing center wants to determine the proportion of business students who have laptop computers. If the proportion exceeds 35%, then the lab will scale back a proposed
enlargement of its facilities. Suppose 200 business students were randomly sampled and...
Monday, November 4, 2013 at 2:35pm
College Physics
a. calculate the moment of inertia for the rod. forces is gravity, and table at one end. b. consider how far the cg has fallen. That loss of pE goes into reotational KE. c. torque about the cg=
I*angular acceleration. acceleation at tip= angluar acceleratio*length
Sunday, November 3, 2013 at 8:56pm
College Physics
A uniform rod of mass M = 437.0 g and length L = 37.1 cm stands vertically on a horizontal table. It is released from rest to fall. A)Which of the following forces are acting on the rod? B)Calculate
the angular speed of the rod as it makes an angle θ = 45° with ...
Sunday, November 3, 2013 at 8:24pm
College Physics
A force, F = (2x + 3y) N, is applied to an object at a point whose position vector with respect to the pivot point is r = (5x + 4y + 2z) m. Calculate the torque (in units on Nm) created by the force
about that pivot point.
Sunday, November 3, 2013 at 5:48pm
which of the following items would a personal loan be a better option than a credit card for a college student. A. Car insurance expense B. Tuition and dorm fees C. Trips home for the holidays D.
Tickets to sporting events
Sunday, November 3, 2013 at 1:11pm
The point of an AP class is to challenge the learner and earn college credit. Some students pass enough AP classes in high school to graduate from college a semester or so early. This saves time and
Saturday, November 2, 2013 at 10:36pm
addendum to my response
When I was a freshman in college all incoming students were given a test in English. Those scoring above some point were moved (we had no choice) to an honors English class. The prof explained that
we scored high enough that we didn't need the traditional course in English...
Saturday, November 2, 2013 at 3:06pm
College algebra
72/(-4) + 3(-4) = -18-12 = -30 5-(-5) = 5+5 = 10 -30/10 = -3
Saturday, November 2, 2013 at 3:03pm
College algebra
-I need help solving this 72/(-4)+3(-4) _________ 5-(-5)
Saturday, November 2, 2013 at 2:56pm
3. A study shows that 72% of graduates from local high school attended college. What fraction of the graduates attended college? A.18/25 B.17/25 C.13/25 D.1/2 E.3/8 A 6. The model below represents
the equation x+4=4x+2 The first box shows one x and has four 1's then shows ...
Saturday, November 2, 2013 at 12:11pm
A question I often ask to students who are incoming to college: Why would anyone ever pay for a course to get an easy grade from an easy teacher. My advice is to pick the most demanding teachers
teaching the most demanding courses.
Saturday, November 2, 2013 at 9:14am
At one time the university where I taught had a honors chemistry class taught in freshman class of college(not high school) and it was open to students who scored above a certain level on a test we
gave the first day of class to everyone enrolled in freshman chemistry. Those ...
Friday, November 1, 2013 at 11:21pm
Medical Billing and Coding
This 19-year-old college student was brought tot he ER and admitted with high fever, stiff neck, chest pain, cough, and nausea. A lumbar puncture was performed, and results were positive for
meningitis. Chest x-ray revealed pneumonia. Sputum clutures grew Pneumococcus. Patient...
Friday, November 1, 2013 at 12:24am
College Algebra
your finding the interest... but your looking for that in the year to pay off her credit card debt it would cost 2,091
Wednesday, October 30, 2013 at 12:56am
Ms sue
I could only access the abstract. http://onlinelibrary.wiley.com/doi/10.1525/nad.2007.10.1.18/abstract It looks like the author suggests looking at our family, friends, and neighborhood from an
anthropologist's point of view. When I took a similar course in college...
Tuesday, October 29, 2013 at 9:35pm
State the null hypothesis, Ho, and the alternative hypothesis, Ha, that would be used to test these claims. < ¡Ü = ¡Ù ¡Ý > (a) There is an increase in the mean difference between posttest and pretest
scores. Ho: ¦Ìd 0 Ha...
Tuesday, October 29, 2013 at 8:45pm
College writing supplement edit
Much better than before. Send it!
Monday, October 28, 2013 at 7:31pm
College writing supplement edit
can someone edit this Connecticut College is easily my prime choice, as I believe that its exemplary academic and career opportunities will not only help me advance as a student, but also as a future
professional English teacher. The college one chooses has a preeminent impact...
Monday, October 28, 2013 at 6:18pm
At Strawberry State College, the ratio of females to males is 2:1. If there are 2400 students, how many are female and how many are male?
Monday, October 28, 2013 at 2:17pm
College Algebra
I think the question is asking what she would pay at the end of the year. I tried $159.93 and it was not correct.
Monday, October 28, 2013 at 1:57pm
College Algebra
monthly rate = .23/12 = .0191666.. Since it is a credit card, I am assuming that you want equal monthly payments let that payment be P P( 1 - 1.01916666..^-12)/.019666.. = 1700 p(10.6296677) = 1700 P
= $159.93
Monday, October 28, 2013 at 12:56pm
A city ballot includes a local initiative that would legalize gambling. The issue is hotly contested, and two groups decide to conduct polls to predict the outcome. The local newspaper finds that 53%
of 1200 randomly selected voters plan to vote "yes," while a ...
Monday, October 28, 2013 at 12:55pm
College Algebra
A bank credit card charges interest at the rate of 23% per year, compounded monthly. If a senior in college charges $1,700 to pay for college expenses, and intends to pay it in one year, what will
she have to pay?
Monday, October 28, 2013 at 9:35am
A city ballot includes a local initiative that would legalize gambling. The issue is hotly contested, and two groups decide to conduct polls to predict the outcome. The local newspaper finds that 53%
of 1200 randomly selected voters plan to vote "yes," while a ...
Sunday, October 27, 2013 at 9:20pm
cYou think that in 15 years it will cost $214,000 to provide your child with a 4-year college education. Will you have enough if you take $75,000 today and invest it for the next 15 years at 5%? d.
If you can earn 5%, how much will you have to save each year if you want to ...
Sunday, October 27, 2013 at 5:40pm
Statistics Math
1. What type of car is more popular among college students, American or foreign? One hundred fifty-nine college students were randomly sampled and each was asked which type of car he or she prefers.
A computer package was used to generate the printout below for the proportion ...
Sunday, October 27, 2013 at 4:45pm
the cost in 15 years is 214,000 to provide a college education. Can I have enough to take $75,000 today and invest it for the next 15 years at 5%?
Sunday, October 27, 2013 at 10:47am
Is AP Psychology and introduction to psychology (college course) the same?
Saturday, October 26, 2013 at 11:11am
College Physics
A velocity selector consists of electric and magnetic fields described by the expressions vector E = E k hat bold and vector B = B j hat bold, with B = 0.0140 T. Find the value of E such that a 670
-eV electron moving along the negative x axis is undeflected. kV/m
Friday, October 25, 2013 at 7:58pm
College Physics
A proton moves perpendicular to a uniform magnetic field B at 1.20 107 m/s and experiences an acceleration of 2.00 1013 m/s2 in the +x direction when its velocity is in the +z direction. Determine
the magnitude and direction of the field.
Friday, October 25, 2013 at 7:52pm
You think that in 15 years it will cost $212,000 to provide your child with a 4 year college education. Will you have enough if you take $70,000 today and invest it for the next 15 years at 5%?
Friday, October 25, 2013 at 6:27pm
A gas containing both hydrogen molecules (each molecule contains two hydrogen atoms) and helium atoms has a temperature of 300 K. How does the average speed of the hydrogen molecules compare to the
helium atoms?
Friday, October 25, 2013 at 3:50pm
College writing supplement edit
(The Holleran Center for Community Action and Public Policy offers student teaching internships and other career preparing programs) Connecticut College is easily my prime choice, as I believe that
its exemplary academic and career opportunities will not only help me advance ...
Friday, October 25, 2013 at 3:18pm
business math
Sam Long anticipates he will need approximately $225,000 in 15 years to cover his 3-year-old daughter s college bills for a 4-year degree.How much would he have to invest today at an interest rate of
8 percent compounded semiannually?
Friday, October 25, 2013 at 10:42am
college physics
My previous question was marked science, but it's specifically a college physics question. Thanks.
Friday, October 25, 2013 at 12:09am
college macroeconomics
Suppose there is a temporary but significant increase in oil prices in an economy with an upward-sloping Short-Run Aggregate Supply (SRAS) curve. If policymakers wish to prevent the equilibrium price
level from changing in response to the oil price increase, should they ...
Thursday, October 24, 2013 at 11:35pm
Personal finance
Josh has decided to take a course at the local community college that could help him get a promotion at work. The course begins at 5pm and goes until 9pm on monday nights. Josh normally works until
5pm each day, but because of the drive time he will leave work at 3pm on class ...
Thursday, October 24, 2013 at 9:57pm
personal finance
which of the following is a credit management decision? a. Purchasing a used car. b. investing your savings in a stock market c. obtaining a student loan to attend college d. putting money into your
retirement account
Thursday, October 24, 2013 at 8:39pm
college algebra
see wolframalpha.com or http://rechneronline.de/function-graphs/
Thursday, October 24, 2013 at 5:02pm
college algebra
Go to wolframalpha. com and type the given.
Thursday, October 24, 2013 at 4:27pm
college algebra
graph the following: 7x + 6y = 42 (-6x+42)\(7)
Thursday, October 24, 2013 at 4:14pm
college algebra
graph the following: (5/4)x-4 (4x+16)/(5)
Thursday, October 24, 2013 at 3:59pm
College Anatomy
Within the nervous system, the (a) are to the ganglia as a tract is to a (b).
Thursday, October 24, 2013 at 3:35pm
college admissions
Why did you post this twice?
Wednesday, October 23, 2013 at 6:39pm
College writing supplement edit
Connecticut College is definitely my first choice, as I believe that its exemplary academic and career opportunities will not only help me develop as a student, but also as a future professional. ...
"as a future professional" what? There is no career simply called &...
Wednesday, October 23, 2013 at 6:38pm
college admissions
I need help editing a college admission writing supplement Connecticut College is definitely my first choice, as I believe that its exemplary academic and career opportunities will not only help me
develop as a student, but also as a future professional. The college you choose...
Wednesday, October 23, 2013 at 5:59pm
College writing supplement edit
I need help editing a college admission writing supplement Connecticut College is definitely my first choice, as I believe that its exemplary academic and career opportunities will not only help me
develop as a student, but also as a future professional. The college you choose...
Wednesday, October 23, 2013 at 4:45pm
Physics (College)
Wednesday, October 23, 2013 at 1:20am
Many factors enter into whether a college accepts a specific student. GPA is one, but personal essays, extracurricular activities and standardized test scores also play large parts. http://
Tuesday, October 22, 2013 at 10:13pm
what year does college look at gpa i got a 3.o for my midterm and if i get 4.0s all other erms of my freshmen year and rest years can i make it to a uc
Tuesday, October 22, 2013 at 10:09pm
Which of those do you think could be cut? I urge you to watch the video I posted above. It was shown in a local college sociology class today. It's only about 6 minutes long, and I think everybody
should see it.
Tuesday, October 22, 2013 at 7:57pm
College Physics
A Major League fastball is thrown at 92.1 mph and with a spin rate of 103 rpm. If the distance between the pitcher's point of release and the catcher's glove is exactly 61.4 ft, how many full turns
does the ball make between release and catch? Neglect any effect of ...
Tuesday, October 22, 2013 at 6:42pm
college physics
A small sphere of mass m and charge +q with zero velocity falls under the influence of gravity (acceleration = g from height h onto an infinite uniformly charged plane with positive charge density
sigma as shown below. What will the velocity of the sphere be when it hits the ...
Tuesday, October 22, 2013 at 1:31pm
college physics
Three negative charges with charge -q occupy the vertices of an equilateral triangle with sides of length L What is the magnitude of the electric field in units of k*q/L^2 at the center of the
triangle? Give a numerical answer that is the number that would go in front of k*q/L...
Tuesday, October 22, 2013 at 1:29pm
College Physics
An electric motor turns a flywheel through a drive belt that joins a pulley on the motor and a pulley that is rigidly attached to the flywheel as shown in the figure below. The flywheel is a solid
disk with a mass of 55.5 kg and a radius R = 0.625 m. It turns on a frictionless...
Tuesday, October 22, 2013 at 10:27am
college chemistry
exactly 11.2 ml of water at 23°c are added to a hot iron skillet. all of the water is converted into steam at 100°c. the mass of the pan is 1.5 kg and the molar heat capacity of iron is 25.19 j/
(m·°c). what is the temperature change of the skillet
Tuesday, October 22, 2013 at 1:14am
college Physics
Tuesday, October 22, 2013 at 12:31am
college algebra
Thanks, I finally got it.
Monday, October 21, 2013 at 7:25pm
college algebra
now all I need help on is finding the domain of the f(f) that you just posted.
Monday, October 21, 2013 at 7:11pm
college algebra
now all I need help on is finding the domain of the f(f) that you just posted.
Monday, October 21, 2013 at 7:09pm
college algebra
no sir, it asks for f(f).
Monday, October 21, 2013 at 7:03pm
college algebra
f(f) = √(f+2) = √(√(x+2)+2) Sure you didn't want f(g) = √(x^2-2+2) = |x| or g(f) = (x+2)-2 = x ?
Monday, October 21, 2013 at 7:01pm
college algebra
thank you so much steve!! could you help me with this one? Let f(x) = square root (x + 2) and g(x) = x^2 − 2. Determine the domain of the composite function. (Enter your answer using interval
notation.) f compose f _____? and Find the composite function. (f compose f)(x...
Monday, October 21, 2013 at 6:56pm
college algebra
g(f) = 1/(f-3) = 1/(1/(x-2)-3) = (2-x)/(3x-7) So, the domain is all reals except where 3x-7 = 0 or x-2=0 (because f(2) is not defined) or f(x) = 3 (because g(3) is not defined)
Monday, October 21, 2013 at 6:53pm
college algebra
all reals except where -3x+7=0.
Monday, October 21, 2013 at 6:44pm
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
|
{"url":"http://www.jiskha.com/college/?page=8","timestamp":"2014-04-16T19:39:11Z","content_type":null,"content_length":"35692","record_id":"<urn:uuid:12c8aba5-c8c5-4e28-95ba-9eafd9e970ae>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help on solving equation
July 26th 2007, 03:20 PM
Help on solving equation
I had taken a placement test for Math. I need to pass into a college credit math course and i missed out on two math problems that bumped me back to intermediate math. They were something like
for all of (a + b) find the solutions for x. Then the equation. I cant find the problem on any of the algebra sites i looked at nor how to solve it.
Any help is greatly appreciated!
July 26th 2007, 04:01 PM
I had taken a placement test for Math. I need to pass into a college credit math course and i missed out on two math problems that bumped me back to intermediate math. They were something like
for all of (a + b) find the solutions for x. Then the equation. I cant find the problem on any of the algebra sites i looked at nor how to solve it.
Any help is greatly appreciated!
We sort of have to know what the problem was (or at least what it was about) in order to help you.
|
{"url":"http://mathhelpforum.com/algebra/17251-help-solving-equation-print.html","timestamp":"2014-04-17T19:11:41Z","content_type":null,"content_length":"4623","record_id":"<urn:uuid:fd1caff1-90cc-4e78-a315-cb64d3b1c199>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
More cochlear mechanics.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
More cochlear mechanics.
Dear colleagues,
The formula for the basilar-membrane stiffness S presented
last Friday,
S = S_0 * [1 - x/(4d)]^4 (1)
(where x = longitudinal coordinate and d = a few mm), has
met with so much interest that I permit myself to add a
new insight:
It is fairly easy to see why Equation (1) is compatible with
the LG (Liouville-Green) approximation [also called WKB
(Wentzel, Kramers, Brillouin) approximation], and thus with
weak reflected waves.
If the BM (basilar membrane) impedance is stiffness-
dominated (i.e., if the BM mass and friction are negligible)
and the cochlear half channels have equal, x-independent
rectangular cross-sections, then the long-wave complex
liquid-pressure wave equation has the following solution:
p(x,t) = a_p(x) * e^[i*phi(x)] * e^[i*omega*t] . (2)
In Equation (2), a_p(x) is a real amplitude.
In the LG approximation, a_p(x) is found to be as follows:
a_p(x) = a_p(0) * [k(0) / k(x)]^(1/2) . (3)
The local wave number k(x) is:
k(x) = omega * [2*rho/(H*S)]^(1/2) . (4)
Here, rho = 1000 kg / (m^3) is the liquid density, and H is the
"effective" half-channel height (i.e., cross section divided by
BM width).
Equations (1), (3), and (4) yield:
a_p(x) = a_p(0) * [1 - x/(4d)] . (5)
Equation (5) guarantees that the condition for the accuracy
of the LG approximation,
(a_p)'' << (k^2) * a_p , (6)
is perfectly fulfilled at all angular frequencies omega
small enough to be compatible with the long-wave
approximation, k*H << 1 .
[ I plan to write another not-too-long text giving more details:
Reflections for S(x) = S_0 * e^(-x/d) ; possible validity of
Equation (1) above for the whole cochlea; I shall send that
text to those who have asked for the first one, and also to
"newcomers" of course.]
With best wishes,
Reinhart Frosch.
Reinhart Frosch,
Dr. phil. nat.,
r. PSI and ETH Zurich,
Sommerhaldenstr. 5B,
CH-5200 Brugg.
Phone: 0041 56 441 77 72.
Mobile: 0041 79 754 30 32.
E-mail: reinifrosch@xxxxxxxxxx .
|
{"url":"http://www.auditory.org/mhonarc/2006/msg00651.html","timestamp":"2014-04-17T04:05:20Z","content_type":null,"content_length":"5672","record_id":"<urn:uuid:1cf98e89-fea5-4965-9431-d3ccb96cbdff>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cardinality proof
August 1st 2010, 04:37 AM #1
Dec 2009
cardinality proof
Let G be a group and let $A \leq G$ be a subgroup. If $g \in G$, then $A^g \subseteq G$ is defined as
$A^g=\{a^g | a \in A \}$ where $a^g=g^{-1}ag \in G$.
Show that the sets $A$ and $A^g$ have the same cardinality.
Hint: sets $A$ and $A^g$ have the same cardinality if and only if there exists an injective and surjective map $f: A \to A^g$.
Attempt: So, in order to show that the mapping f is injective I have to show that
$f(a)=f(a^g)=f(g^{-1}ag) \iff a = a^g$ (for $a \in A$ and $a^g \in A^g$).
But how can I deduce that when I don't know what the function f is?
You don't. You have to define a specific f and show that that f is both injective and surjective. If you simply assert the existance of such an f, you are assuming what you want to prove.
Since $A^g= {g^{-1}ag |g\in G}$ how about trying $f(a)= g^{-1}ag$?
To show that the mapping is injective:
$f(a)=f(a^g) \iff g^{-1}ag=g^{-1}a^gg$
$\iff agg^{-1} = a^g gg^{-1}$
$\iff a=a^g$
Is this correct?
To show that the mapping is surjective:
For any $a^g \in A^g$ let $a=g^{-1}a^gg$ (since $a^g = g^{-1}ag$).
$= g^{-1}a^gg = a^g$
Is it right?
August 1st 2010, 04:54 AM #2
MHF Contributor
Apr 2005
August 2nd 2010, 02:34 AM #3
Dec 2009
|
{"url":"http://mathhelpforum.com/advanced-algebra/152488-cardinality-proof.html","timestamp":"2014-04-19T13:39:23Z","content_type":null,"content_length":"40526","record_id":"<urn:uuid:15588277-3161-4623-a858-cb7c11519391>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
|
, 1996
"... This paper describes a new algorithm for constructing the set of free bitangents of a collection of n disjoint convex obstacles of constant complexity. The algorithm runs in time O(n log n + k),
where k is the output size, and uses O(n) space. While earlier algorithms achieve the same optimal run ..."
Cited by 86 (9 self)
Add to MetaCart
This paper describes a new algorithm for constructing the set of free bitangents of a collection of n disjoint convex obstacles of constant complexity. The algorithm runs in time O(n log n + k),
where k is the output size, and uses O(n) space. While earlier algorithms achieve the same optimal running time, this is the first optimal algorithm that uses only linear space. The visibility graph
or the visibility complex can be computed in the same time and space. The only complicated data structure used by the algorithm is a splittable queue, which can be implemented easily using red--black
trees. The algorithm is conceptually very simple, and should therefore be easy to implement and quite fast in practice. The algorithm relies on greedy pseudotriangulations, which are subgraphs of the
visibility graph with many nice combinatorial properties. These properties, and thus the correctness of the algorithm, are partially derived from properties of a certain partial order on the faces of
- In Proc. 11th Annu. ACM Sympos. Comput. Geom , 1995
"... We show that the k free bitangents of a collection of n pairwise disjoint convex plane sets can be computed in time O(k+n log n) and O(n) working space. The algorithm uses only one advanced data
structure, namely a splittable queue. We introduce (weakly) greedy pseudo--triangulations, whose combinat ..."
Cited by 31 (2 self)
Add to MetaCart
We show that the k free bitangents of a collection of n pairwise disjoint convex plane sets can be computed in time O(k+n log n) and O(n) working space. The algorithm uses only one advanced data
structure, namely a splittable queue. We introduce (weakly) greedy pseudo--triangulations, whose combinatorial properties are crucial for our method. 1 Introduction Consider a collection O of
pairwise disjoint convex objects in the plane. We are interested in problems in which these objects arise as obstacles, either in connection with visibility problems where they can block the view
from an other geometric object, or in motion planning, where these objects may prevent a moving object from moving along a straight line path. The visibility graph is a central object in such
contexts. For polygonal obstacles the vertices of these polygons are the nodes of the visibility graph, and two nodes are connected by an arc if the corresponding vertices can see each other. [9]
describes the first non-triv...
, 1996
"... We investigate the problem of constructing a shortest path of a point--like robot between two configurations in the euclidean plane cluttered with (intersecting) convex polygonal obstacles. One
common approach is to construct the visibility graph and search within this graph in a total time of O(n ..."
Cited by 16 (0 self)
Add to MetaCart
We investigate the problem of constructing a shortest path of a point--like robot between two configurations in the euclidean plane cluttered with (intersecting) convex polygonal obstacles. One
common approach is to construct the visibility graph and search within this graph in a total time of O(n 2 ). We show that in general it is not necessary to construct the entire visibility graph. In
contrast, we develop two hierarchical motion--planning techniques based on the monotonous bisector tree and the visibility graph, which are shown to be more efficient in scenes of low object density.
We show that in our setting the visibility graph can be incrementally constructed in time O(n 2 log n), where n is the complexity of the scene. A shortest path can then be constructed in time O(l 4
log l+n log l), if a shortest path of length l exists. The dependency on n can be further reduced by the use of a spatial data structure complying a query assumption to O(l 4 log l + l 2 log n). If
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2752252","timestamp":"2014-04-21T08:24:00Z","content_type":null,"content_length":"18671","record_id":"<urn:uuid:1b6cdbda-50ee-4a2f-8bed-ad4b922892de>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Garfield, NJ Algebra 2 Tutor
Find a Garfield, NJ Algebra 2 Tutor
...I find that the most detrimental thing to a mathematics student is lacking a core foundation. In math, everything you learn builds on top of what you learned in previous years, and without
that strong foundation, students can fall behind. When teachers explain something in class they assume that the students have a certain knowledge about math based on what they learned in previous
21 Subjects: including algebra 2, calculus, geometry, statistics
...There are two real issues I find students have: 1. They understand the material, but can't get problems and answers right, they just need some solid technique improvement and some coaching
into good practices. 2. They don't understand the topic and need to understand it better so as to help ...
8 Subjects: including algebra 2, physics, geometry, algebra 1
...Geometry is part of the everyday toolset of a physical scientist, I am very comfortable teaching this subject at any age group. I try my best to give an intuitive feel for the subject without
sacrificing rigor. I have a Bachelor of Science in Physics earned from a world-leading research institu...
17 Subjects: including algebra 2, chemistry, Spanish, calculus
...My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one. My approach to mathematics tutoring is creative and
9 Subjects: including algebra 2, calculus, geometry, algebra 1
...That is why I have chosen to tutor in chemstry. My tutoring methods are flexible based on my tutee. I try to understand my tutee's needs, strengths and weaknesses, and use that information to
devise the right tutoring method.
5 Subjects: including algebra 2, chemistry, biology, algebra 1
Related Garfield, NJ Tutors
Garfield, NJ Accounting Tutors
Garfield, NJ ACT Tutors
Garfield, NJ Algebra Tutors
Garfield, NJ Algebra 2 Tutors
Garfield, NJ Calculus Tutors
Garfield, NJ Geometry Tutors
Garfield, NJ Math Tutors
Garfield, NJ Prealgebra Tutors
Garfield, NJ Precalculus Tutors
Garfield, NJ SAT Tutors
Garfield, NJ SAT Math Tutors
Garfield, NJ Science Tutors
Garfield, NJ Statistics Tutors
Garfield, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Garfield_NJ_algebra_2_tutors.php","timestamp":"2014-04-18T11:25:13Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:3881f5a0-98ce-41c3-a6df-c3b0a5e8f7e5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability a photon will couple with an electron
Hello, I am new to these forums and this is my first post.
Anyways, I am trying to come up with a formula of the probability amplitude of an electron-photon coupling; more specifically, a photon colliding with an electron. I used Q.E.D. by Richard Feynman as
a basis for the formula, using the simplistic Feynman diagrams and the basic formulas he describes.
The formula is based on an electron starting at an arbitrary point, 1, moving through space-time to an arbitrary point, 2, coupling with a photon, then continuing off in another direction to an
arbitrary point 3. Feynman made it sound ridiculously easy to calculate probability amplitudes for such events, however, he used an imaginary spin-0 electron to simplify it. A real electron obviously
takes every path connecting the two points and the probability amplitude that it will go from point 1 to 2 is:
E(1 to 2) = P(1 to 2) + P(1 to 3)*n^2*P(3 to 2) + P(1 to 4)*n^2*P(4 to 5)*n^2*P(5 to 2) + .... for all intermediate points
Where n is a number that makes the calculation agree with experiment and is the amplitude for each stop.
Can someone explain the basics of how I would write a formula that includes all intermediate terms? What would they be? When would they stop? Would it have to be represented by a definite integral
with an upper limit of infinity?
|
{"url":"http://www.physicsforums.com/showthread.php?t=462797","timestamp":"2014-04-19T04:40:54Z","content_type":null,"content_length":"20623","record_id":"<urn:uuid:ad7c5d30-d0fa-4667-ac6e-5fed86682ebd>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|
reject the null hypothesis?
July 29th 2008, 11:52 AM #1
Junior Member
Jun 2008
reject the null hypothesis?
Am I going crazy or is this a trick question? (homework assignment)
"Why don't we reject the null hypothesis when the test statistic from a sample falls in the non-rejection region?"
I thought we did?
Re-read what you wrote. We don't reject it because it is not in the rejection region. However, if we don't reject it and its false, then we've commited a type I error.
July 29th 2008, 12:51 PM #2
|
{"url":"http://mathhelpforum.com/advanced-statistics/44816-reject-null-hypothesis.html","timestamp":"2014-04-25T01:23:11Z","content_type":null,"content_length":"33664","record_id":"<urn:uuid:853f1012-b025-4a71-8b0f-95434236d3a1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From OpenContent Curriculum
Math Levels
Math Curriculum Committee 2009-2010
Five Easy Steps to a Balanced Math Program
A Balanced Math Program
Follow this link to find out more about the "5 Easy Steps to a Balanced Math Program" trainings by Jan Christinson and site plans for implementation
Recommendations for Implementing Math Review, Mental Math, Problem Solving, and a Math Fact Program by CASC 2009
Problem Solving Culturally Relevant Problems
CASC Resource Review
From the MA 07 CASC Review committee
KeyMath Teach & Practice (TAP) Alignment
The Teach and Practice sets are a great way to diagnostically / prescriptively address either a specific skill for an inidividual student, or for organizing small group math instruction around the
BSSD Standards. These are most useful for the typical math concepts and operations in a K-8 math curriculum.
The KeyMath-R screening tool can be used to identify student weaknesses via item analysis. These are tied to the TAP materials as remediation, or can be used to measure progress over time as you move
through any BSSD level.
The materials consist of sequential lessons that start with a teacher-delivered mini lesson for individuals or groups. These usually involve manipulatives, and could be expanded to use any modality.
Students then progress through the work packets until mastery. This can take a couple of days, or a week, depending on the concept and the students. Greg Johnson purchased these sets for each BSSD
school. Rebecca Concilus could explain over VTC how to use them.
These sets are very, very effective for sub-skill acquisition on a mastery model with any student population. Add Specific TAP Alignments to BSSD & Standards Here:
Math Print Resources
This section should be used to list print resources that have been found effective. Please include information about where this book or print item is available - BSSD Media Center, for purchase at
Amazon, etc.! - and about the author and year.
About Teaching Mathematics: A K-8 Resource (Marilyn Burns)
Math Tools by Options Publishing (glossary/reference for problem solving for grades 1-8)
Instructional Resources Online
Here is a compilation of websites that can be used in the classroom by teachers and students to address specific math skills in our curriculum.
Please add to the list to help other teachers looking for math!
General Online Math Resources
1. Times Table Raps Great Math Rap MP3s for students to learn 1-12 times tables.
1. PBS Mathline Numerous lessons for math in many grade levels and content areas.
2. Various Math Worksheets - Math worksheets good for levels 3,4,5
3. AAA Math Arithmetic Lessons - AAA Math features a comprehensive set of interactive arithmetic lessons. Unlimited practice is available on each topic which allows thorough mastery of the concepts.
Grades K-8.
4. Native Ways of Knowing Math Materials UAF has culturally-based integrated lessons/units
5. EdHelper - Standardized test prep materials, dynamic worksheets on all topics, critical thinking skills, story problems, and math puzzles. Full access to site requires a subscription.
6. Fun Brain.com FunBrain is self-described as hosting the Internet's most popular educational games, including those teaching math.
7. PurpleMath - Purplemath's algebra lessons are written with the student in mind. These lessons emphasize the practicalities rather than the technicalities, demonstrating dependable techniques,
warning of likely "trick" questions, and pointing out common mistakes. The lessons are cross-referenced to help you find related material.
8. Shodor Foundation Simulations - The Shodor Foundation is a non-profit research and education organization dedicated to the advancement of science and math education, specifically through the use
of modeling and simulation technologies. Excellent site for online models and simulations.
9. BCPS Math Links - Resources for grades K-5
10. AlgebraHelp - Sounds like it's some help for teaching Algebra
11. [1] -Songs for teaching multiple concepts and areas
12. CoolMath4Kids Games - Math games that will help kids have fun and achieve more at the same time
13. Sowash County SD Math Resources - A list of web-based math games for lower levels
14. Measurement Game - A math game that can help people with measurements
15. Science Vocabulary Hangman - The Jlab offers a stunning array of choices. Vocabulary areas include: 5th Grade and 7th Grade Math, Geometry Basics, Really Big Numbers, Numbers and Set, and an
impressive variety of science vocabulary. You can even send in your own word list and they will create a game for you!
16. Figure This! - Looking for problems of the week? Real world story problems? Three to four math challenges using real world examples are posted here each month. For those who need help solving the
challenge, there are hints and complete solutions, along with related problems.
17. Primary Math Games - Use the Curriculum Guide to find the right game or activity to meet your roughly K-4 grade classroom needs. The guide is broken up into four areas: Math, Language Arts,
Science and Social Studies. Within each subject area, a checklist shows the skills and grade level appropriateness of each activity. Choose the grade level and math concept and have fun!
18. Interactive Mathematics Activities - There are a multitude of resources on this website. The sheer number of interactive activities is amazing. If you are looking for something different, this is
the place. Just a few of the activities: Abacus in Various Number Systems, Equivalent Fractions, Euclid's Game, Binary Color Device, Merlin's Magic Squares (modular arithmetic, boolean and linear
algebra) and there are almost 300 more! Well worth exploring, even if you don't use any in the classroom!
19. Free Math Worksheets - Loads of free math worksheets for grades 2-5.
20. Sheppard Software - Free online math games for grades 2-5.
21. Illuminations Spanning the entire K-12 range and tied to NCTM Math Standards, this great resource includes:
1. i-Maths - These are online, interactive, multimedia math investigations. All i-Maths are built around interactive math applets, and some also include video clips.
2. Internet-Based Lesson Plans: - Many examples of how the Internet can be used to help create effective Standards-based mathematics lessons.
3. Mathlets - Mathlests are math applets you can download and use to explore math and create interactive lessons on your own.
4. Inquiry on Practice- video vignettes, research reports, and articles designed to encourage thinking and discussion about how to improve the teaching and learning.
5. Selected Web Resources- Teachers and students can use the dynamic table tool to selectively search over 1040 carefully reviewed resources.
22. TeacherXpress - The Education Web - All in One Place - For Busy Teachers: Welcome
23. Virginia State Standards of Learning- Science, Math and Technology Practice Tests
Math Resources By Topic
This is a list of concepts and topics in Math. It is only a partial list to start, but you can add both concepts and links here that will help teachers and students searching for materials,
activities and lessons by topic.
Five Easy Steps to a Balanced Math Program
Telling Time:
Imagining Large Numbers
1. How Big is a Billion? Real, Western Alaska Terms
Solving Equations:
Business Math:
Problem Solving:
1. TeacherTube Video Brain Teasers - Get your popsicle sticks ready; and solve the Video Brain Teasers (#1-17).
Teaching Various Other Concepts:
This is a list of resources that are harder to categorize and use, but which teachers have posted as useful in teaching math:
Note:- Most of these should ideally be moved to the concept headings above, even if it means creating a new one!
Contributed Lessons Aligned to BSSD & State Standards
Five Easy Steps to a Balanced Math Program
Lessons Plan Links
The category is for Math related resources only. To create lessons, use the {{subst:Lesson Plan}} or feel free to use your own lesson plan format.
This page will automatically pull in links to any lesson plan that has the category tag at the bottom of page of [[Category:Math]].
Important Note:Any lesson plan or page can below to multiple categories! In other words, a lesson plan can easily belong to the categories of Math, Social Studies, Writing and the "theme category" of
Iditarod by simply adding each these ANYWHERE on a new page:
The example above would automatically link that lesson plan to all four categories.
Here are the categories of the entire wiki:
(previous 200) (
next 200
This category has the following 43 subcategories, out of 58 total.
5 A cont. G cont.
A F I
Pages in category "Math"
The following 157 pages are in this category, out of 304 total.
A A cont. G cont.
B H
C • How Big is a Billion?
F I
(previous 200) (
next 200
|
{"url":"http://wiki.bssd.org/index.php/Math","timestamp":"2014-04-18T15:40:37Z","content_type":null,"content_length":"82197","record_id":"<urn:uuid:f3b18232-90c0-412a-8363-b94f4186663b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Unsigned Arithmetic: Useful but Tricky
May 30, 2012
In general, arithmetic that combines signed and unsigned values yields an unsigned result.
C++, like C, supports two distinctly different kinds of integers: signed and unsigned. Each kind comes in several sizes, but the kinds differ in ways that transcend size. If a signed integer
overflows, the result is undefined. If an unsigned integer overflows, the result is defined modulo 2^w, where w is the number of bits in that particular unsigned integer. By implication, an unsigned
integer is never negative.
In general, arithmetic that combines signed and unsigned values yields an unsigned result. This property is particularly important for comparisons, in which a signed integer is converted to unsigned
for comparison purposes. For example, the comparison n>=0u, where 0u represents an unsigned integer zero, is always true — even if n is signed — because n is converted to unsigned and the result of
that conversion cannot be negative.
This aspect of how unsigned arithmetic behaves has some curious effects on programming technique. For example, suppose I have a vector named x, and I want to access its nth element. I do so by
writing x[n], of course.
Suppose further that n has type int, and that I want to be sure before I try to access x[n] that n is in range. The obvious way of doing so is to write something like this:
if (n < 0 || n >= x.size())
throw an out-of-range exception;
// Now it is safe to access x[n]
There is a subtle improvement possible in this code. The size member of the vector class returns an unsigned value. When we compare n to such a value, and n has a signed type, n will be converted to
unsigned for the comparison. If n is negative, this conversion will wrap around and yield a large positive number. Only if x has more elements than this number will the test succeed. As a result, so
long as x is not a huge negative number, we can omit the comparison n .
Omitting the test this way is slightly shaky, because it is possible that n might be such a large negative number as to wrap around and look like a large positive number. However, if we make n
unsigned, then we can definitely omit the comparison of n to 0, because of course an unsigned variable will never be less than zero.
As so often happens, the code simplification that comes from making n unsigned can complicate other kinds of programs. Suppose, for example, that we want to visit the elements of x in reverse order.
We cannot write
for (auto n = x.size() - 1; n >= 0; --n)
// Do something with x[n]
because the comparison n >= 0 will always be true. The reason is that x.size() yields an unsigned value, so n will also be unsigned.
Instead, it is necessary to write something like this:
auto n = x.size();
while (n) {
// Do something with x[n]
It is no coincidence that this technique is similar to the technique that one would use for iterators. In fact, there's an interesting story behind this similarity. I'll talk about that next week.
|
{"url":"http://www.drdobbs.com/cpp/unsigned-arithmetic-useful-but-tricky/240001198","timestamp":"2014-04-17T16:20:22Z","content_type":null,"content_length":"94425","record_id":"<urn:uuid:a7a6d516-e8ce-4f4e-aca7-49b633c562d9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|