IPCC Is Warned About Using Unscientific Methods
“The results from uniform priors are arbitrary and hence non-scientific. The results may well be nonsense mathematically. You risk criticism from more or less the entire statistics community. If your
paper is cited in the IPCC report, IPCC may end up losing credibility.”
Last week, I posted about a comment Nic Lewis had written at RealClimate. In that comment, Lewis had spent some time discussing a study by Aldrin et al, and noted that its findings were distorted by
the use of a uniform (or “flat”) prior. Although Gavin Schmidt did not respond directly to this point, one commenter pushed the question of the validity of the uniform prior approach a little further:
I thought James Annan had demonstrated that using a uniform prior was bad practise. That would tend to spread the tails of the distribution such that the mean is higher than the other measures of
central tendency. So is it justified in this paper?
This elicited a response from a statistician called Steve Jewson (a glance at whose website suggests he is just the man you’d want to give you advice in this area):
Following on from the comments by Nic Lewis and Graeme,
Yes, using a flat prior for climate sensitivity doesn’t make sense at all. Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated
in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the
mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s
hope the IPCC authors understand all that.
Nic (or anyone else)…would you be able to list all the studies that have used flat priors to estimate climate sensitivity, so that people know to avoid them?
RC regular Ray Ladbury then chimed in with this:
Steve Jewson,
The problem is that the studies that do not use a flat prior wind up biasing the result via the choice of prior. This is a real problem given that some of the actors in the debate are not “honest
brokers”. It has seemed to me that at some level an Empirical Bayes approach might be the best one here–either that or simply use the likelihood and the statistics thereof.
To which Steve Jewson replied:
I agree that no-one should be able to bias the results by their choice of prior: there needs to be a sensible convention for how people choose the prior, and everyone should follow it to put all
studies on the same footing and to make them comparable.
And there is already a very good option for such a convention…it’s Jeffreys’ Prior (JP).
JP is not 100% accepted by everybody in statistics, and it doesn’t have perfect statistical properties (there is no framework that has perfect statistical properties anywhere in statistics) but
it’s by far the most widely accepted option for a conventional prior, it has various nice properties, and basically it’s the only chance we have for resolving this issue (the alternative is that
we spend the next 30 years bickering about priors instead of discussing the real issues). Wrt the nice properties, in particular the results are independent of the choice of coordinates (e.g. you
can use climate sensitivity, or inverse climate sensitivity, and it makes no difference).
Using a flat prior is not the same as using Jeffreys’ prior, and the results are not independent of the choice of coordinates (e.g. a flat prior on climate sensitivity does not give the same
results as a flat prior on inverse climate sensitivity).
Using likelihood alone isn’t a good idea because again the results are dependent on the parameterisation chosen…you could bias your results just by making a coordinate transformation. Plus you
don’t get a probabilistic prediction.
When Nic Lewis referred to objective Bayesian statistics in post 66 above, I’d guess he meant the Jeffreys’ prior.
ps: I’m talking about the *second* version of JP, the 1946 version not the 1939 version, which resolves the famous issue that the 1939 version had related to the mean and variance of the normal distribution.
Nic Lewis was happy to concur and to provide a list of flat-prior studies.
Steve, Ray
First, when I refer to an objective Bayesian method with a noninformative prior, that means using what would be the original Jeffreys’ prior for inferring a joint posterior distribution for all
parameters, appropriately modified if necessary to give as accurate inference (marginal posteriors) for individual parameters as possible. In general, that would mean using Bernardo and Berger
“reference priors”, one targeted at each parameter of interest. In the case of independent scale and location parameters, doing so would equate to the second version of the Jeffreys’ prior that
Steve refers to. In practice, when estimating S and Kv, marginal parameter inference may be little different between using the original Jeffreys’ prior and targeted reference priors.
Secondly, here is a list of climate sensitivity studies that used a uniform prior for their main results: either when estimating climate sensitivity S on its own, or when estimating S jointly with effective ocean vertical diffusivity Kv (or any other parameter, like those two, in which observations are strongly nonlinear) with uniform priors on S and/or Kv.
Forest et al (2002)
Knutti et al (2002)
Frame et al (2005)
Forest et al (2006)
Forster and Gregory (2006) – results as presented in IPCC AR4 WG1 report (the study itself used a 1/S prior, which is the Jeffreys’ prior in this case, where S is the only parameter being estimated)
Hegerl et al (2006)
Forest et al (2008)
Sanso, Forest and Zantedeschi (2008)
Libardoni and Forest (2011) [uniform for Kv, expert for S]
Olson et al (2012)
Aldrin et al (2012)
This includes a large majority of the Bayesian climate studies that I could find.
Some of these papers also used other priors for climate sensitivity as alternatives, typically either informative “expert” priors, priors uniform in the climate feedback parameter (1/S) or, in one case, a uniform-in-TCR prior. Some also used alternative nonuniform priors for Kv or other parameters being estimated.
Steve Jewson again:
Sorry to go on about it, but this prior thing is an important issue. So here are my 7 reasons why climate scientists should *never* use uniform priors for climate sensitivity, and why
the IPCC report shouldn’t cite studies that use them.
It pains me a little to be so critical, especially as I know some of the authors listed in Nic Lewis’s post, but better to say this now, and give the IPCC authors some opportunity to think about it,
than after the IPCC report is published.
1) *The results from uniform priors are arbitrary and hence non-scientific*
If the authors that Nic Lewis lists above had chosen different coordinate systems, they would have got different results. For instance, if they had used 1/S, or log S, as their coordinates,
instead of S, the climate sensitivity distributions would change. Scientific results should not depend on the choice of coordinate system.
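This coordinate dependence is easy to see numerically. The following sketch uses invented numbers (not taken from any of the studies listed): it supposes we observe the feedback parameter 1/S with Gaussian error, and compares the posterior median under a prior flat in S with the one under a prior flat in 1/S, for the same likelihood.

```python
import numpy as np

# Invented illustration: observe lam = 1/S with Gaussian error
y_obs, sigma = 0.35, 0.10
S = np.linspace(0.5, 20.0, 20000)   # grid of climate sensitivity values

# The likelihood of the data, as a function of S, is the same either way
like = np.exp(-0.5 * ((y_obs - 1.0 / S) / sigma) ** 2)

def grid_median(density, grid):
    cdf = np.cumsum(density)
    return grid[np.searchsorted(cdf, 0.5 * cdf[-1])]

post_flat_S = like               # uniform prior on S
post_flat_inv = like / S ** 2    # uniform prior on 1/S, mapped to S (Jacobian 1/S^2)

m_flat_S = grid_median(post_flat_S, S)
m_flat_inv = grid_median(post_flat_inv, S)
print(m_flat_S, m_flat_inv)      # the flat-in-S analysis sits noticeably higher
```

Same data, same likelihood, different "noninformative" coordinate choice, different answer.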
2) *If you use a uniform prior for S, someone might accuse you of choosing the prior to give high rates of climate change*
It just so happens that using S gives higher values for climate sensitivity than using 1/S or log S.
3) *The results may well be nonsense mathematically*
When you apply a statistical method to a complex model, you’d want to first check that the method gives sensible results on simple models. But flat priors often give nonsense when applied to
simple models. A good example is if you try and fit a normal distribution to 10 data values using a flat prior for the variance…the final variance estimate you get is higher than anything that
any of the standard methods will give you, and is really just nonsense: it’s extremely biased, and the resulting predictions of the normal are much too wide. If flat priors fail on such a simple
example, we can’t trust them on more complex examples.
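The normal-distribution example above can be reproduced in a few lines. This is a sketch with simulated data (the grid-based posterior is an assumption of the illustration): with a flat prior on the variance and 10 data points, the posterior mean of the variance comes out well above the standard unbiased estimate.

```python
import numpy as np

# Simulated data: 10 draws from N(0, 1), so the true variance is 1
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10)
n = len(x)
ss = np.sum((x - x.mean()) ** 2)

v = np.linspace(1e-3, 50.0, 200000)   # grid over the variance
# Log marginal likelihood of the variance (mean integrated out); a flat
# prior on the variance adds nothing, so this is also the log posterior
logpost = -0.5 * (n - 1) * np.log(v) - ss / (2.0 * v)
post = np.exp(logpost - logpost.max())
post /= post.sum()

flat_prior_mean = float(np.sum(v * post))   # posterior mean of the variance
unbiased = float(ss / (n - 1))              # the standard estimate
print(flat_prior_mean, unbiased)            # the flat-prior answer is much larger
```

Analytically the flat-prior posterior mean here is ss/5 against the unbiased ss/9, i.e. inflated by a factor of 1.8.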
4) *You risk criticism from more or less the entire statistics community*
The problems with flat priors have been well understood by statisticians for decades. I don’t think there is a single statistician in the world who would argue that flat priors are a good way to
represent lack of knowledge, or who would say that they should be used as a convention (except for location parameters…but climate sensitivity isn’t a location parameter).
5) *You risk criticism from scientists in many other disciplines too*
In many other scientific disciplines these issues are well understood, and in many disciplines it would be impossible to publish a paper using a flat prior. (Even worse, pensioners from the UK and mathematicians from the insurance industry may criticize you too.)
6) *If your paper is cited in the IPCC report, IPCC may end up losing credibility*
These are much worse problems than getting the date of melting glaciers wrong. Uniform priors are a fundamentally unjustifiable methodology that gives invalid quantitative results. If these papers are cited in the IPCC, the risk is that critics will (quite rightly) heap criticism on the IPCC for relying on such stuff, and the credibility of IPCC and climate science will suffer as a result.
7) *There is a perfectly good alternative, that solves all these problems*
Harold Jeffreys grappled with the problem of uniform priors in the 1930s, came up with the Jeffreys’ prior (well, I guess he didn’t call it that), and wrote a book about it. It fixes all the
above problems: it gives results which are coordinate independent and so not arbitrary in that sense, it gives sensible results that agree with other methods when applied to simple models, and
it’s used in statistics and many other fields.
In Nic Lewis’s email (number 89 above), Nic describes a further refinement of the Jeffreys’ Prior, known as reference priors. Whether the 1946 version of Jeffreys’ Prior, or a reference prior, is
the better choice, is a good topic for debate (although it’s a pretty technical question). But that debate does muddy the waters of this current discussion a little: the main point is that both
of them are vastly preferable to uniform priors (and they are very similar anyway). If reference priors are too confusing, just use Jeffreys’ 1946 Prior. If you want to use the fanciest
statistical technology, use reference priors.
ps: if you go to your local statistics department, 50% of the statisticians will agree with what I’ve written above. The other 50% will agree that uniform priors are rubbish, but will say that JP
is rubbish too, and that you should give up trying to use any kind of noninformative prior. This second 50% are the subjective Bayesians, who say that probability is just a measure of personal
beliefs. They will tell you to make up your own prior according to your prior beliefs. To my mind this is a non-starter in climate research, and maybe in science in general, since it removes all
objectivity. That’s another debate that climate scientists need to get ready to be having over the next few years.
I wonder how many of the flat prior studies will make it to the final draft of AR5? All of them?
|
{"url":"http://www.thegwpf.org/ipcc-warned-unscientific-methods/","timestamp":"2014-04-20T01:35:58Z","content_type":null,"content_length":"29761","record_id":"<urn:uuid:0b93255d-6371-49e9-bf62-e2fc0bbaaf64>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This material (including images) is copyrighted! See my copyright notice for fair use practices.
When the direct method of trigonometric parallax does not work for a star because it is too far away, an indirect method called the Inverse Square Law of Light Brightness is used. This method uses
the fact that a given star will grow dimmer in a predictable way as the distance between you and the star increases. If you know how much energy the star emits, then you can derive how far away it
must be to appear as dim as it does. Stars become fainter with increasing distance because their energy is spread out over a larger and larger surface.
A star's apparent brightness (its flux) decreases with the square of the distance. The flux is the amount of energy reaching each square centimeter of a detector (e.g., your eye, CCD, piece of the
sphere) every second. Energy from any light source radiates out in a radial direction so concentric spheres (centered on the light source) have the same amount of energy pass through them every
second. As light moves outward it spreads out to pass through each square centimeter of those spheres.
The same total amount of energy must pass through each sphere surface. Since a sphere has a surface area of 4π × (its radius)^2, the flux of energy on sphere #1 = (the flux of energy on sphere #2) ×
[(sphere #2's radius)/(sphere #1's radius)]^2. Notice that the radius for the reference flux (sphere #2) is on the top of the fraction while the radius for the unknown flux (sphere #1) is on the
bottom---this is an inverse square law! As the distance INcreases, the flux DEcreases with the square of the distance. See the math review appendix for help on when to multiply and when to divide the
distance factor.
Put another way: As the flux DEcreases, the star's distance INcreases with the square root of the flux. If you know how much energy pours through the star's surface and you measure how much energy
you detect here on the Earth, then you can derive the star's distance from you.
Inverse Square Law of Light Brightness
• Inverse Square Law: Brightness at distance A = (brightness at distance B) × [(distance B)/(distance A)]^2. Position (B) is the reference position.
• Unknown distance = reference distance × Sqrt[(reference flux)/(measured flux)].
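The two bullet formulas can be sketched directly in code. This is a minimal illustration with assumed example numbers (the 1.5 AU distance is not from the text); any consistent units work.

```python
def flux_ratio(distance_a, distance_b):
    """Brightness at distance A relative to brightness at distance B
    for the same light source (inverse square law)."""
    return (distance_b / distance_a) ** 2

def distance_from_flux(reference_distance, reference_flux, measured_flux):
    """Unknown distance from the fluxes of two identical sources."""
    return reference_distance * (reference_flux / measured_flux) ** 0.5

# A star seen from 10 pc is 4x brighter than the same star seen from 20 pc
print(flux_ratio(10, 20))              # 4.0

# Sunlight at an assumed 1.5 AU, scaling Earth's 1380 W/m^2: about 613 W/m^2
print(1380 * flux_ratio(1.5, 1.0))

# An identical star appearing 100x dimmer than one at 10 pc is 10x farther
print(distance_from_flux(10, 100, 1))  # 100.0
```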
1. Two identical stars have different apparent brightnesses (fluxes). One star is 10 parsecs away from you and the other is 30 parsecs away from you. Which star is brighter and by how many times?
2. Two identical stars have different fluxes. One star is 5 parsecs away from you and appears 81 times brighter than the other star. How far away is the dimmer star?
3. The Earth receives about 1380 Watts/meter^2 of energy from the Sun. How much energy does Saturn receive from the Sun (Saturn-Sun distance = 9.5 A.U.)? (A Watt is a unit for the amount of energy
generated or received every second.)
last updated: 23 May 2001
Is this page a copy of Strobel's Astronomy Notes?
Author of original content: Nick Strobel
|
{"url":"http://www.astronomynotes.com/starprop/s3.htm","timestamp":"2014-04-19T04:33:16Z","content_type":null,"content_length":"5862","record_id":"<urn:uuid:2668635e-52c8-487c-9ea3-47d34b346283>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trigonometry Proofs
Re: Trigonometry Proofs
Hi Fla$h;
Problem #3 is not an identity.
It should read:
Now here is the proof for #3:
Now cross multiply:
Now substitute cos²(A) = 1- sin²(A) and clean up:
Last edited by bobbym (2009-08-22 14:12:07)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=117484","timestamp":"2014-04-17T03:59:44Z","content_type":null,"content_length":"39926","record_id":"<urn:uuid:d08926e2-9a20-4885-b482-d8d97729c534>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Deformations of hypersurfaces
Suppose I have a smooth hypersurface $X$ in $\mathbb{P}^n$ which is invariant under a (say finite) group $G$ of projective transformations. What can be said about the action of $G$ on the deformation
space $H^1(X,T_X)$? I could imagine that many examples (especially with $dim(X)=1$ or $dim(X)=2$) have been worked out, if this is of any interest at all.
ag.algebraic-geometry deformation-theory
2 Answers
Let's assume that we are working over $\mathbb{C}$.
First of all, hypersurfaces in $\mathbb{P}^n$ are unobstructed, so their first-order deformations always correspond to small deformations (deformations over a disk).
As a general fact, when you consider a smooth variety $X$ with a finite group $G$ acting $holomorphically$ on it, the invariant subspace $H^1(X, T_X)^G$ parametrizes those first-order
deformations that preserve the holomorphic $G$-action. This essentially comes from the fact that, being the action of $G$ holomorphic, if you take $\sigma \in G$, then $\sigma_*$ commutes
with $\bar{\partial}$ and the Green operator $\boldsymbol{G}$, so if $\varphi(t)$ solves the Kuranishi equation
$\varphi(t)=t + \frac{1}{2}\bar{\partial}^* \boldsymbol{G}[\varphi(t), \varphi(t)]$
for $t$, then $\sigma_*\varphi(t)$ solves the Kuranishi equation for $\sigma_*t$, and $\sigma_{*} \varphi(t) = \varphi(\sigma_*t)$.
Example Let us consider a quintic Fermat surface $X \subset \mathbb{P}^3$ of equation $x^5+y^5+z^5+w^5=0$.
It admits a free action of the cyclic group $\mathbb{Z}_5$ given as follows: if $\xi$ is a primitive $5$-th root of unity, then
$\xi \cdot (x,y,z,w)=(x, \xi y, \xi^2 z, \xi^3 w)$.
The quotient $Y := X/\mathbb{Z}_5$ is a Godeaux surface (i.e. a surface of general type with $p_g=q=0, K^2=1$ ) with fundamental group $\mathbb{Z}_5$. M. Reid proved that, conversely, every
Godeaux surface with fundamental group $\mathbb{Z}_5$ arises in this way and that, moreover, the corresponding moduli space is generically smooth of dimension $8$. Then in this case we have
$\dim H^1(X, T_X)=40$
$\dim H^1(X, T_X)^G=H^1(Y, T_Y)=8$,
since the number of moduli of quintics keeping the free $G$-action equals the number of moduli of the Godeaux surface $Y$ (well, Horikawa showed that the deformations of quintic surfaces
are complicated enough, anyway $40$ is the right number).
Actually, one can say more and check that for every irreducible character $\chi$ of $G$ one has
$\dim H^1(X, T_X)^{\chi} = 8$,
but I do not know any easy interpretation of these eigenspaces in terms of the deformations of the quintic.
That is an interesting example, as it both shows that the action may be nontrivial, and gives an interpretation of the fixed part. What would you say is the best way to 'check' the
decomposition of $H^1(X,T_X)$ as a $G$-representation? In the given case, I would try and write that as the quotient of $H^0(X,\mathcal{O}(5))$ by $H^0(X,T_{\mathbb{P}^3})$, but this does
not seem to depend on the given equation (except for its degree), and I wouldn't know the action on the latter. – bellini Aug 18 '10 at 19:26
Once you know that the invariant part is $8$-dimensional, the fact that also all the other eigenspaces are $8$-dimensional comes almost immediately by symmetry considerations. I never
tried the computation you are suggesting, anyway probably the action on the latter group should be not too difficult to understand: write a basis for $H^0(P^3, T_{P^3})$ by using Euler
sequence and restrict it to $X$ (here you use the particular form of the equation). – Francesco Polizzi Aug 18 '10 at 19:59
The contribution of the eigenspaces, not just for a hypersurface but in general when the variety you're working with is (say) smooth, is that they can be used to define so-called
"natural" deformations to which the group action doesn't extend, at least when $G$ is abelian. See Pardini's paper Abelian Covers (Crelle early nineties) for the definition of natural
deformation. – Barbara Aug 20 '10 at 13:17
Suppose that the action of $G$ on the smooth hypersurface $X$ of degree $d$ in $\mathbb P^n$ comes from a linear action of $G$ on $k^{n+1}$, as in Francesco's example. Assume also that $n \geq 3$, and $d \geq 2$. From the conormal sequence $$ 0 \longrightarrow \mathrm T_X \longrightarrow \mathrm T_{\mathbb P^n}{\mid}_X \longrightarrow \mathcal O_X(d) \longrightarrow 0 $$ we get
an exact sequence $$ \mathrm H^0(X, \mathrm T_{\mathbb P^n}{\mid}_X)\longrightarrow \mathrm H^0(X, \mathcal O_X(d)) \longrightarrow \mathrm H^1(X, \mathrm T_X)\ . $$ One can show that the
homomorphism $\mathrm H^0(X, \mathcal O_X(d)) \longrightarrow \mathrm H^1(X, \mathrm T_X)$ is surjective, except in the single case $n = 3$, $d = 4$, where there is a 1-dimensional cokernel.
Let us exclude this particular case (which can also be treated).
For each $i \geq 0$, let $V_i$ be the space of forms of degree $i$ in $n+1$ variables; there is a natural action of $G$ on $V_i$. In representation theoretic terms, $V_i = \mathop{\rm Sym}^i
(k^{n+1})^\vee$. Let $f \in V_d$ an equation for $X$ and $L$ the substspace generated by $f$. From the Euler sequence $$ 0 \longrightarrow \mathcal O_X \longrightarrow \mathcal O_X(1)^{n+1} \
longrightarrow \mathrm T_{\mathbb P^n}{\mid}_X \longrightarrow 0 $$ we get a surjection $\mathrm H^0(X, \mathcal O_X(1))^{n+1} \to \mathrm H^0(X, \mathrm T_{\mathbb P^n}{\mid}_X)$; thus $\
up vote mathrm H^1(X, T_X)$ can be interpreted as the cokernel of a map $\phi \colon \mathrm H^0(X, \mathcal O_X(1))^{n+1} \to \mathrm H^0(X, \mathcal O_X(d))$. We have $\mathrm H^0(X, \mathcal O_X
3 down (1)) = V_1$ and $\mathrm H^0(X, \mathcal O_X(d)) = V_d/L$; furthermore, by unwinding the definitions one can show that the map $\phi \colon V_1^{n+1} \to V_d/L$ sends $(\ell_0, \dots, \ell_n)
vote $ into the class of $\ell_0f_{x_0} + \cdots + \ell_n f_{x_n}$.
So, one can describe $\mathrm H^1(X, \mathrm T_X)$ as the quotient of $V_d$ modulo the subspace generated by $f$ and by the classes of the form $\ell_0f_{x_0} + \cdots + \ell_n f_{x_n}$,
where the $\ell_i$ are homogeneous of degree 1. If the characteristic of $k$ does not divide $d$, then from Euler's formula we see that $f$ is of the form $\ell_0 f_{x_0} + \cdots + \ell_n f_{x_n}$, so we don't need to add it.
This gives a description of the action of $G$ on $\mathrm H^1(X, \mathrm T_X)$, which allows one to compute it, at least in simple cases (the calculations could become unwieldy, particularly in the
non-abelian case).
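As a small numerical sanity check of this description (a sketch, assuming the Fermat quintic $f = x^5+y^5+z^5+w^5$ in $\mathbb P^3$ from the first answer, so that each $f_{x_i} = 5x_i^4$): counting monomials recovers the $\dim H^1(X, T_X)=40$ quoted there.

```python
from itertools import combinations_with_replacement

# Degree-5 monomials in the 4 variables x_0..x_3, as multisets of indices
V5 = list(combinations_with_replacement(range(4), 5))

# For the Fermat quintic, f_{x_i} = 5 x_i^4, so the span of the elements
# l_0 f_{x_0} + ... + l_3 f_{x_3} (with l_i linear) is the span of the
# degree-5 monomials divisible by some x_i^4
jac = [m for m in V5 if any(m.count(i) >= 4 for i in range(4))]

dim_V5 = len(V5)
dim_jac = len(jac)
print(dim_V5, dim_jac, dim_V5 - dim_jac)   # 56 16 40
```

(In characteristic 0 the equation $f$ itself already lies in this span by Euler's formula, consistent with the remark above.)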
Thanks, Angelo. Now I see where the equation $f$ comes in. I agree that it looks uninviting, but if no one else comes up with a better idea, this is the way to go. I would guess that
calculations like this have been done 150 years ago in invariant theory. – bellini Aug 21 '10 at 8:06
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry deformation-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/35985/deformations-of-hypersurfaces/36257","timestamp":"2014-04-20T06:42:00Z","content_type":null,"content_length":"63671","record_id":"<urn:uuid:9653685b-a33a-4775-8dc4-d8d26b2d1999>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: RE: RE: RE: Simple panel data mean plot
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
st: RE: RE: RE: RE: Simple panel data mean plot
From "Martin Weiss" <martin.weiss1@gmx.de>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: RE: RE: RE: Simple panel data mean plot
Date Thu, 24 Jun 2010 13:48:29 +0200
So my solution gives you a -graph bar-, which is not exactly what you wanted
- and it cannot be -recast()- to a different -graph- type. Still: Use the
-preserve-/-restore- route...
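Putting Nick's -collapse-first advice and the -preserve-/-restore- suggestion together, a sketch (untested, reusing the nlswork example dataset from the message below) might look like:

```stata
use http://www.stata-press.com/data/r11/nlswork.dta, clear
preserve
collapse (mean) age, by(year)
twoway line age year
restore
```

-collapse- replaces the data in memory with the yearly means, -twoway line- draws the connected line plot, and -restore- brings the original dataset back.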
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
Sent: Donnerstag, 24. Juni 2010 13:41
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: RE: RE: Simple panel data mean plot
Of course you can:
use http://www.stata-press.com/data/r11/nlswork.dta, clear
graph bar (mean) age, over(year)
You can also -preserve-/-restore- your way around the destruction of your data.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Yann DE MEY
Sent: Donnerstag, 24. Juni 2010 13:25
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: RE: Simple panel data mean plot
Thank you for this working solution. However, a method that does not
clear/replace my current dataset would be more practical. I really feel like
this is a simple 'problem' and hope there is another easy command?
-----Original Message-----
From: n.j.cox@durham.ac.uk [mailto:owner-statalist@hsphsun2.harvard.edu] On
Behalf Of Nick Cox
Sent: donderdag 24 juni 2010 12:07
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: Simple panel data mean plot
-collapse- first; then it's -twoway line-.
Yann DE MEY
I have a very simple problem: I have a panel dataset and would like to
easily get an overview graph that plots the mean of all observations as
a function of year and connects them with a line. Basically I want to
plot the results from:
table year, contents (mean VARIABLE)
The 'mband' command is the closest I found, but this command uses the
median and not the mean.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-06/msg01341.html","timestamp":"2014-04-18T08:20:51Z","content_type":null,"content_length":"11153","record_id":"<urn:uuid:db186a22-4613-4e44-897f-abcd66a52fdb>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can anyone help me figure this problem out pretty please? Find an equation of the tangent line to the curve xe^(y)+ye^(x)=1 at the point (0,1) The equation is rewritten in the post with the equation
converter helper = )
\[xe ^{y}+ye ^{x}=1\] at (0,1)
any idea how to do this one? = )
Bahhhhh gimme few minutes XD eating some food.
kk take your time = )
\[\huge xe^y+ye^x=1\]Were you able to take a derivative of this function? Or are we having some trouble on that part? :)
Or confused about the directions? heh
I was a little confused, since my teacher added implicit differentiation to our course last semester I always get confused. This is what I got when I took the derivative but I'm not sure if its
correct: \[ye ^{x}=1-xe ^{y}\] \[y=\frac{ 1-xe ^{y} }{ e ^{x} }\] \[y'=\frac{ e ^{x}(y'xe ^{y})-(1-xe ^{y})(e ^{x}) }{ (e ^{x})^{2} }\]
ew ew ew don't try to turn it into an explicit function in \(x\). It's actually impossible in this case. So just leave it alone, and take the derivative from where it is at the start D:
hahaha ok so show me = )
Implicit isn't too bad, just remember that whenever you take the derivative of a `y` term, a y' will pop out. Ok ok ok :)
so is it everytime the derivative of y will be y*y' ?
Every time you take the derivative of y, you'll apply the power rule just like you would for x. Giving you 1 (not y), and then you attach a y' to it. So the derivative of y will be 1*y'.
\[\huge \color{salmon}{xe^y}+ye^x=1\]We'll take the derivative of the pink term first. Applying the `Product Rule` will give us,\[\large \color{salmon}{(x)'e^y+x(e^y)'}\]Which simplifies to,\[\
large \color{salmon}{e^y+x(e^y)y'}\]
Understand how we did the first term? :)
ok that's right I remember now = )
that was for above hold on let me read what u posted lol
Hehe yah I understand :3
On the second part, we had to apply the chain rule. e^y gave us e^y. But then we also have to multiply by the derivative of the exponent.
ok so the next part would be (and altogether): \[(e ^{y}+xy'e ^{y})+(y'e ^{x}+ye ^{x})\]
=1 oops lol
Yes good c: `Except` we took the derivative of `both sides`, so what should the right side produce?
\[\huge e^y+xe ^y\color{orangered}{y'}+e^x\color{orangered}{y'}+ye^x=0\]Our goal from here is to solve for y'
Yes, good 0* blah that disappeared somehow lol
haha its ok i was trying to figure out what to do next lol i was going to say plug 0 in for x but solve for y' that makes sense lol
Understand how to do that? It's not too bad, it just requires a bit of nasty algebra :)
yeah give me a sec i love algebra = )
haha nice XD
Oh my goodness betty... -_- are you using the equation tool? zzzzzzz lol
Ok I do love algebra but I could be wrong with this one but I think I might be right at the same time lol \[xy'e ^{y}+y'e ^{x}=-e ^{y}-e ^{x}y\] \[y'(xe ^{y}+e ^{x})=-e ^{y}-e ^{x}y\] \[y'=\frac{
-e ^{y}-e ^{x}y }{ xe ^{y}+e ^{x} }\]
haha yes i was
looks good c:
ok so now I plug 0 in for x?
what is that lol jk
This isn't necessarily what the function looks like. It's just to give us an idea of what they're asking.
They want us to form an equation for the `straight line` that is tangent to the function at x=0.
Remember the form of a straight line? :)
That's right! Your good ole buddy \(y=mx+b\). Remember him?
oh yeah ok i wasn't sure what u were asking lol
Our slope \(m\) is going to be equal to the slope of the tangent line at x=0. Meaning \(y'(0)=m\).
I was actually helping a person out today a lot with that form and the slope form with two points
Oh cool XD
I guess I should have written it like this,\[y'(0,1)=m\]We're going to plug in the given x and y value to get our m.
oh ok i was wondering what i would do with the y-values
i got \[\frac{ -e-1 }{ 1 }\]
Yep looks good.
\[\large y=mx+b \qquad \rightarrow \qquad y=-(e+1)x+b\] And now we only need to find the y-intercept `b`. To do so, we'll again use the point that was given (0,1).
kk let me work it out = )
ok so this is what i got: \[y=-(e+1)x+1\] Now would you multiply it out like this: \[y= -ex-x+1\rightarrow -x(e+1)+1\]
No don't multiply it out c: looks good! Hopefully we didn't make any silly mistakes in there. Looks correct though :D
and then thats it? = )
Yes, we found an equation for the line tangent to our curve at x=0, y=1.
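As a quick sanity check on the algebra above, here is a small numeric sketch. It assumes the original curve was \(xe^y + ye^x = 1\) (reconstructed from the derivative at the top of the thread; the thread itself never restates it), and compares the thread's closed-form \(y'\) with a finite-difference estimate of \(-F_x/F_y\) at \((0,1)\):

```python
import math

# Assumed curve, reconstructed from the derivative worked out in the thread:
# F(x, y) = x*e^y + y*e^x - 1 = 0, which passes through (0, 1).
def F(x, y):
    return x * math.exp(y) + y * math.exp(x) - 1.0

# Implicit differentiation gives y' = -F_x / F_y; estimate the partial
# derivatives numerically with central differences at (0, 1).
h = 1e-6
x0, y0 = 0.0, 1.0
Fx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)
Fy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)
slope_numeric = -Fx / Fy

# Closed form from the thread: y' = -(e^y + y*e^x) / (x*e^y + e^x)
slope_symbolic = -(math.exp(y0) + y0 * math.exp(x0)) / (x0 * math.exp(y0) + math.exp(x0))

print(slope_numeric, slope_symbolic)   # both come out to -(e + 1) ≈ -3.718
```

Both values agree with the slope \(m = -(e+1)\) used in the tangent line \(y = -(e+1)x + 1\).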
awesome = ) ty
I got a couple others = ) stay posted lol
Sportscience In Brief Oct-Dec 2001
Clinical vs Statistical Significance
Will G Hopkins, Physiology and Physical Education, University of Otago, Dunedin 9001, New Zealand. Email. Sportscience 5(3), sportsci.org/jour/0103/inbrief.htm#clinical, 2001 (630 words)
You have spent many months and many thousands of dollars studying an effect. You have analyzed the data in a new manner that takes into account clinical or practical significance. Here is the
outcome of the analysis for the average person in the population you studied: an 80% chance the effect is clinically beneficial, a 15% chance that it has only a clinically trivial effect, and a 5%
chance that it is clinically harmful. Should you publish the study? I think so. The effect has a good chance of helping people. Indeed, it has 16 times more chance of helping than of harming. If
you think that the 80% chance of helping is too low or that the 5% risk of harming is too high (it will depend on the nature of the help and harm), you could get more data before you publish. But if
there's no more money or time for the project, publish what you've got. Other researchers can do more work and meta-analyze all the data to increase the disparity between the likelihoods of help and harm.
Will the editor of a journal accept your data for publication? To make that decision, the editor will send your article to one or more so-called peer reviewers, who are usually other researchers
active in your area. Most reviewers base their decisions on statistical significance, which they know has something to do with the effect being real. Statistical significance is defined by a
probability or p value. The smaller the p value, the less likely the effect is just a fluke. When the p value is less than 0.05, you can call the result statistically significant. Your article is
much more likely to be accepted when p=0.04 than when p=0.06.
So what is the p value for the above data? Incredibly, it's 0.20. Check for yourself on the spreadsheet for confidence limits, which I have recently updated to include likelihoods of clinically
important and trivial effects for normally distributed outcome statistics. To work out these likelihoods, you need to include the smallest clinically important positive and negative value of the
effect you have been studying. In this example I chose ±1.0 units. I made the observed value of the effect 3.0 units–obviously clinically important as an observed value, but at issue is the
likelihood that the true value (the average value in the population) is clinically important. You will also have to include a number for degrees of freedom; I chose 38 (as in, for example, a
randomized controlled trial with 20+20 subjects), but the estimates of likelihood are insensitive to all but really small degrees of freedom. Finally, of course, you will need the p value, here
0.20. You can get even more excitingly non-significant findings with smaller p values. For example, changing p to 0.10 makes the likelihoods 87%, 12% and 2% for help, triviality, and harm
respectively. Yet even these data would be rejected by most reviewers and editors, because p>0.05.
Something is clearly wrong somewhere. It's not the spreadsheet; it's the requirement for p<0.05. Statistical significance does not do justice to some clinically useful effects. We should be
reporting probabilities of clinical significance, not the probability that defines statistical significance. Reviewers and editors would then make better decisions. We still need to report
precision of estimation using likely (confidence) limits for the true value of the effect, but 95% limits give an impression of too much uncertainty for some clinically useful effects. Even 90%
might be too conservative in this respect, but there is something appealing about limits that define the true value correctly 9 times out of 10.
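For readers who want to reproduce the numbers without the spreadsheet, here is a minimal sketch of the calculation. It uses a normal approximation in place of the t distribution with 38 degrees of freedom (the likelihoods are insensitive to all but really small degrees of freedom, as noted above), so the harm estimate comes out nearer 4% than 5%:

```python
from statistics import NormalDist

# Sketch of the likelihood calculation, under a normal approximation.
observed = 3.0      # observed effect, in the same units as the outcome
smallest = 1.0      # smallest clinically important value (±1.0 units)
p_value = 0.20      # two-sided p value for the observed effect

nd = NormalDist()
se = observed / nd.inv_cdf(1 - p_value / 2)   # back out the standard error

benefit = 1 - nd.cdf((smallest - observed) / se)   # P(true effect > +1)
harm    = nd.cdf((-smallest - observed) / se)      # P(true effect < -1)
trivial = 1 - benefit - harm

print(f"benefit {benefit:.0%}, trivial {trivial:.0%}, harm {harm:.0%}")
# close to the 80% / 15% / 5% split quoted in the opening paragraph
```

Changing `p_value` to 0.10 shifts the split toward the 87% / 12% / 2% figures given above.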
Reviewer's comment
Qualitative vs Quantitative Research Designs
Will G Hopkins, Physiology and Physical Education, University of Otago, Dunedin 9001, New Zealand. Email. Sportscience 5(3), sportsci.org/jour/0103/inbrief.htm#qual, 2001 (418 words)
This year I gave a series of talks in several places on exercise and sport research. I used simple PowerPoint slides to act as a stimulus for informal discussion. Most of the material is already at
this site in one form or another, but I sometimes added new stuff that might be useful for people giving or taking courses in our discipline. To download the slides for the talk I gave on research
design, click on this link. Other talks will follow in future issues of Sportscience.
Most of these slides represent a summary of an article on quantitative research published here last year, but I have now included an overview of qualitative research. My neo-positivistic perspective
will outrage radical post-modernists, but it's probably a fair representation of the world that the moderates inhabit.
I used to be critical of my story-teller colleagues, until I realized that qualitative research in its purest form is the science of single case studies, rather like the quest for truth in a court
case. You should employ a qualitative researcher anytime you want an answer to a question of the form what's happened here. For example: why is our team underperforming, why can't we swim as well
as the Australians, how should we reorganize our sports institute, and what can we learn from attitudes to sport in the 1930s? Qualitative researchers also engage in action research: an intervention
to change the world at the single-case level. A suitably qualified qualitative researcher might be able to make your team perform better.
On the other hand, a quantitative researcher has the skills to find out what's happening generally. For example, what’s the effect of strength training on rowing economy, what predicts individual
responses to the effect of exercise on blood lipids, what are the main causes of acute and chronic injuries in triathletes, and why do kids choose to play particular sports? Quantitative researchers
indulge in observational (descriptive) studies to quantify associations between variables, but they sort out cause and effect with experimental studies (interventions).
Qualitative researchers usually gather data by observing and interviewing, whereas quantitative researchers usually test and measure. But I don't think these methodologies should define the two
paradigms. What matters is the scope of your inferences: a conclusion about a single case is qualitative research; a generalization from two or more cases is quantitative.
Reviewer's comment
A Ban on Caffeine?
Will G Hopkins, Physiology and Physical Education, University of Otago, Dunedin 9001, New Zealand. Email. Sportscience 5(3), sportsci.org/jour/0103/inbrief.htm#caffeine, 2001 (372 words)
For most endurance athletes, a couple of 100-mg caffeine pills taken an hour or so before a race will increase power output by a few percent. The International Olympic Committee therefore lists
caffeine as a banned substance, but the caffeine in such everyday foods as coffee, tea, chocolate, and Coca-Cola has made enforcement of the ban impractical. The IOC has therefore somewhat
ambiguously made caffeine also a restricted substance by setting an upper limit on the amount athletes can have in a urine sample. A 70-kg athlete would probably exceed the limit by drinking more
than 5 cups of strong coffee or 5 liters of Coke.
Now there's been a call to enforce the absolute ban (Graham, 2001). The reason? Caffeine use is unethical, because caffeine is not a "traditional nutrient", and because some athletes take caffeine
"for the express purpose of gaining an advantage". The sentiment is well-intentioned, but the reasoning is illogical. Traditional foods contain caffeine, so caffeine is a traditional nutrient.
Athletes train hard, eat well, and buy expensive equipment to gain an advantage, but we aren't about to ban those practices. Sure, there's a sense in which caffeine is a drug, and there's a sense in
which use of any drug is unethical, even when there is no known health risk. But when the drug is part of normal food, an absolute ban would be more than a great inconvenience: in my view it is
unethical to make athletes change customary dietary behaviors for the sake of sport.
It would be appropriate to ban deliberate use of pure caffeine, but it’s unlikely anyone can develop a urine or blood test that would distinguish between the synthetic caffeine in capsules and the
natural caffeine in the normal diet. The caffeine in drinks containing extracts of guarana berries would also be a problem. These drinks probably work better than coffee, which contains something
that partly counteracts the ergogenic effect of caffeine. Guarana drinks are nevertheless natural, if not traditional, fare that should not be banned.
Graham TE (2001). Caffeine and exercise. Sports Medicine 31, 785-807
Editorial: Anti-Spamming Strategies
Will G Hopkins, Physiology and Physical Education, University of Otago, Dunedin 9001, New Zealand. Email. Sportscience 5(3), sportsci.org/jour/0103/inbrief.htm#editorial, 2001 (335 words)
Spam is unsolicited junk email inviting you to part with your money in various annoying and often offensive ways. Spammers now get email addresses off Web pages using automated search engines. To
offer some interim protection to authors of articles at this site, I have now replaced the "@" sign in all email addresses with something that should put the spammers' search engines off the scent.
When you click on an email link, you will have to change the address manually to make it work. At the moment I have only edited the html pages in this manner; doc and pdf files are unchanged. I
have also uploaded a large number of false email addresses to a hidden html page, to give the search engines something to find.
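The address munging described above is a one-line substitution. In this sketch the replacement token " at " is an assumption — the actual string used on this site's pages isn't stated here:

```python
def obfuscate(address: str, token: str = " at ") -> str:
    """Replace '@' so address-harvesting crawlers miss the pattern."""
    return address.replace("@", token)

def deobfuscate(displayed: str, token: str = " at ") -> str:
    """What a human reader does by hand: restore the '@'."""
    return displayed.replace(token, "@")

# Hypothetical address on an RFC-reserved example domain:
shown = obfuscate("someone@example.org")
print(shown)   # → "someone at example.org"
```

The trade-off is exactly the one noted above: readers must edit the address manually to make it work.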
I have visited several anti-spamming sites to see what others are doing. I could find no convincing proactive strategy, but all offer advice for avoiding spam. Here's an edited version of
spamrecycle.com's contribution…
• Never respond to spam.
• Never buy anything advertised in spam.
• Don’t put your address on any website.
• Use a second free email address in newsgroups, and change it frequently.
• Don’t give your email address without knowing how it will be used.
• Use a spam filter or other anti-spam email software.
The last point sounds good, but it may be impractical to keep updating filters. All sites I visited were short on specifics of how to do it. And you still get the spam, even if you don't see it.
You should also make sure that the address list of any mailing list you are on is not publicly accessible. People on the Sportscience mailing list and people mailing to the list are safe in this respect.
Here are a few more anti-spam sites, courtesy of Caroline Burge:
Published Dec 2001
Teaching Chaos
But First,
What is Complex Systems Theory?
Strategies and Rubrics for Teaching Chaos and Complex Systems Theories as Elaborating, Self-Organizing, and Fractionating Evolutionary Systems,
Fichter, Lynn S., Pyle, E.J., and Whitmeyer, S.J., 2010, Journal of Geoscience Education (in press)
Complex systems theory says that complex causes can produce simple effects.
Complexity theory studies how non-linear systems with many agents (individual interacting units, like birds in a flock, sand grains in a ripple, or the individual units of friction along a fault
zone), existing far from equilibrium, interact through positive and negative feedbacks to form emergent, interdependent, dynamic, evolutionary networks that increase in complexity, diversity, order,
and/or interconnectedness with time.
What complexity theory demonstrates is that, by following simple rules, all the agents end up coordinating their behavior so that what emerges is not vernacular chaos (utter disorder and confusion),
but recognizable patterns. Complex systems possess virtually all the properties of chaos systems (fractal, sensitive dependent, follow power laws, etc.), which is why we study deterministic chaos
first, but add their own properties and behaviors.
Complex systems modelling is agent based — create n agents, assign each a few simple rules of behavior, and have them all interact with all simultaneously in a parallel processor. Experimentally,
because the behavior of these systems is unpredictable, and we often want to explore how the behavior changes as various variables are tuned, computer simulation of the behavior with real time
graphical output is a common strategy. It is difficult, maybe impossible, to understand complex systems deductively; their properties are qualitative and emergent.
These simulation properties of complex systems are expressed in the computational viewpoint, stated these ways.
> The computational viewpoint in mathematics is the notion that to know a mathematical truth you must be able to compute it. That is, the outcome of an algorithm can only be known by calculating
the algorithm.
> Stuart Kauffman: The theory of computation is replete with deep theorems. Among the most beautiful are those showing that, in most cases by far, there exists no shorter means of predicting what
an algorithm will do than to simply execute it, observing the succession of actions and states as they unfold. The algorithm itself is its own shortest description. It is, in the jargon of the
field, incompressible.
> Heinz Pagels: The computational viewpoint of the Universe is that the material world and the dynamic systems in it are computers. The brain, the weather, the solar system, even quantum
particles are all computers. They don't look like computers, of course, but what they are computing are the consequences of the laws of nature. According to computational viewpoint, the laws of
nature are algorithms that control the development of the system in time, just like real programs do for computers. For example, the planets, in moving around the sun, are doing analogue
computations of the laws of Newton.
Examples of complex system models are boids, genetic algorithms, cellular automata, oscillating chemical reactions, and self-organized criticality and they have wide application in just about every
discipline: physics, chemistry, biology, sociology, economics, etc.
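To make the computational viewpoint concrete, here is a minimal elementary cellular automaton — one of the model classes listed above. Each cell follows the same three-neighbor lookup rule (Rule 110), yet the pattern that emerges can only be discovered by executing the algorithm:

```python
# Minimal elementary cellular automaton (Rule 110). Each cell's next state
# is looked up from the rule number's bits, indexed by its 3-cell neighborhood.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        out.append((RULE >> neighborhood) & 1)               # rule's bit
    return out

width, steps = 31, 15
cells = [0] * width
cells[width // 2] = 1            # a single live cell in the middle
history = [cells]
for _ in range(steps):
    cells = step(cells)
    history.append(cells)

for row in history:              # print the space-time diagram
    print("".join("#" if c else "." for c in row))
```

There is no shorter description of the resulting triangle-filled pattern than the run itself — Kauffman's incompressibility point above.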
Part of the complexity of understanding complex systems is that different mechanisms often work in concert. Broadly, however, complex system behaviors fall into three categories: elaborating mechanisms, self-organizing mechanisms, and fractionating mechanisms.
Mathematical Apocrypha
Mathematical Apocrypha: Stories and Anecdotes of Mathematicians and the Mathematical
This book contains a collection of tales about mathematicians and the mathematical, derived from the author's experience. It shares with the reader the nature of the mathematical enterprise, and
gives a glimpse of mathematical culture. The book brings legendary names to life, and shares little known stories about names we have heard all our lives. The book is written in a brisk and engaging
manner and it also includes a number of attractive photographs and illustrations.
EC Publications
The Effect of Mean Stress on Damage Predictions for Spectral Loading of Fiberglass Composite Coupons « Energy Security « Downloads
In many analyses of wind turbine blades, the effects of mean stress on the determination of damage in composite blades are either ignored completely or they are characterized inadequately. Mandell,
et al [1] have recently presented an updated Goodman diagram for a fiberglass material that is typical of the materials used in wind turbine blades. Their formulation uses the MSU/DOE Fatigue Data
Base [2] to develop a Goodman diagram with detailed information at thirteen R-values. Using these data, linear, bi-linear and full Goodman diagrams are constructed using mean and "95/95" fits to the
data. The various Goodman diagrams are used to predict the failure stress for coupons tested using the WISPEX spectrum [3]. Three models are used in the analyses. The first is the linear Miner’s rule
commonly used by the wind industry to predict failure (service lifetimes). The second is a nonlinear variation of Miner’s rule which computes a nonlinear Miner’s Sum based upon an exponential
degradation parameter. The third is a generalized nonlinear residual strength model that also relies on an exponential degradation parameter. The results illustrate that Miner’s rule dos not predict
failure very well. When the mean Goodman diagram is used, the nonlinear models predict failures near the mean of the experimental data, and with then 95/95 Goodman diagram is used, the predict the
lower bound of the measured data very well.
Poway Algebra 2 Tutor
Find a Poway Algebra 2 Tutor
...I am best at both teaching and drawing portraits and figures, but am also able to teach the basics of still life and landscapes. I would love to teach any beginning or intermediate student how
to draw. I have been drawing and painting for nine years.
19 Subjects: including algebra 2, chemistry, calculus, writing
...In the classroom, I have extensive and relevant experience with the Algebra 2 curriculum, offering it for 7 years. I have developed many hands-on projects that bring the content of Algebra 2 to
life, including modeling activities for studying quadratic, polynomial, exponential and logarithmic functions. Learning Algebra 2 content inherently requires a solid foundation of Algebra 1.
15 Subjects: including algebra 2, physics, geometry, GRE
...From there, I work with the student to help them understand the basics of the problem by using effective math methods, such as real life mathematical concepts. This is very important, because
math is a series of building blocks. Once the basics are learned, a student can then develop the skills to tackle any math problem.
35 Subjects: including algebra 2, reading, English, writing
...I am confident teaching Japanese as a native speaker and biology as a professional scientist. I have experience teaching Japanese to some university students personally and have mentored a lot
of undergrad and pre-med students for their biological projects. I often have to teach university students some basic math/chemistry, such as molarity calculations.
14 Subjects: including algebra 2, writing, biology, Japanese
...Lita has enormous patience and has developed many ways of explaining the same topic in order to reach students who may comprehend it better one way than another, and to deepen the students'
understanding of the topic. It is clear that Lita enjoys both the subject matter and working with kids and...
10 Subjects: including algebra 2, calculus, geometry, algebra 1
Find the Flux Through a Cube
In (a), the electric field is strictly in the x-direction (parallel to the xy plane), although its magnitude has a y dependency. The important thing, though, is that the direction of the flux lines
will be perpendicular to the front and back faces of the cube (where the front and back are assumed to be the faces parallel to the yz plane), and parallel to all other faces.
Things in (b) are a tad more complicated, with the field having distinct x and y (i and j) components. Write the outward pointing unit vectors for the dA area elements for each cube face and take the
dot product with the field vector (Cartesian dot product). That will tell you what integrals you'll need to perform to sum up the flux.
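The face-by-face bookkeeping can be checked numerically. The field below, E = (3y, 4x, 0), is an invented stand-in with the same flavor as part (b) — components that depend on the other coordinate — not the problem's actual field:

```python
# Sum the flux through each face of the unit cube [0,1]^3 for an
# illustrative field E = (3y, 4x, 0) using a midpoint-rule surface integral.
def E(x, y, z):
    return (3.0 * y, 4.0 * x, 0.0)

def face_flux(fixed_axis, value, normal, n=200):
    """Midpoint-rule integral of E . n over one cube face."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) * h, (j + 0.5) * h
            p = [0.0, 0.0, 0.0]
            p[fixed_axis] = value
            free = [a for a in range(3) if a != fixed_axis]
            p[free[0]], p[free[1]] = u, v
            Ev = E(*p)
            total += sum(Ev[k] * normal[k] for k in range(3)) * h * h
    return total

faces = [
    (0, 1.0, (1, 0, 0)), (0, 0.0, (-1, 0, 0)),   # x = 1 and x = 0 faces
    (1, 1.0, (0, 1, 0)), (1, 0.0, (0, -1, 0)),   # y = 1 and y = 0 faces
    (2, 1.0, (0, 0, 1)), (2, 0.0, (0, 0, -1)),   # z = 1 and z = 0 faces
]
fluxes = [face_flux(*f) for f in faces]
print(sum(fluxes))   # div E = 0 here, so the net flux is (numerically) zero
```

The outward-normal dot products zero out the z faces entirely, and the x and y face pairs cancel — consistent with the divergence theorem for a divergence-free field.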
Absorbed dose of nuclear radioactivity
There is one more step that you may have done, although I don't see it here. The absorbed dosage is measured in terms of unit mass of tissue, rather than unit mass of isotope. You have found the mass
of K-40 present in 1 kg. of tissue and the amount of energy released per decay of one K-40 nucleus. You will also need to find the number of K-40 nuclei present in 1 kg. of tissue.
You would make the standard calculation of the number of moles of K-40 present and then multiply by Avogadro's number to obtain the number of nuclei present. You can then find
(number of K-40 nuclei/1 kg. of tissue) x (J/K-40 nucleus) = (J/1 kg. of tissue) ,
which can then be converted into rads or Grays.
(There is a further factor which estimates the biological effect of various forms of ionizing radiation, which you would multiply the absorbed dose by in order to find the effective dosage in rems or
Sieverts. For gamma-radiation, though, this factor is just 1. See, for instance,
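The chain described above (mass → moles → nuclei → J per kg of tissue) can be sketched in a few lines. All numeric inputs here are placeholders for illustration — substitute the values from your own problem — and the decay-rate step (activity = λN) is made explicit so the result comes out as a dose per unit time:

```python
import math

N_A = 6.022e23            # Avogadro's number, nuclei/mol
M_K40 = 0.040             # molar mass of K-40, kg/mol (about 40 g/mol)

# Placeholder inputs (assumed values, not from the original problem):
m_k40_per_kg = 2.4e-7     # kg of K-40 per kg of tissue
E_per_decay = 1.1e-13     # energy deposited in tissue per decay, J
half_life_s = 1.25e9 * 3.156e7    # K-40 half-life (~1.25 Gyr) in seconds

nuclei_per_kg = (m_k40_per_kg / M_K40) * N_A     # moles -> number of nuclei
lam = math.log(2) / half_life_s                  # decay constant, 1/s
decays_per_s = lam * nuclei_per_kg               # activity per kg of tissue

dose_rate = decays_per_s * E_per_decay           # Gy/s, since 1 Gy = 1 J/kg
dose_per_year = dose_rate * 3.156e7
print(f"{dose_per_year:.2e} Gy per year")        # 1 Gy = 100 rad
```

Multiplying by the radiation weighting factor mentioned above (1 for gamma rays) would convert this absorbed dose into an equivalent dose in sieverts.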
ROTS Opening Weekend Great Collector's Challenge
I'm going to have to tally officially later - but I'm somewhere in the neighborhood of $650.00
Some observations from my trip to a local Walmart and Target this morning...
• Target was doing $1.00 off each item at the register, as the cashier would scan a coupon she had for each SW item bought. So if you didn't get that $1 off coupon in the mail, head on over to Target; I believe they're only doing this on Saturday.
The problem with America is stupidity. I'm not saying there should be capital punishment for stupidity, but why don't we just take the safety labels off of everything and let the problem solve itself?
Post a reply
Here is another example, looks like you can get Alpha to output anything you like, plug this in.
Integrate[ pi r^2 ] with r =f(h)
I have for you I hereby grant you lifetime permission to use Bob's constant whenever you wish, free of charge and without the requirement to acknowledge whose constant it is.
Well, at least I have that going for me.
At least you are on this planet!
Wish I could get off but they won't take me. There is one major difference between me and that fellow, if you hit me on the head three times hard with a hammer you can change my mind.
Opa Locka Algebra Tutor
Find an Opa Locka Algebra Tutor
...I graduated from the Universidad Autonoma de Santo Domingo in the Dominican Republic with a bachelor's degree in Chemical Engineering. In this program we had to pass several levels of mathematics, including Algebra I and II, Calculus I and II, and Differential Equations I and II, plus applications in other subjects that are directly related to mathematics.
14 Subjects: including algebra 2, chemistry, algebra 1, Spanish
Hello. My name is Kendra; currently I have a Master’s degree in Business Administration from Nova Southeastern University and a Bachelor in Accounting from Johnson and Wales University. Also, I
have completed my Accelerated Christian Education (ACE) Supervisory Training and Accelerated Christian Education (ACE) Professional Development Training as a Lead Teacher at a private school.
26 Subjects: including algebra 1, algebra 2, English, statistics
I have a Mechanical Engineering degree from Florida International University. I gained a lot of experience working for Miami Dade College for 4 years; I also worked in a high school for a year, and I have been working at Barry University for the past 3 years. My approach to mathematics is that it requires a lot of patience, because not everyone grasps it the same way.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...My career started with Probability and Statistics for Science and Economics, and Quantitative Methods, teaching students in the MBA program. Discrete Math is an active branch of contemporary
math that is used in business, finance, economics, and the life- and social sciences. My first cycle deg...
24 Subjects: including algebra 2, algebra 1, calculus, geometry
I have always loved being a student, and my love of learning led to a love of teaching. I started tutoring math in high school. Then, while completing my undergraduate studies, I was a nanny for two
elementary-age children for a school year.
44 Subjects: including algebra 1, algebra 2, English, SAT math
|
{"url":"http://www.purplemath.com/opa_locka_algebra_tutors.php","timestamp":"2014-04-16T05:04:23Z","content_type":null,"content_length":"24137","record_id":"<urn:uuid:e4dda402-2cb1-4117-a22a-a2f5cf836dce>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• A community about mathematical analysis. Tag and discover new products. Share your images and discuss your questions with mathematical analysis experts. — “: mathematical analysis”,
• Welcome to . We are a member and/or supporter of the following organizations: Click their icon to link to their site. — “Mathematics, Russian optics, Russian dolls, watches”,
• A magician's card trick has prompted a mathematical re-evaluation of the limits on data compression A magician's card trick has prompted a mathematical re-evaluation of the limits on data
compression. — “Card Trick Leads to New Bound on Data Compression”,
• Ilias Lappas There is no doubt that the contribution of Ancient Greeks in the shaping and development of modern civilization has been subjected to publicity in such an extent that The research
for justifying the birth of mathematical proof was being conducted for the last ten years by the. — “Democracy gives birth to mathematical proof! - Athens”,
• Department of Mathematical Sciences. The department offers B.S. degrees in Mathematical Sciences & Actuarial Mathematics, M.S. degrees in Applied Mathematics & Applied Statistics,. — “Department
of Mathematical Sciences - Welcome”, wpi.edu
• Math modeling. Browse research on mathematical models. Read about math models explaining the shape of the ear, stock performance, musical expression, diseases and more. — “ScienceDaily:
Mathematical Modeling News”,
• Mathematical Sciences (DMS) MPS/DMS: Advice to PIs on Data Management Opportunities for the Mathematical and Physical Sciences in Earth System Modeling. — “nsf.gov - Division of Mathematical
Sciences (DMS) - US”, nsf.gov
• Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to
the development of entirely new mathematical disciplines, such as statistics and game theory. — “Mathematics - Wikipedia, the free encyclopedia”,
• Nonprofit organization to promote and advance the discovery, learning, and application of mathematics. In June 2010 NSERC requested that the mathematical and statistical sciences community
prepare a long range plan for a five- to ten-year horizon. — “Canadian Mathematical Society”, cms.math.ca
• On our research paper for the definition of this concept we have started with the exploration of the mathematical models which can mark out the correlation between the components of The Optimum
Limit of the Profitability. Then we have pursued. — “Mathematical onsets concerning the determination of the”,
• Definition of mathematical in the Online Dictionary. Meaning of mathematical. Pronunciation of mathematical. Translations of mathematical. mathematical synonyms, mathematical antonyms.
Information about mathematical in the free online English. — “mathematical - definition of mathematical by the Free Online”,
• A non-profit organization, run by and for mathematicians, which is dedicated to publishing journals and books at the lowest possible cost and distributing them to the mathematical sciences
community as freely as possible. — “Mathematical Sciences Publishers”,
• By the eighteenth century this view had been turned on its head: not only was an ever increasing number of fields being subjected to mathematical analysis, but the world itself had come to be
understood as fundamentally mathematical in nature. — “mathematics: Definition from ”,
• This Mathematical Month. Mathematical Imagery. More Websites to Explore What's Happening in the Mathematical Sciences, Volume 8. Dana Mackenzie. Lectures on. — “American Mathematical Society”,
• Alexander Soifer, Colorado Mathematical Olympiad Twenty-eighth Colorado Mathematical Olympiad, to be held. Friday, April 22, 2011 with Award Presentation following on Friday,. — “UCCS | Colorado
Mathematical Olympiad”, uccs.edu
• DCU's School of Mathematics has built up a strong record in areas of differential equations, financial mathematics, mathematical physics and numerical analysis. It has also produced a substantial
number of research students. Members of the. — “Mathematical Sciences, DCU”, maths.dcu.ie
• Mathematical. Learn about Mathematical on . Get information and videos on Mathematical including articles on percent, calculation, solve and more!. — “Mathematical | Answerbag”,
• One of the world's largest video sites, serving the best videos, funniest movies and clips. Explanations about this strange mathematical object : the mobius strip. — “Videos tagged with
Mathematical - Metacafe”,
• On Mathematics, Mathematical Physics, Truth and Reality. This was always just a mathematical solution which never explained how matter was connected across the universe. — “On Mathematics,
Mathematical Physics, Truth and Reality”,
• The Applied Mathematical and Computational Sciences program is an interdisciplinary Ph.D. program in the Graduate College of the University of Iowa. — “Applied Mathematical and Computational
Sciences”, amcs.uiowa.edu
• Information about the Mathematical and Statistical Methods in Insurance and Economy group at the university of Copenhagen Welcome to the group for Mathematical and Statistical Methods in
Insurance and Economics at the University of Copenhagen. — “Laboratory of Actuarial Mathematics - University of Copenhagen”, act.ku.dk
• The Art and Craft of Mathematical Problem Solving One of life 's most exhilarating experiences is the aha! moment that comes from pondering a mathematical problem and then seeing the way to.
Publication. — “The Art and Craft of Mathematical Problem Solving | Learning”,
|
{"url":"http://crosswords911.com/mathematical.html","timestamp":"2014-04-16T04:13:22Z","content_type":null,"content_length":"82692","record_id":"<urn:uuid:535bf77e-b4fc-4c59-adad-929409c5483f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Get over 70,000 NP in one day!!!
Do this Daily to get NP.
1. Neopian Bank: Make sure you deposit money in the Neopian bank, and collect your interest daily.
2. Jelly: Go here to collect one free jelly daily to feed to your neopet or to sell.
3. Omelette: Free omelette you can feed to your neopet or sell.
4. King Altador: Free daily items -ONLY- for those who completed the Altador Plot (star mapping). Here is a guide: King Altador Guide
5. Tombola: 1 free play a day. If he's asking for donations, leave and go back later.
6. Fruit Machine: 1 free play a day. You can win NP, BD, and PB's!
7. Lost Tomb: 1 free play a day. You can win NP, items, etc.
8. Shop Of Offers: A rich slorg gives you NP.
9. Pet Lab & PetPet Lab: You must have the maps assembled. You must complete the secret laboratory map before you can assemble the petpet lab map.
Games where you can win LOTS of NP:
1. Lotte Koala Matching Madness: NP Ratio: 1.0 A quick easy matching game. 450NP per game easily (3 games = 1350NP) and if you send a card, you also get 100NP
2. Cocoa Puffs: NP Ratio 0.40 Play expert to unlock bonus level. Do not send the expert score. Play the bonus 3 times for higher scores for around 1200NP per day. Cocoa Puffs Guide
3. Nesquick, Up, Up, and Away: NP Ratio 1.0 Pass the basketball up to the hoop, be careful with the timing. 600 points per play = 1800NP per day.
4. Chef Boyardee Bon Appetit: NP Ratio 1.0. Feed hungry Skeith. You can get easily 500 points per play... 1500NP per day.
5. Chef Boyardee Product Palace: NP Ratio 1.0 Feed hungry customers. Easily get 300 points per play. 900 NP per day.
6. Fruit Flavoured Fallout: NP Ratio 1.0 Catch falling snack boxes, avoid gross foods.
7. A Trix: Twisting Turn: NP Ratio 1.0 It's a slow game but you can earn 3000NP per day. There is no need to finish all levels. Trix Guide
8. Monster House: NP Ratio 1.0 (ignore boohoo message) Can earn up to 3000 NP per day (view video and visit site 1x each)
9. Lucky Charms: Charmed Life: NP Ratio 1.0 Negotiate maze, Collect items, go to exit. 600 points per play = 1800 NP per day. On the later levels, if you die after getting all the points, and replay
it, using all your lives, you can get 1000 NP per play!
10. Race Against Charlotte: NP Ratio 1.0 You can earn up to 2100 NP per day Race Against Charlotte Guide
11. Adver-Video: Watch 5 videos, then spin the wheel for a prize. NP varies, depending on spin. You can win anywhere between 100 NP to 800 NP per spin... for example... if you spin all 100's that's
500NP total and if you manage to spin all 800 (which is pretty hard) you can get 4000NP!
12. Bruno's Backwoods Breakaway: NP Ratio 1.0 Collect the potions, then take them to the cave. Avoid the villagers and the rocks they throw. (Take all potions to the cave at once for a better score.)
13. Hannah and the Ice Caves: NP Ratio 0.02 50,000 points = 1000 NP Get up to 3000 NP per day. You need to beat the "Snow Beast Level" to get enough points for 1000NP. If you beat the entire game you
can play the last level in under 2 minutes for 56,000 points for 1000NP per play.
14. Itchy Invasion: NP Ratio 1.0 This game is pretty easy. It's a point and shoot game. Itchy Invasion Guide
15. Kiko Match II: NP Ratio 2.0 Get 500 points for 1000NP. You can get a total of up to 3000NP per day. Type in ineedmoretime for a bit of added time if needed. Turn animation off so cards flip faster.
16. Sutek's Tomb: NP Ratio 0.20 5000 points = 1000NP You can get up to 3000NP per day. Type in pyramibread to show a match, type in plzsutekcanihavemoretime for more time (pause game, type in all but
the last e, unpause the game, then type in the last e)
17. The Castle of Eliv Thade: NP Ratio 2.0 500 points = 1000NP You can get up to 3000 NP per day. Type in rehaxtint for an extra hint. Go through outer rooms, then to the middle room, then take the
invisible path to the bottom path, go to crypt, you will earn 1000NP (minimum 150 game points + 200 points bonus to enter crypt + 150 send score bonus = 1000NP, for avvie play to 850 points, then
enter crypt + send = 1200 points + avvie. Open in a separate web browser one of these two pages for anagram solvers. They don't always give you the word, so that's when you use the hints.Free
Anagram Solver or Andy's Anagram Solver
18. Turmac Roll: NP Ratio 1.0 1000 points = 1000NP. You can get up to 3000NP per day. Time spent depends on how good the berries you get are. (It's claimed that if you get up to full speed, then
slow to minimum for a bit, it increases the chances of better berries; OR try typing berries at the game mode select screen.) This is a rare game. You can replay for a better score, then send once
you're satisfied with the score.
19. Zurroball: NP Ratio 5.0 200 points = 1000NP Bounce the ball with the mouse and keep it in the air. It must go over the red line to score. Grounders earn 10 points per click.
20. Bouncy Supreme: NP Ratio 0.70 700-900 points = 1900 points per day. Type in bouncebouncebounce for an extra life.
21. Buzz's Honey O Throw: Ratio 1.0 Toss Cheerios to hit Honey and avoid Bear's reach.
22. Destruct-O-Match: NP Ratio 0.30 At the game select screen, type in boohooiwanttheoldgameback and you can play the old version; type in destroyboulders to get rid of all boulders of a random
color. Extreme is faster to play but gives fewer points.
23. Double Chocolate Cookie Crisp: NP Ratio 1.0 Pretty easy game. You should easily get 500+ NP per game for 1500 NP per day
24. Dubloon Disaster II: NP Ratio 1.0 Type in scallywags to create a whirlpool that drags in enemies; to save time you can type in scallywag before getting any coins, then when you need the
whirlpool, type in the s. It can drag you in too, so be careful! Type in blackpawkeet to increase chance of high level dubloons.
25. Eye of the Storm: NP Ratio 0.20 reveal tiles to find the Cyodrake's Gaze.
26. Jolly Jugglers: NP Ratio 5.0 Try to score 200 points to get 1000NP per play (3000NP per day)
27. Limited Too: Mix and Match: NP Ratio 1.0 This is a matching game like Kiko Match but quicker. 360 points per day = a little more then 1000NP per day
28. Peanut Butter Cookie Crisp: Ski Jump: NP Ratio 1.0 You can earn up to 2100NP per day.
29. Snow Ball Fight: NP Ratio 1.25 Watch out for the fairies! This is a fast paced game.
30. Scourge of the Lab Jellies: NP Ratio 0.20
31. Splat-A-Sloth: NP Ratio 1.0 Try to hit the sock puppet sloth, pretty easy, 300 to 600NP per day.
32. Survey Shack: Majority Rules: There are many NP earning links here for taking surveys, etc. Survey Shack: Majority Rules Guide
33. Typing Terror: NP Ratio 0.25 This is a typing game. The yellow robots = 5pts, Red Robots = 20pts, Broken Red Robots = 100pts. Get lots of broken red robots if you want a really good score. If you
want the avvie or hi-score list you need at least 500pts per round.
34. Ultimate Bullseye: NP Ratio 8.0 If you get 125 points, you get 1000 NP. Type in catapult for an extra bonus shot.
35. Lunchables Marketplace: NP Ratio 1.0
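The guide's "NP Ratio" numbers work as a points-to-Neopoints multiplier. As a rough illustration (my own sketch, not an official formula), the examples above are all consistent with NP per play being score times ratio, apparently capped at 1000 NP, with up to three scored plays per day:

```python
# Sketch of the guide's "NP Ratio": NP per play ~ score * ratio,
# apparently capped at 1000 NP, with up to 3 scored plays per day.
def np_per_play(score, ratio, cap=1000):
    return min(round(score * ratio), cap)

def np_per_day(score, ratio, plays=3):
    return plays * np_per_play(score, ratio)

print(np_per_play(5000, 0.20))  # Sutek's Tomb: 5000 points -> 1000 NP
print(np_per_play(200, 5.0))    # Zurroball: 200 points -> 1000 NP
print(np_per_day(600, 1.0))     # 600 points at ratio 1.0 -> 1800 NP per day
```

These match the guide's own examples (5000 points at ratio 0.20 = 1000 NP; 600 points at ratio 1.0 = 1800 NP per day).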
Spinning Wheels:
Scratching Cards:
1. Scratchcard Kiosk: 600 NP per scratchcard. You can buy a scratchcard every 6 hours.
2. Lost Desert Scratchcards: 500 NP per scratchcard. You can buy a scratchcard every 4 hours.
3. Deserted Fairgrounds Scratchcards: 1200 NP per scratchcard. You can buy a scratchcard every 2 hours.
Hidden NP: Search around the image for the hidden NP. It will come up in red (just move your mouse pointer over the image till you find it). You can only do this once per day.
|
{"url":"http://neopetshintsetc.tripod.com/neopoints.html","timestamp":"2014-04-17T10:36:02Z","content_type":null,"content_length":"59004","record_id":"<urn:uuid:fe136050-7be5-4323-9533-3cac2d845ca1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Poincare, Perelman and the Universe
SCIENCE magazine's 2006 Breakthrough of the Year has been justifiably awarded to Grigori Perelman's proof of the Poincare conjecture.
Science is very good at writing about maths. Perelman's achievement capped years of work in isolation, barriers of skepticism, and others claiming credit once they realised he was right. His work
has great implications for theories of the Universe.
Topology is the study of surfaces that can undergo stretching. It applies to our physical world via surface integrals and boundaries. We learned at a very young age that a donut is Genus One,
topologically the same as a teacup. Henri Poincare, the father of modern topology, conjectured in 1904 that a closed 3-dimensional space with a "trivial" fundamental group must be a hypersphere,
the boundary of a ball in 4-dimensional space. Poincare's conjecture defied proof for nearly a century.
Richard Hamilton proposed a solution based upon a "Ricci Flow." In this theory, regions of high Ricci curvature tensor R_{ab} would diffuse out, in a manner analogous to heat flow, until a surface
of constant curvature (a sphere) was achieved. Controlling the singularities that the flow develops in 3 dimensions seemed insurmountable. Perelman met Hamilton while in the US, returned to his
native Russia in 1995, then spent 7 years trying to solve the problem.
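For reference, Hamilton's Ricci flow (a standard equation, not quoted in the original post) evolves the metric g_{ab} in proportion to the Ricci curvature, so highly curved regions smooth out much as temperature differences do under heat flow:

$$\frac{\partial g_{ab}}{\partial t} = -2\,R_{ab}$$

The minus sign is what makes regions of high positive curvature shrink and diffuse, pushing the geometry toward constant curvature.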
In November 2002 Perelman began posting outlines of a proposed proof on the internet. Though some mathematicians realised he was on to something, there was some skepticism. For one thing, bits of the
proof were incomplete. Did people ignore Perelman, say he didn't have a theory, and write unfair things about him? Some may have; but Bruce Kleiner, John Lott, John Morgan and Gang Tian aided in
completing and publishing Perelman's work. By Spring 2006 the proof was complete, and the world realised what had been accomplished.
The Poincare Conjecture applies directly to cosmology. It says that if our 3-dimensional Universe is closed and simply connected, the only shape it can have is the hypersphere, the surface of a
4-dimensional ball. Einstein calculated that if the Universe contained enough density, gravity would curve it into such a sphere. This was the Einstein Static Universe, but he quickly realised
that the same gravity would cause the sphere to collapse. He did not consider that expansion would prevent a collapse, for there was then little evidence of an expanding Universe. To support this
spherical space, Einstein added a "cosmological constant," a purely hypothetical repulsive force.
Alexander Friedmann and Georges Lemaitre independently found solutions to the Einstein equations for an expanding Universe. Working atop Mount Wilson in California, Edwin Hubble and Milton Humason
found that redshift of galaxies was related to distance, indicating that our Universe was expanding. In a 1931 visit to Mount Wilson, Einstein conferred with Hubble and peered through the telescope.
The world's most famous scientist happily conceded that Lemaitre and Hubble were right. He dropped the cosmological constant, calling it his "greatest blunder." In a 1932 paper with Wilhelm de
Sitter, Einstein expressed preference for an asymptotically expanding Universe of "critical" density (Omega) = 1.
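For reference (standard cosmology, not spelled out in the post), the "critical" density comes from the Friedmann equation, and Omega = 1 marks the dividing line between a spatially closed Universe and an open one:

$$H^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2}, \qquad \rho_{\mathrm{crit}} = \frac{3H^2}{8\pi G}, \qquad \Omega = \frac{\rho}{\rho_{\mathrm{crit}}}$$

Here H is the Hubble parameter, a the scale factor, and k the spatial curvature; a Universe at exactly the critical density (Omega = 1) is spatially flat.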
Poincare and Perelman indicate that a sphere is the only possible shape. This shape can only be maintained without collapsing if there is a certain density. That density is not the "critical"
density but the stable density. If the Universe were not at the stable density, then matter would be created via pair production until that density was reached. That is why baryons are exactly
4.507034% of the total.
As you have doubtless heard, Perelman has prematurely retired from mathematics and public life. He has left his job at the Steklov, a mathematical institute in St. Petersburg. Though he is qualified
for the Fields Medal and the Clay Prize, he has refused them. He is said to be tired and disappointed by the lack of ethical standards in mathematics. (This year some unnamed mathematicians published
a paper implying that they completed the proof first.) Perelman sounds quite modest and reasonable when quoted in the New Yorker:
"I can't say I'm outraged. Other people do worse. Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but
they tolerate those who are not honest."
"It is not people who break ethical standards who are regarded as aliens. It is people like me who are isolated."
Grigori Perelman's achievement is immensely important. His behaviour shows that he was motivated by the challenge of solving a problem; not by fame or prizes. It is shameful that petty behaviour of
others has driven a real talent like Perelman out of the field. With the harsh treatment given to new ideas and lone researchers, his actions are completely understandable.
6 Comments:
Kea said...
Happy New Year, Louise. Good to see a post on cool maths! Thanks.
Thanks. Happy Year 007 to you too! I miss your posts on maths, which are much more comprehensive than mine.
The other day a friend of mine who is a physicist told me about this guy, Grigori Perelman and his brilliant achievement. I think that astronomy community has to give the credit Perelman deserves
for that breakthrough.
I would like to know more about this topic because it looks interesting.
|
{"url":"http://riofriospacetime.blogspot.com/2006/12/poincare-perelman-and-universe.html","timestamp":"2014-04-21T01:59:23Z","content_type":null,"content_length":"32713","record_id":"<urn:uuid:7801327a-3a75-4731-a693-03ae214d8fbd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bradd Libby
Fascinating results from an effort by the Gapminder Foundation called ‘The Ignorance Project‘. Gapminder provides a software tool for studying global health statistics and the foundation’s Dr. Hans
Rosling speaks widely on related topics, like demographics and economic development.
Gapminder has conducted a survey in the US, UK and Sweden probing adults’ knowledge of the state of global health.
For example, the survey asked “In the last 30 years the proportion of the World population living in extreme poverty has…” and then gave three options: ‘almost doubled‘, ‘more or less stayed the same
‘, and ‘almost halved‘. (Actually, in the UK the options were ‘increased’ and ‘decreased’ instead of ‘almost doubled’ and ‘almost halved’.)
23% of the people in Sweden knew that global poverty has decreased dramatically. In the UK, 10% of people knew that (12% of UK university graduates). And in the US, only 5% of people knew that.
Results like these were the basis for news articles like: ‘United States Scores Poorly on Global Ignorance Test‘.
But on questions like ‘What is the life expectancy in the world as a whole today?‘, the results were the opposite. In that case, 56% of Americans knew that the answer was about 70 years. 30% of Brits
knew that. And 22% of Swedes knew.
Interestingly, the percentage of university-educated Brits who knew the world average life expectancy was only 20%, lower than the British population as a whole.
With the Olympics starting soon, I thought it would be fun to compare the results to see which of these three countries is actually the best informed and which is the least. Unfortunately, the survey
did not ask all the same questions, or offer the same choice of responses, in all three countries. But five questions were the same, including the two above. Another was, ‘What % of adults in the
world today are literate, i.e. can read and write?‘
OK, first of all, I think if you need the interviewer to define the word ‘literate’ for you, maybe you are not the best person to ask about statistical trends in global demographics. For this
question, the Swedes and Americans were given three options: 40%, 60% or 80%. The Brits were also given the choice of 20%. Twenty-two percent of Americans chose the correct answer (80% literacy). 20%
of Swedes did, and only 8% of Brits did (with only 4% of university-educated Brits knowing the right answer).
Two things to note here: One is that the survey did not talk to every Swede and every Brit – it only asked a little over 1000 of each (including 400 Brits with university educations). So, we do not
know if slightly more Americans actually knew the correct answer than Swedes. With a sample size of about 1000 people, the margin of error is somewhere around 3%. So, it is not fair to give the US
the ‘gold medal’ for this question since, statistically speaking, the Swedes did just as well.
Secondly, the Brits were given 4 options instead of three, so it is understandable that people did not do as well. Nevertheless, a 4% correct response rate from university-educated Brits to me seems
appalling. Even the proverbial chimpanzees throwing darts at a dartboard would select the right answer from four choices 25% of the time.
So, I think it’s fair to compare countries’ responses to Chimpanzees (that is, to how choosing randomly would have performed). For this question, the Chimps would get the Gold medal, the US and
Sweden (who answered about the same) shared a Silver medal, and the UK got the Bronze.
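As a back-of-the-envelope check on the roughly 3% margin of error mentioned above (my own sketch; the survey's exact methodology is not stated here), the worst-case 95% margin of error for a proportion estimated from a simple random sample of n people is z·sqrt(p(1−p)/n), which is largest at p = 0.5:

```python
# Worst-case 95% margin of error for a sampled proportion.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # ~3.1 points for n = 1000
print(round(100 * margin_of_error(400), 1))   # ~4.9 points for the 400 graduates
```

So two countries' results need to differ by several percentage points before the gap means anything, which is why the US and Sweden should be treated as tied on the literacy question.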
For the questions about Literacy and Extreme Poverty, the Chimps also win the gold medal. And the US wins the gold for the question about Life Expectancy. The other two questions were ‘How many
[babies] do UN experts estimate there will be by the year 2100?‘ where 11% of Swedes (and 33% of Chimps) knew the answer was 2 billion, and ‘What % of total world energy generated comes from solar
and wind power?‘ In that case, 56% of Swedes, 46% of Americans, and 33% of Chimps knew the right answer. (Again, the UK was given more options, so the comparison is not entirely fair, but 30% of
Brits and 37% of university-educated Brits knew the right answer, so I said that Brits and Chimps shared the Bronze on this one.)
Additionally, Americans and Swedes were asked 5 questions that were not posed to the Brits.
One asked how many babies are born to each woman, worldwide, on average. 49% of Americans knew the answer was 2.5, but only 29% of Swedes did. When asked to choose which population distribution was
correct from a set of maps, Americans, Swedes and Chimps all answered about the same.
On the last three questions, How many years of formal education women get worldwide, the percentage of children vaccinated against measles, and the worldwide income distribution, the Chimps all win,
with Americans coming in second place and Swedes last.
Of these ten questions (of which the UK only participated in 5 and were generally given more options than the Americans or Swedes) the final tally of results looks like:
It turns out that Chimps answered best on 7 of the 10 questions. I would say that the performance in the US and Sweden was essentially the same. And the UK seems to lag, though to be fair they only
participated in half the questions and, even then, had more options to choose from.
Most surprising to me was that, in questions that were posed to Brits, the general population performed better than university graduates on most of them, which makes me think that maybe we should all
be sending our children to Monkey College instead.
Here is an interesting little recipe I found in an old notebook. Unfortunately, I did not write down where I got it from, or if I just thought it up, or what.
Start with “0″. At each step, if a character is a “0″, replace it with “1″. If “1″, replace it with “10″.
So, the results are:
Step 1. 0
Replace the “0″ with “1″, so we get:
Step 2. 1
If a character is a “1″, replace it with “10″, so we get:
Step 3. 10
If a character is a “0″, replace it with “1″. If “1″, replace it with “10″. So, the “1″ becomes “10″ and the “0″ becomes “1″, to give us:
Step 4. 101
(Do you see what happened there? The “1″ was replaced by “10″ and then the “0″ at the end was replaced with a “1″.) Repeat again: “0″ -> “1″ and “1″ -> “10″. This gives us:
Step 5. 10110
Step 6. 10110101
Step 7. 1011010110110
And so on and so on.
Two interesting things: One is that each step (Step 7) is simply the text from the previous step (Step 6) with the text from the step before that (Step 5) added to the end. So:
Step 8. 101101011011010110101
Also, the ratio of 1′s to 0′s in each step approaches phi, the Golden Ratio.
In Step 5 there are three 1′s and two 0′s. 3/2 = 1.5
In Step 6: the ratio is 5/3 = 1.6667
In Step 7: the ratio is 8/5 = 1.6
In Step 8: the ratio is 13/8 = 1.625
These are all Fibonacci Numbers: 2, 3, 5, 8, 13, and the ratio of one Fibonacci Number to the previous one approaches the Golden Ratio, which is the ratio of Adults to Kids in an extended
Lotka-Volterra model of predator-prey dynamics.
It seems exceedingly strange to me that the ratio of 1′s to 0′s approaches an irrational number even though the recipe only involves swapping each “0″ with “1″ and each “1″ with “10″.
Now I really wish I had written down where I got this recipe from.
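The recipe is easy to check with a few lines of code (a quick sketch of the substitution rule described above):

```python
# Start from "0"; at each step replace every "0" with "1" and every "1" with "10".
def next_step(s):
    return "".join("10" if c == "1" else "1" for c in s)

steps = ["0"]
for _ in range(7):
    steps.append(next_step(steps[-1]))

print(steps[-1])                      # 101101011011010110101
ones, zeros = steps[-1].count("1"), steps[-1].count("0")
print(ones, zeros, ones / zeros)      # 13 8 1.625
```

Each string is the previous one with the one before that appended to the end, which is exactly the Fibonacci recurrence acting on string lengths: the lengths run 1, 1, 2, 3, 5, 8, 13, 21.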
This post from BoingBoing says that the aspect ratio of the human eye's visual field is about 16:10 width-to-height (or 1.6, which may be why people find the Golden Rectangle aesthetically
pleasing). In some previous photo posts of mine, including this picture of an insect and this one of a car, I used a photo size of 550 x 170, an aspect ratio of about 3.24, which is 2 times Phi.
Sometimes, focusing on solving one set of problems causes another set to pop up. Call it ‘firefighting’. By treating the worst-off patients, the slightly better off become neglected. By only taking
care of the most severe issues, we draw time and resources away from issues that are becoming severe. Things tend to get better in the short run, but worse in the long run.
Here is a visual representation of a mathematical model of this issue. Each box (or ‘bathtub’) represents the number of issues in each category: some are Non-issues, some are Mild Issues and some are
Severe Issues.
An issue moves from one category to the next by way of one of the pipes that connect the bathtubs. Each of these pipes contains a mathematical equation that governs how quickly issues pass through it
from one bathtub to the next. In each of the green-colored pipes, the rate that issues move is simply a fraction of the number of issues upstream. So, we say for example that 5% of Mild Issues become
Severe Issues each month and, say, 10% of Severe Issues become Mild Issues.
The only pipeline that works differently from this is the orange one – the rate at which Nonissues become Mild Issues. In that case, let us say that rate is some fraction (again, 10% works) of the
number of Nonissues (just like the other flow rates, so far), but multiplied by the number of Mild Issues squared. So, instead of ’10% x Nonissues’, the equation is ’10% x Nonissues x MildIssues^2′.
Some diseases and health problems work this way. If you are a nonsmoker who lives with a smoker, you might have a 10% chance of becoming a smoker each year, but if you live with two smokers, your
chance goes up to 40% each year and if you live with three smokers, it goes up to 90%. The idea is that mild problems recruit new mild problems. Neglecting that slow oil leak in your car’s engine
might mean a 10% chance of an engine problem this month, but neglecting both the oil leak and the dirty filter might make that chance go up to 40%.
We can choose these percentages so that the number of issues in each category is constant – one Non-issue becomes a Mild Issue for each Mild Issue that becomes a Non-issue. What happens to the number
of Severe Issues as we increase the rate at which Severe Issues are moved into the Mild Issue category?
The graph above shows what happens to the number of Severe Issues when we increase the fraction of Severe Issues that are turned into Mild Issues each month from 10% to 15% (starting midway through
the first time period). In the short run, the number of Severe Issues drops, since we are treating 15% of them rather than 10%. But in the long run, this causes the number of Severe Issues to
increase.
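A minimal simulation reproduces this better-before-worse pattern. The parameter values here are my own assumptions chosen so the stocks start in balance (100 Non-issues, 10 Mild, 5 Severe; 5% of Mild Issues become Severe, 10% of Severe become Mild, 10% of Mild recover, and the Non-issue-to-Mild rate is 0.01% × Nonissues × MildIssues²), not numbers from the original post:

```python
# Three-bathtub sketch: Non-issues -> Mild -> Severe, with treatment
# (Severe -> Mild) and recovery (Mild -> Non-issue). The Non-issue -> Mild
# flow is nonlinear: mild problems recruit new mild problems.
def simulate(treat_rate, t_end=400.0, dt=0.05):
    n, m, s = 100.0, 10.0, 5.0           # Non-issues, Mild, Severe
    a, b, c = 1e-4, 0.10, 0.05           # recruitment, recovery, worsening
    severe_history = []
    for _ in range(int(t_end / dt)):
        new_mild  = a * n * m * m        # Non-issue -> Mild (nonlinear)
        recovered = b * m                # Mild -> Non-issue
        worsened  = c * m                # Mild -> Severe
        treated   = treat_rate * s       # Severe -> Mild
        n += dt * (recovered - new_mild)
        m += dt * (new_mild + treated - recovered - worsened)
        s += dt * (worsened - treated)
        severe_history.append(s)
    return severe_history

baseline = simulate(treat_rate=0.10)     # stays at the initial equilibrium
policy   = simulate(treat_rate=0.15)     # more aggressive acute care
print(round(min(policy), 2), round(policy[-1], 2))
```

With the 10% treatment rate the system sits still; raising it to 15% makes Severe Issues dip at first and then climb well past their starting level, because the extra Mild Issues recruit Non-issues into the Mild pool faster than before.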
What is happening here is that, by moving Severe Issues into the Mild Issue category, we are increasing the rate at which Non-issues become Mild Issues. For example, by treating someone’s acute
respiratory infection, we get them back on the street sooner to infect more people.
I am not saying that we should neglect Severe Issues or not treat them at all. One lesson from this model is that increasing the acute care without following through and also increasing the rate
of Mild Issues becoming Nonissues makes the problem worse in the long run than not increasing acute care at all would have. It would be especially perverse if things played out slowly
enough that the person who made the decision to focus more on treating Severe Issues got all the credit for the short-term improvement and then moved on, or retired, before the next person
stepped into the position to take the blame just as the negative consequences of that decision started to appear.
It is interesting to see this ‘better-before-worse’ behavior in such a simple model. I might spend some time whenever I can trying to find even simpler structures that show similar behavior.
There’s an old parable about blind men trying to figure out what an elephant looks like. One man feels its trunk and says an elephant is like a snake. Another feels the leg and says an elephant is
like a tree. And so on.
Likewise, a lot of different behavior patterns are commonplace in the world. There’s Exponential Growth and the S-Shaped Plateau, “Peak” Behavior (like Peak Oil) and slow Stagnation. Economists and
ecologists and epidemiologists, etc, have come up with mathematical descriptions for each separately.
All of these, though, I think can be thought of as manifestations of the same underlying process – different behaviors of the same elephant. This is what I think that elephant looks like: A
Non-Renewable Resource gets converted into a Usable Good, which can then be used up or destroyed. Visually, this can be represented as:
Think about ‘Crude Oil’ being converted into ‘Gasoline’, or ‘Potential Buyers’ of a product being converted into ‘Current Owners’.
There are actually several ways I have found that S-shaped growth can happen – and ways it can look like Exponential Growth, Stagnation and so forth – and all are derivatives of the diagram above.
When the Non-Renewable Resource is available in essentially unlimited supply and the growth is restricted by some ‘carrying capacity’, then the structure above reduces to the following:
One way to represent ‘effect of remaining capacity’ is simply ’1 – (Yeast Cells / max Yeast Cells)’. The growth rate is some constant (say, 0.05) multiplied by this effect. Thus, when the number of
yeast cells is small, the growth rate is highest (near 5%, in this case) and as the population grows, the growth rate declines.
If cells can die, and the death rate goes up as the population goes up, this also gives S-shaped growth.
This is similar in concept to the birth rate declining as the population grows, but instead affecting the death rate.
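A minimal sketch of the carrying-capacity version in Python (the constants are illustrative); the death-rate variant just described works the same way, with the extra term applied to the outflow instead:

```python
def logistic_growth(y0, capacity, rate, steps):
    """S-shaped growth: each period the population grows by
    rate * (1 - y / capacity) * y, so growth is near `rate` when the
    population is small and falls to zero as y approaches `capacity`."""
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y + rate * (1.0 - y / capacity) * y)
    return ys
```

Because the growth term shrinks linearly as y approaches the capacity, the curve rises quickly at first and then levels off into the S-shaped plateau.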
A delay in a system can sometimes be thought of as similar to riding on a conveyor belt. You get on the conveyor, ride for a while, and then get off. For example, a house is built in one year,
exists for 30 or 40 or 50 years, and is then pulled down to be replaced by a bigger, newer building.
If there is growth in the number of houses being built, it will take 30 or 40 or 50 years for the number of houses being replaced to catch up. In the intervening time, the number of houses in
existence can follow an S-shaped profile.
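The conveyor-belt picture is also easy to sketch: cohorts of houses ride a queue for a fixed lifetime and are then demolished. The 40-year lifetime and 100-houses-per-year build rate here are made up for illustration:

```python
from collections import deque

def housing_stock(build_per_year, lifetime, years):
    """Conveyor-belt delay: each year's construction rides the belt for
    `lifetime` years and is then demolished; the stock of houses is
    everything currently on the belt."""
    belt = deque([0.0] * lifetime)   # one age cohort per year, oldest on the left
    stock, history = 0.0, []
    for year in range(years):
        built = build_per_year(year)
        demolished = belt.popleft()  # the cohort that just reached `lifetime`
        belt.append(built)
        stock += built - demolished
        history.append(stock)
    return history
```

With a constant build rate the stock climbs for exactly one lifetime and then plateaus, since from that point on demolition matches construction.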
Finally, when there is a Non-renewable Resource and the Good it generates can decay, the system can exhibit an S-shaped profile. In the example below, the rate of buying (the rate at which
Nonowners become Owners) is simply ‘f * Nonowners * Owners’, where ‘f’ is a small fraction.
If Owners can never become former Owners, then the number of Owners will show an S-shaped profile. If a small portion of Owners disappear each time period, then the S-shaped profile will be followed
by a long period of perpetual stagnation, like what is shown in the figure below – a simulation of the model above with f = 0.00005 and ‘becoming former Owners’ being 0.001 of the number of Owners.
In this graph, we can see all of the parts of the elephant: initially Exponential Growth, followed by S-Shaped Growth, followed by a peak, followed by a long period of Stagnation. Modeling the
behavior of the system is simply a matter of finding the variation of the models above that most closely resembles the structure of the real system. I think that many physical, social and economic
patterns fit this profile in the long run – stock prices of companies, populations of countries – but we often only see a small selection of the overall behavior profile and mistake the elephant for
being just a trunk or a leg.
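The peak-then-stagnation run described above (f = 0.00005, with 0.1% of Owners leaving each period) can be reproduced in a few lines; the starting stocks here are my own choices:

```python
def owners_curve(f, leave_frac, nonowners=10000.0, owners=10.0, steps=2000):
    """Adoption with attrition: each period 'f * Nonowners * Owners'
    Nonowners buy (clamped to the remaining stock), while `leave_frac`
    of Owners leave the system entirely."""
    history = []
    for _ in range(steps):
        buying = min(f * nonowners * owners, nonowners)
        nonowners -= buying
        owners += buying - leave_frac * owners
        history.append(owners)
    return history
```

The resulting series shows the whole elephant: exponential growth while Nonowners are plentiful, an S-shaped rollover, a peak once the pool of Nonowners is exhausted, and then a long slow decline.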
Toward the rectilinear crossing number of Kn: new drawings, upper bounds, and asymptotics
Results 1 - 10 of 16
- Graphs and Combinatorics , 2003
Cited by 32 (14 self)
We give a new lower bound for the rectilinear crossing number cr(n) of the complete geometric graph Kn. We prove that cr(n) ≥ (1/4)⌊n/2⌋⌊(n−1)/2⌋⌊(n−2)/2⌋⌊(n−3)/2⌋ and we extend the proof of …
- Contemporary Mathematics Series, 342, AMS 2004 , 2003
Cited by 18 (0 self)
Introduction Let S be a set of n points in general position in the plane, i.e. no three points are collinear. Four points in S may or may not form the vertices of a convex quadrilateral; if they do,
we call this subset of 4 elements convex. We are interested in the number of convex 4-element subsets. This can of course be as large as (n choose 4), if S is in convex position, but what is its minimum? Another
way of stating the problem is to find the rectilinear crossing number of the complete n-graph K n , i.e., to determine the minimum number of crossings in a drawing of K n in the plane with straight
edges and the nodes in general position. We note here that the rectilinear crossing number of complete graphs also determines the rectilinear crossing number of random graphs (provided the
probability for an edge to appear is at least ln n ), as was shown by Spencer and Toth [13]. It is easy to see that for n = 5 we get at least one convex 4-element subset, from which it follows by
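The quantity discussed in this abstract is easy to brute-force for small point sets: four points in general position are in convex position exactly when none of them lies inside the triangle spanned by the other three. A sketch (helper names are mine):

```python
from itertools import combinations

def orient(a, b, c):
    """Signed area test: > 0 if a->b->c turns left, < 0 if it turns right."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def inside_triangle(p, a, b, c):
    """True if p lies strictly inside triangle abc (general position assumed)."""
    s1, s2, s3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    return (s1 > 0 and s2 > 0 and s3 > 0) or (s1 < 0 and s2 < 0 and s3 < 0)

def convex_quadrilaterals(points):
    """Count 4-element subsets in convex position: no point of the quadruple
    may lie inside the triangle spanned by the other three."""
    count = 0
    for quad in combinations(points, 4):
        if not any(inside_triangle(quad[i],
                                   *[q for j, q in enumerate(quad) if j != i])
                   for i in range(4)):
            count += 1
    return count
```

For points in convex position every 4-subset counts, which gives the (n choose 4) maximum mentioned above.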
- Geom
Cited by 18 (3 self)
We provide a new lower bound on the number of (≤ k)-edges of a set of n points in the plane in general position. We show that for 0 ≤ k ≤ ⌊(n−2)/2⌋ the number of (≤ k)-edges is at least
E_k(S) ≥ 3 (k+2 choose 2) + Σ_{j=⌊n/3⌋..k} (3j − n + 3), which, for k ≥ ⌊n/3⌋, improves the previous best lower
of the complete graph or, in other words, on the minimum number of convex quadrilaterals determined by n points in the plane in general position. We show that the crossing number is at least
- J. Combin. Theory Ser. A
Cited by 14 (11 self)
We give a new upper bound for the rectilinear crossing number cr(n) of the complete geometric graph Kn. We prove that cr(n) ≤ 0.380559 (n choose 4) + Θ(n^3) by means of a new construction
based on an iterative duplication strategy starting with a set having a certain structure of halving lines.
- IN GRAPH DRAWING 2006; LECTURE NOTES IN COMPUTER SCIENCE 4372 , 2007
Cited by 14 (1 self)
Tree decompositions of graphs are of fundamental importance in structural and algorithmic graph theory. Planar decompositions generalise tree decompositions by allowing an arbitrary planar graph to
index the decomposition. We prove that every graph that excludes a fixed graph as a minor has a planar decomposition with bounded width and a linear number of bags. The crossing number of a graph is
the minimum number of crossings in a drawing of the graph in the plane. We prove that planar decompositions are intimately related to the crossing number. In particular, a graph with bounded degree
has linear crossing number if and only if it has a planar decomposition with bounded width and linear order. It follows from the above result about planar decompositions that every graph with bounded
degree and an excluded minor has linear crossing number. Analogous results are proved for the convex and rectilinear crossing numbers. In particular, every graph with bounded degree and bounded
tree-width has linear convex crossing number, and every K3,3-minor-free graph with bounded degree has linear rectilinear crossing number.
- Electron. J. Combin., 8(1):Research Paper , 2000
Cited by 11 (0 self)
The rectilinear crossing number of a graph G is the minimum number of edge crossings that can occur in any drawing of G in which the edges are straight line segments and no three vertices are
collinear. This number has been known for G = Kn if n ≤ 9. Using a combinatorial argument we show that for n = 10 the number is 62.
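As a quick sanity check, the Ábrego–Fernández-Merchant style lower bound (1/4)⌊n/2⌋⌊(n−1)/2⌋⌊(n−2)/2⌋⌊(n−3)/2⌋ quoted in the first entry of this list is easy to evaluate and compare with the exact value cr(K10) = 62 given here:

```python
def rectilinear_lower_bound(n):
    """(1/4) * floor(n/2) * floor((n-1)/2) * floor((n-2)/2) * floor((n-3)/2).
    The product always contains two even factors, so the division is exact."""
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4
```

For n = 10 the bound gives 60, consistent with (and close to) the exact value 62.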
- of Kn, Discr. Comput. Geom , 2004
Cited by 11 (9 self)
We use circular sequences to give an improved lower bound on the minimum number of (≤ k)-sets in a set of points in general position. We then use this to show that if S is a set of n points in
general position, then the number of convex quadrilaterals determined by the points in S is at least 0.37533 (n choose 4). This in turn implies that the rectilinear crossing number cr(Kn) of the complete
graph Kn is at least 0.37533 (n choose 4), and that Sylvester's Four Point Problem Constant is at least 0.37533. These improved bounds refine results recently obtained by Ábrego and Fernández-Merchant, and by
Lovász, Vesztergombi, Wagner and Welzl.
- Proc. 14th ACM-SIAM Sympos. Discr. Alg , 2003
Cited by 8 (1 self)
We prove a lower bound of 0.3288 (n choose 4) for the rectilinear crossing number cr(Kn) of a complete graph on n vertices, or in other words, for the minimum number of convex quadrilaterals in any set of n
points in general position in the Euclidean plane. As we see it, the main contribution of this paper is not so much the concrete numerical improvement over earlier bounds, as the novel method of
proof, which is not based on bounding cr(Kn) for some small n.
, 2006
Cited by 6 (5 self)
It is known that every generalized configuration with n points has at least 3 (k+2 choose 2) (≤ k)-pseudoedges, and that this bound is tight for k ≤ n/3 − 1. Here we show that this bound is no longer
tight for (any) k > n/3 − 1. As a corollary, we prove that the usual and the pseudolinear (and hence the rectilinear) crossing numbers of the complete graph Kn are different for every n ≥ 10. It has
been noted that all known optimal rectilinear drawings of Kn share a triangular–like property, which we abstract into the concept of 3–decomposability. We give a lower bound for the crossing numbers
of all pseudolinear drawings of Kn that satisfy this property. This bound coincides with the best general lower bound known for the rectilinear crossing number of Kn, established recently in a
groundbreaking work by Aichholzer, García, Orden, and Ramos. We finally use these results to calculate the pseudolinear (which happen to coincide with the rectilinear) crossing numbers of Kn for n ≤
12 and n = 15.
Part 1 A 4.5 Kg Particle Moves At A Constant Speed ... | Chegg.com
Part 1
A 4.5 kg particle moves at a constant speed of
1 m/s around a circle of radius 5 m.
What is its angular momentum about the
center of the circle?
Answer in units of kg · m²/s
Part 2
What is its moment of inertia about an axis
through the center of the circle and perpendicular to the plane of the motion?
Answer in units of kg · m²
Part 3
What is the angular speed of the particle?
Answer in units of rad/s
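All three parts reduce to one-line formulas (L = mvr about the center, I = mr^2, ω = v/r); a quick check in Python:

```python
def particle_on_circle(m, v, r):
    """Point mass m at constant speed v on a circle of radius r, about the
    circle's center: angular momentum L = m*v*r, moment of inertia
    I = m*r**2, angular speed omega = v/r."""
    return m * v * r, m * r * r, v / r

L, I, omega = particle_on_circle(4.5, 1.0, 5.0)
```

This gives L = 22.5 kg·m²/s, I = 112.5 kg·m² and ω = 0.2 rad/s; the three answers are mutually consistent since L = Iω.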
Nth Order Fixed Memory Adaptive Polynomial System
The n^th Order Adaptive Polynomial Next Bar's Forecast Price, Velocity & Acceleration Systems v3.
This package contains the advanced mathematical technique and noise filter called the n^th Order Fixed Memory Polynomial that calculates the next bar's forecast price, velocity and acceleration.
This advanced mathematical technique is currently used in today's space-age missile and satellite applications and is applied here to stock and futures trading.
The n^th Order Fixed Memory Adaptive Polynomial System.
Using fast advanced mathematical rocket science algorithms that use discrete orthogonal polynomials, the price series is modeled using an nth order polynomial of the form:
price(t) = a[0]+a[1]*t+a[2]*t^2+a[3]*t^3+a[4]*t^4+...+a[n]*t^n
where all of the a[0]…a[n] coefficients are recalculated with each new price bar. These fast and efficient algorithms use discrete orthogonal polynomials which avoid the ill-conditioned matrix
inversion and floating point overflow errors associated with the slow matrix inversion algorithms currently used by many to calculate the polynomial coefficients. The polynomial coefficients are
computed at each new price bar and give the polynomial's 1-bar-ahead prediction for the price, velocity and acceleration. The Next Bar's Forecast Price system follows the 1-bar-ahead generated
curve and issues buy and sell signals based upon the curve turning up or down by a fixed percentage from a curve bottom or top. The Next Bar's Forecast Velocity and Acceleration systems follow
the 1-bar-ahead generated velocity and acceleration curves and issue buy and sell signals when the next bar's forecast velocity and acceleration cross some noise threshold. These types of
polynomial systems are best illustrated in the following working papers, which are available on the Working Papers page.
Please note that hypothetical Out-Of-Sample past performance is no guarantee of future results
1. Walk Forward Analysis Using The Acceleration System on E-Mini 1min Bars 27 pages
2. The British Pound Cubed - Daily Bars. 31 pages
3. IBM Cubed - Daily Bars. 22 pages
4. 4^th Order Polynomial System on SP500 5min Bar Futures. 11 pages
5. Trading The Least Squares Curve On IBM 5min Bars. 10 pages
6. The Polynomial Velocity System Applied To E-Mini 1min Bars Sep/2005 12 pages,
7. The Polynomial Velocity System Applied To Japanese Yen FX 1min Bars Dec/2005 12 pages
8. The n^th Order Adaptive Polynomial Acceleration System Applied To Japanese Yen 1min Bars June/2006 11 pages
9. Out Of Sample Analysis Applied To The n^th Order Adaptive Polynomial Velocity Strategy and Eurodollor Futures April/2008 16 pages
Although 2^nd, 3^rd and 4^th order polynomials were used in these papers, any polynomial order up to the 6^th power can be implemented with this system. In the papers above, the Nth Order Fixed
Memory Polynomial system was applied to daily bars of the British Pound future, daily bars of IBM, 5min intraday bars of IBM, 5min intraday bars of the S&P500 futures and 1min bars of the E-Mini.
While some of the working papers used matrix inversion techniques to compute the polynomial
coefficients, this software uses fast and efficient orthogonal polynomials to compute the next bar's forecasted price, velocity and acceleration.
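The vendor's DLL is closed, but the general technique described here can be sketched generically: orthogonalize the monomials 1, t, t^2, … over the trailing window of bars (so no matrix inversion is needed), project the prices onto that basis, and evaluate the fitted polynomial and its derivative one bar ahead. The function names and structure below are mine, not the product's:

```python
def _polyval(coeffs, t):
    """Evaluate a polynomial given ascending coefficients [a0, a1, ...]."""
    return sum(c * t**i for i, c in enumerate(coeffs))

def _polyadd(p, q, scale=1.0):
    """Coefficient lists of p + scale*q."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) +
            scale * (q[i] if i < len(q) else 0.0) for i in range(n)]

def fit_forecast(prices, order):
    """Least-squares fit of an `order`-degree polynomial to `prices`
    (sampled at t = 0..N-1) using monomials orthogonalized over the sample
    points (Gram-Schmidt, no matrix inversion), then extrapolate the price
    and its derivative ("velocity") one bar ahead at t = N."""
    ts = range(len(prices))
    def inner(p, q):
        return sum(_polyval(p, t) * _polyval(q, t) for t in ts)
    basis = []
    for k in range(order + 1):
        p = [0.0] * k + [1.0]           # start from the monomial t^k
        for q in basis:                  # orthogonalize against earlier polys
            p = _polyadd(p, q, -inner(p, q) / inner(q, q))
        basis.append(p)
    fit = [0.0]
    for q in basis:                      # project the data onto each basis poly
        c = sum(y * _polyval(q, t) for t, y in zip(ts, prices)) / inner(q, q)
        fit = _polyadd(fit, q, c)
    t_next = len(prices)
    deriv = [i * c for i, c in enumerate(fit)][1:]   # d/dt of the fit
    return _polyval(fit, t_next), _polyval(deriv, t_next)
```

On an exactly polynomial price series the 1-bar-ahead forecast and velocity are recovered to machine precision; on real prices, the polynomial order and window length become the tuning parameters the manual discusses.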
Product Description
All systems are orientated to short term trading in all bar ranges from 1 tic, to 1 min bars to daily bars. These systems can even be used on P&F charts. All of the systems have been walk forward
tested either on daily data or intraday data.
For TradeStation and MultiCharts all of the EasyLanguage™ strategy and indicator codes are directly importable into your choice of TS or MC and are fully disclosed. There are no locks of any kind on
the EasyLanguage source code. The C++ DLL code is not disclosed. The Input parameters to the strategies and indicators are changeable and optimizable so that the user can develop his own parameter
set on his price series and time frame of interest. Although the system results will give parameters for the intraday or daily futures the system was tested on, the user can easily use this system on
any tradeable or on any time frame.
For NeuroShell Trader/DayTrader Pro, the Trading Strategies and Indicators are directly imported into NeuroShell via a special setup exe file and are fully disclosed in the Manual, Indicator wizard
"MA_FixmXVAn" category and in the Trading Strategy Wizard "MA_FixmXVAn" directory. The C++ DLL code is not disclosed. The Input parameters to the strategies and indicators are changeable and
optimizable so that the user can develop his own parameter set on his price series and time frame of interest. Although the strategy results will give parameters for the intraday or daily futures the
strategy was tested on, the user can easily use this system on any tradeable or on any time frame.
The accompanying manual consists of:
• A short tutorial on the details of performing walk forward optimization with out-of-sample testing using TradeStation and how I look for the "best" parameters in a TS combinatorial optimization
run (available in the TS Manual only).
• A complete description of each system, its derivation and its input parameters.
• The walk forward optimization method used and a table of the walk forward results for each system.
• The input parameter test ranges
• How to setup a chart using the Strategies and Indicators in TradeStation or NeuroShell.
• An EasyLanguage Strategy and Indicator code printout (TradeStation only)
• A chart printout with the Strategy & it's associated Indicator with all the system buy and sell signals displayed on the chart.
• Performance Summaries for the test period and the out-of sample period segments.
In addition each system has its exact duplicate in indicator form which is displayable on the price chart and in the charts printout, so that the user can visually see how the buy and sell signals
are generated.
For TradeStation and MultiCharts The n^th Order Fixed Memory Adaptive Polynomial System v2 package consists of a manual with tutorial as described above, Strategies, and Indicators in an ELD file,
and a DLL file. This product is being offered through Meyers Analytics L.L.C. for $395. Shipping is via Email and consists of the Manual in Adobe PDF format, ELD file and DLL file. The n^th Order
Adaptive Polynomial System v2 DLL file has a "Key Licence" that only allows it to be installed on three computers.
For NeuroShell Trader/DayTrader Pro, The n^th Order Fixed Memory Adaptive Polynomial System v2 package consists of a manual as described above and a special setup exe file that installs all the
Trading Strategies, Indicators, and DLL into NeuroShell. This product is being offered through Meyers Analytics L.L.C. for $395. Shipping is via Email and consists of a zip file containing the Manual
in Adobe PDF format and the MA-FixmXVAnSetup.exe setup file. The n^th Order Adaptive Polynomial System v2 DLL file has a "Key Licence" that only allows it to be installed on three computers.
How To Order
To order online, click Order Online. To order via fax or mail using a Visa or Master Card, please fill out the order form on the Order Form page and fax it to the telephone number on the order
form, or mail it to the address on the order form. If you would like to talk to me about the product, please call me at (312) 280-1687, M-F 12pm to 5pm CST. All E-mail queries can be sent to
Thank you for your Interest....Dennis Meyers
I’m Iron Man. (no, I’m not)
I finally saw the movie Iron Man. It was good. I feel that I am qualified to evaluate the movie. When I was in high school, I was totally into comic books. Mostly Spider-man, but I still have a
significant collection of Iron Man comics. Ok, now you know I am not an Iron Man attacker. I will now attack the movie. Sorry, it’s what I do (remember, I already said I liked it). There are several
things I could comment on, in fact I recall some other blog talking about the physics of Iron Man.
My attack will center on the scene where Tony Stark (Iron Man) escapes from captivity with his home made iron man suit. He uses some type of rocket boots to fly away. But alas, the rockets fail
leaving Tony Stark to go plummeting towards the ground. The problem I would like to look at is when Tony Stark crashes into the sand. I question whether he could have survived the landing even with
the suit on. What does the suit do? Maybe it would prevent broken bones and provide an evenly distributed force. However, Tony would still have a large acceleration. This large acceleration could
cause internal damage. So, the goal is to estimate his acceleration when he crashes into the sand. I am going to start with him at his highest point.
How high did he go?
This is a tough question. I am just going to estimate this one. From the above image, it seems like he went pretty high. I am going to use 500 meters. There is a good possibility that in real life in
the movie he went much higher. Hopefully this will be a low estimate. Note that after reaching the highest point, Tony was essentially in free fall. This brings me to the next question.
Was air resistance significant in this fall?
The best way to answer this question is to run a numerical calculation (in vpython – my favorite numerical calculation tool). I need to make some estimates:
• Mass: 300 kg. I am not sure what metal he was using, but I will guess it was steel (or something close to that). Steel has a density of around 7800 kg/m^3. Say he has an average 2 cm thick suit
on him (that includes all the spaces for stuff), and I estimate a human to have a surface area of 2 x 2 x 0.2 m^2 (the first two were for front and back) = 0.8 m^2. This would give a mass of the
suit of m = (0.8 m^2)(0.02 m)(7800 kg/m^3) = 124.8 kg. Ok, so with the man, I will call the total mass 185 kg.
• Surface area – I already did this – I will use 0.4 m^2.
• Coefficient of drag. Wikipedia lists the drag coefficient for a man in the upright position as 1.0 – 1.3. I will use 1.5 because Iron Man is bigger than a man.
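A plain-Python stand-in for the VPython calculation is below; the air density of 1.2 kg/m^3 is my assumption, since the post doesn't state one. Note that with the final estimates above (185 kg, 0.4 m^2, C_d = 1.5) this model lands at roughly 66 m/s rather than 84 m/s (those inputs imply a terminal speed of only about 71 m/s), so the answer is quite sensitive to the drag estimates; either way it comes out well below the 99 m/s no-drag value:

```python
def fall_speed(height, mass, area, c_drag, rho=1.2, g=9.8, dt=0.001):
    """Euler integration of a fall from rest with quadratic air drag:
    m dv/dt = m g - (1/2) rho C_d A v**2.
    Returns the speed once `height` meters have been fallen."""
    k = 0.5 * rho * c_drag * area   # lumped drag constant (kg/m)
    v = y = 0.0
    while y < height:
        v += (g - (k / mass) * v * v) * dt
        y += v * dt
    return v
```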
Putting these parameters into my numerical calculation, I get a final speed of 84 m/s. This is similar to the speed he would have without air resistance (99 m/s) – so, it really doesn’t matter too
much which one I use. Anyway, he hits the sand. I want to calculate the average acceleration he would have. One way to do this would be to use the work-energy principle. As a general rule, if you
are looking at motion and you know the time interval over which a force acts, then use the momentum principle: F_net·Δt = Δp.
If you know the distance over which a force is applied, you can use the work-energy principle: W = F·d = ΔKE.
In this case, I can estimate the distance that the force from the sand is acting on iron man, so I will use work-energy. (Note, you could also do this purely from a kinematics stand point, but I just
like work energy, it’s cool) So, how far did he move while being stopped? Here is a shot after he landed.
From that image, it kind of looks like he landed feet first and his feet are around 1 to 1.5 meters deep. I will call it 1.5 meters (to be conservative). Now, I am ready to do the calculation (I will
do this in a general sense so that if you complain about one of my assumptions – like the starting height, you can easily recalculate). Here are my starting values:
• y[1] = 500 meters. This is the initial height he starts his fall from.
• d = 1.5 meters. This is the depth he falls into the sand, or the distance the force of the sand exerts on him.
• m = 185 kg
I am going to approach this in as few steps as possible. In the work-energy principle, you need to pick a starting and an ending point. I am going to pick iron man at the top of his fall for the
starting point, and iron man in the sand for the ending point. I also need to pick my system. I am going to take JUST iron man as my system. This means that there will not be any gravitational
potential energy, but there WILL be work done by the gravitational force. I will have to break the work done into two parts since the sand does not always exert a force on him. I like diagrams, so
here is one:
The reason I choose these two positions is first – this will include the force of the sand (needed). Second, the change in energy for that case is zero. He both starts and ends at rest and there is
no potential energy. And here is the work-energy principle for this situation: W_gravity + W_sand = ΔE = 0, which gives mg(y[1] + d) − F_sand·d = 0.
Note the dropping of vector notation due to ultimate laziness. The work done by gravity is in the same direction as the displacement. Also notice that I used y[1] + d as the distance for the work
done by gravity (just to be complete). Even as iron man is slowing down in the sand, gravity is still acting on him. For the work done by the sand, it is a negative value since the force is in the
opposite direction as the displacement. I want the acceleration of iron man during this time, so I can use Newton’s second law: F_net = m·a.
The net force is only in the y-direction, so solving for the acceleration in the y-direction: a_y = F_net-y/m.
I need the net force during that time, so that is the force from the sand plus gravity (plus in the vector-sense). So, plugging in: a_y = (F_sand − mg)/m = (mg(y[1] + d)/d − mg)/m.
Yes, that looks confusing. First, it can look a little better by canceling the masses – which do in fact cancel: a_y = g(y[1] + d)/d − g.
If y[1] is much greater than d, this simplifies even further, but I will leave it at this. Note that I am assuming no air resistance (which is mostly ok). Now to plug in my values, here is where if
you disagree, you can plug in your own numbers: a_y = (9.8 m/s^2)(500 m + 1.5 m)/(1.5 m) − 9.8 m/s^2 ≈ 3267 m/s^2.
Granted, this is a big acceleration – but is it too big? Who knows? NASA knows. Yes, they have data – via wikipedia about what kind of accelerations the body can take. I listed this before when I
talked about professor splash’s jump into 1 foot of water. Here is the whole data table from wikipedia:
Note that the above table is in unit of “g’s” where 1 g = 9.8 m/s^2. Iron Man’s acceleration of 3267 m/s^2 is 333 g’s. If Iron Man landed the way indicated by his final position, he would accelerate
“+Gz – blood towards the feet”. NASA lists this direction with a max acceleration of 18 g’s for less than 0.01 seconds. The iron man suit doesn’t decrease the acceleration of Tony Stark’s internal
organs even if it does give him super strength and a built in cell phone. (actually, it’s probably a built in satellite phone). Even if he only fell from 100 meters, he would have an acceleration of
65 g’s.
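The 333 g figure is easy to double-check: since the sand force has to be mg(y1 + d)/d for the kinetic energy to come out to zero, the deceleration in g's is just (y1 + d)/d − 1, essentially the fall height divided by the stopping distance (helper name is mine):

```python
def impact_g_force(fall_height, stop_distance, g=9.8):
    """Average deceleration, in g's, for a drop of `fall_height` brought to
    rest over `stop_distance` (from the work-energy argument:
    a = g*(y1 + d)/d - g, i.e. essentially g * y1/d)."""
    return (g * (fall_height + stop_distance) / stop_distance - g) / g
```

The same 333 comes out of the kinematic route, a = v^2/(2d) with v^2 = 2·g·y1.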
Ok – stop for a second. Think before you act.
I am not asking for the Science and Entertainment Exchange to intervene and make Tony dead in this scene. That would make for a boring movie.
Other Ways
Yes, there are other ways to solve the problem above to find the acceleration. If you know the speed of Iron Man right before he hits the sand and you know how far it takes him to stop, you could use
the following kinematic equation: v^2 = 2·a·d, which gives a = v^2/(2d).
But, I like my way better.
Other things I could have complained about:
• This energy source thing that he wears.
• Using an electromagnet to prevent shrapnel from getting to his heart (couldn’t they just surgically remove this later? Also, is most shrapnel ferromagnetic?)
• So, say this energy thing has tons of energy stored in it. How does this make thrust for flying? It looks like he has rocket boots. But rocket boots have to shoot something out to make it go.
• Momentum is not conserved when he uses his repulsor hand shooter things. He moves a car back, but he is just staying there.
Notice that I could have complained about these things, but I didn’t. Really, I thought it was a good representation of the comic books.
1. #1 Uncle Al January 3, 2009
The energy source – the necessary palladium ring – is cold fusion. Respect the film for its delicate handling of the absurd. A repulsar field’s push is anchored against Heisenberg uncertainty
vacuum fluctuations, 10^90 g/cm^2 equivalent. It’s the stuff that covariantly sources the Casimir effect, Lamb shift, Purcell effect, Rabi vacuum oscillation, electron anomalous g-factor…
If you can believe string theory has 10^(10^5) acceptable vacuum solutions you can hardly criticize a mere 126 minutes of gadget porn. Ask yourself, “how did Iron Man take a whiz?” Now we shall
see if the movie is true to itself – “They say the best weapon is one you never have to fire. I prefer the weapon you only need to fire once.”
2. #2 Frank Stallone January 3, 2009
Since his repulsor hand shooter things are powered by the arc reactor thing which would have a gradual power output falloff curve and not some liquid fuel source that would have an abrupt
boundary between “on” and “off”, it’s plausible (within the movie’s rule set) that they were able to act as retro-rockets, retarding his fall, even if they couldn’t prevent it.
3. #3 beth January 4, 2009
While we watch the film my brother and I wondered why he didn’t look for more uses for the anti-momentum device he had also invented. I figure it was in his pocket or something. It is clearly
also in use when he is testing his suit and he slams into the ceiling and back again. Tony was just too modest to mention it.
4. #4 Anonymous Coward January 4, 2009
I liked your analysis, but I question the assumptions that went into your value for “d”. Given typical impact crater formation, I think it would be safe to assume the local sand height was
significantly higher pre-impact than it is post-impact.
I think Professor Jones’s trip in the refrigerator (in Indiana Jones 4) is a more egregious example of that kind of bad movie physics. Or maybe I’m just saying that because I liked Iron Man more.
5. #5 jacob swink January 7, 2009
your just ruining the movie so shut your silly mouth
6. #6 Unistrut January 7, 2009
Amusingly enough I was able to handwave all the physics problems. It was only when the tank hit him in mid air that my mind went “Wait a minute…”.
7. #7 scidog May 11, 2010
like the move ad many years ago said–”it’s only a movie-it’s only a movie-it’s only a movie”..just no guys with chain saws to freak you out in this one..just close your eyes and say “it’s only a
movie–it’s only a movie”—
8. #8 Powerball May 12, 2010
You killed my Iron Man
9. #9 Merle May 13, 2010
These two would-be nitpicks are actually the same:
# So, say this energy thing has tons of energy stored in it. How does this make thrust for flying? It looks like he has rocket boots. But rocket boots have to shoot something out to make it go.
# Momentum is not conserved when he uses his repulsor hand shooter things. He moves a car back, but he is just staying there.
The repulsors are inertialess (or nearly inertialess) thrusters. The handwavium that lets them work both serves to propel the suit (without reaction mass) and lets him blast things around without
getting pushed back himself.
It blatantly violates real-world motion, but it’s the major suspension of disbelief in the Iron Man story.
10. #10 Rhett Allain May 14, 2010
Matt at Built on Facts did a post on the rockets
Picking out an odd subset
I want to know whether the following property holds:
There exists a constant C such that for any big positive integer N and any nonempty subset M of {1,...,N} there exists a positive integer p(M) $\lt$ log N such that some residue class modulo p(M) contains an odd number of elements from M.
A weaker form of the same property is also useful:
There exists a constant C such that for any big positive integer N and any disjoint (but not simultaneously empty!) subsets M, M' of {1,...,N} there exists a positive integer p(M) $\lt$ C log N such that some residue class modulo p(M) contains different numbers of elements from M and M'.
nt.number-theory prime-numbers
Presumably the first "$p(M) < \log N\phantom.$" should be "$p(M) < C \log N\phantom.$" also? – Noam D. Elkies Dec 16 '11 at 6:09
It is not clear why the second property is weaker than the first. The first looks false. Indeed, fix $x$ small compared with $N$, and for each $p \leq x$ impose the condition that every congruence class mod $p$ arise an even number of times in $M$. If $M$ is regarded in the usual way as a vector in $({\bf Z}/2{\bf Z})^N$, each of our conditions mod $p$ is a homogeneous linear constraint, and there are about $x^2/2$ such constraints, so as long as $x^2/2 < N$ they can be satisfied simultaneously by a nonzero vector, i.e. a nonempty $M$. So $C \log N$ is much too small. – Noam D. Elkies Dec 16 '11 at 6:15
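The dimension count in that comment is easy to check by brute force for small N: regard each element of {1,...,N} as a vector of residue-class parities over GF(2) and look for a linear dependence. The sketch below is my own illustration of the argument, not code from the thread:

```python
def find_even_subset(N, x):
    """Search {1, ..., N} for a nonempty subset M whose intersection with
    every residue class mod p, for every p <= x, has even size.
    Each (p, r) class gives one parity constraint over GF(2), about x^2/2
    in total, so a nonzero solution must exist once x^2/2 < N."""
    pairs = [(p, r) for p in range(2, x + 1) for r in range(p)]

    def sig(n):
        # Parity signature of {n}: one bit per (p, r) constraint it hits.
        v = 0
        for i, (p, r) in enumerate(pairs):
            if n % p == r:
                v |= 1 << i
        return v

    basis = {}  # pivot bit -> (signature, subset mask achieving it)
    for n in range(1, N + 1):
        s, comb = sig(n), 1 << (n - 1)
        while s:
            hi = s.bit_length() - 1
            if hi not in basis:
                basis[hi] = (s, comb)
                break
            bs, bc = basis[hi]
            s, comb = s ^ bs, comb ^ bc
        else:
            # Signature cancelled to zero: comb encodes the subset M.
            return {i + 1 for i in range(N) if (comb >> i) & 1}
    return None
```

For N = 50 and x = 5 there are only 2 + 3 + 4 + 5 = 14 constraints, so a dependence — a nonempty M meeting every class mod 2, 3, 4, 5 in an even number of elements — appears among the first 15 elements, exactly as the comment predicts.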
Is the obvious $C\sqrt{N}$ conjecture true? Seems like the constraints look independent except for the fact that they all mandate that the total number of elements in $M$ is even. – Will Sawin Dec 16 '11 at 8:54
Perhaps ideas connected with large sieve will help here. If $M(p,h)$ denotes the number of elts of $M$ congruent to $h$ mod $p$ then the assumption that $M(p,h)$ is always even should force the
square mean $\sum_h |M(p,h) - |M|/p|^2$ to be, say, twice the size that one might expect for a positive fraction of $p$s. I'm not claiming to have an actual argument, though! – Ben Green Dec 16
'11 at 10:32
The second question is hardly any better than the first. Consider all subsets. For each of them consider the cardinalities of its intersections with $C\log^2 N$ residue classes involved. We have $2^N$ subsets and just $N^{C\log^2 N}=e^{O(\log^3 N)}$ options. So two different sets have the same vector of cardinalities. If the corresponding sets intersect, just remove the common part. $P(M) < C\sqrt N$ is plausible but I'd like to know if it will be of any use for the OP before giving it any thought. – fedja Dec 16 '11 at 14:13
Symbols Definition
The set of signs of flows in output queue
The mth flow in output queue, represents the number of packet belonging to ,
The set of signs of mth flow in output queue,
The nth packet of mth flow
The length of
The length of maximum transmission unit
The set of groups of coding flows, represents corresponding number of groups
The set of signs of groups of coding flows, represents corresponding number of signs
The kth group of coding flows, represents the number of flows in ,
The set of signs of kth group of coding flows, represents corresponding number of signs,
The set of signs of nonempty flows belonging to kth group of coding flows, represents corresponding number of signs,
The set of flows that cannot be encoded, represents corresponding number of flows
The set of signs of flows that cannot be encoded
The set of signs of nonempty flows in , represents corresponding number of signs
Set to 1 if the network coding conditions are met and to 0 otherwise
Counting problem
December 20th 2013, 07:22 AM #1
Counting problem
Hi! It's me again with another counting problem.
I am trying to find the order of the following set (where $n \in \mathbb{Z}$ is fixed): $G_1 = \{ \sigma \in S_n | \sigma (1) = 1 \}$, the stabilizer of 1 in $S_n$.
Now, I have calculated 4 of these:
n = 2: $G_1 = \{ e \}$ so $|G_1| = 1$
n = 3: $G_1 = \{ e, (23) \}$ so $|G_1|= 2$
n = 4: $G_1 = \{ e, (234), (243), (23), (24), (34) \}$ so $|G_1|= 6$
I also did n = 5 which gives: $|G_1| = 24$
The pattern seems to be that $|G_1| = (n - 1)!$, but the examples don't help me to understand the counting in general, which was the intent of doing the examples in the first place.
Any thoughts?
Re: Counting problem
My only thought is, $|S_n| = n!$ since it is the number of ways to permute $n$ items. Hence, $|G_1| = (n-1)!$ since it is the number of ways to permute $n-1$ items (if you have $n$ items, but
stabilize 1, then you are only permuting $n-1$ items).
Re: Counting problem
Yes, it is as you say. (sighs) I was hoping to get back before someone answered. If I had calculated this with the nth position (the stabilizer of the highest place) I would have seen it right
away. I was thinking about it way too hard and I got lost in it.
Re: Counting problem
Sometimes this is called the "fixed group" (or isotropy group) of 1, that is: the elements of S[n] that fix 1. It should be obvious this is, in fact a subgroup of S[n], because it is closed under
composition (the multiplication of S[n]).
There are a couple of way to approach this:
the first is to show that the mapping:
$f:\text{Stab}(1) \to S_{n-1}$ given by:
$(f(\sigma))(k) = (\sigma(k+1)) - 1$, for k = 1,2,...,n-1 is an isomorphism.
The second is to use the orbit-stabilizer theorem:
$|S_n| = |S_n(1)|\ast|\text{Stab}(1)|$
Now because we have the n-1 transpositions (1 k), along with the identity map, it is clear that |S[n](1)| = n (the orbit of 1 has size n, because S[n] acts TRANSITIVELY on the n letters (the set
of cardinality n...in this case {1,2,...,n}) it permutes). Thus:
|Stab(1)| = n!/n = (n-1)!.
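The counting in this thread can also be checked directly by brute force; the sketch below (names mine, not from the thread) enumerates $S_n$ and confirms that the stabilizer of 1 has order $(n-1)!$:

```python
from itertools import permutations
from math import factorial

def stab_size(n):
    """Number of permutations of {1, ..., n} that fix the point 1."""
    return sum(1 for p in permutations(range(1, n + 1)) if p[0] == 1)

# Matches the hand computations above: 1, 2, 6, 24 for n = 2, 3, 4, 5,
# and in general (n-1)!, as the orbit-stabilizer argument predicts.
for n in range(2, 7):
    assert stab_size(n) == factorial(n - 1)
```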
The encyclopedic entry for "ordinal number"
In set theory, an ordinal number, or just ordinal, is the order type of a well-ordered set. They are usually identified with hereditarily transitive sets. Ordinals are an extension of the natural
numbers different from integers and from cardinals. Like other kinds of numbers, ordinals can be added, multiplied, and exponentiated. The finite ordinals (and the finite cardinals) are the natural
numbers: 0, 1, 2, …, since any two total orderings of a finite set are order isomorphic. The least infinite ordinal is ω, which is identified with the cardinal number ℵ₀. However, in the transfinite case, beyond ω, ordinals draw a finer distinction than cardinals on account of their order information. Whereas there is only one countably infinite cardinal, namely ℵ₀ itself, there are uncountably many countably infinite ordinals, namely ω, ω + 1, ω + 2, …, ω·2, ω·2 + 1, …, ω^2, …, ω^3, …, ω^ω, …, ω^(ω^ω), …, ε₀, …. Here addition and multiplication are not commutative: in particular 1 + ω is ω rather than ω + 1, while 2·ω is ω rather than ω·2. The set of all countable ordinals constitutes the first uncountable ordinal ω₁, which is identified with the cardinal ℵ₁ (the next cardinal after ℵ₀). Well-ordered cardinals are identified with their initial ordinals, i.e. the smallest ordinal of that cardinality. The cardinality of an ordinal defines a many-to-one association from ordinals to cardinals.
Ordinals were introduced by Georg Cantor in 1897 to accommodate infinite sequences and to classify sets with certain kinds of order structures on them.
In general, each ordinal α is the order type of the set of ordinals strictly less than α itself. This property permits every ordinal to be represented as the set of all ordinals less than it.
Ordinals may be categorized as: zero, successor ordinals, and limit ordinals (of various cofinalities). Given a class of ordinals, one can identify the α-th member of that class, i.e. one can index
(count) them. A class is closed and unbounded if its indexing function is continuous and never stops. The Cantor normal form uniquely represents each ordinal as a finite sum of ordinal powers of ω.
However, this cannot form the basis of a universal ordinal notation due to such self-referential representations as $\epsilon_0 = \omega^{\epsilon_0}$. Larger and larger ordinals can be defined, but
they become more and more difficult to describe. Any ordinal number can be made into a topological space by endowing it with the order topology; this topology is discrete if and only if the ordinal
is a countable cardinal, i.e. at most ω. A subset of ω + 1 is open in the order topology if and only if either it is cofinite or it does not contain ω as an element.
Ordinals extend the natural numbers
A natural number (which, in this context, includes the number 0) can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. When restricted to finite sets these two concepts coincide; there is only one way to put a finite set into a linear sequence, up to isomorphism. When dealing with infinite sets one has to distinguish between the notion of size, which leads to cardinal numbers, and the notion of position, which is generalized by the ordinal numbers described here. This is because, while any set has only one size (its cardinality), there are many nonisomorphic well-orderings of any infinite set, as explained below.
Whereas the notion of cardinal number is associated to a set with no particular structure on it, the ordinals are intimately linked with the special kind of sets which are called well-ordered (so
intimately linked, in fact, that some mathematicians make no distinction between the two concepts). A well-ordered set is a totally ordered set (given any two elements one defines a smaller and a
larger one in a coherent way) in which there is no infinite decreasing sequence (however, there may be infinite increasing sequences); equivalently, every non-empty subset of the set has a least
element. Ordinals may be used to label the elements of any given well-ordered set (the smallest element being labeled 0, the one after that 1, the next one 2, "and so on") and to measure the "length"
of the whole set by the least ordinal which is not a label for an element of the set. This "length" is called the order type of the set.
Any ordinal is defined by the set of ordinals that precede it: in fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal
42 is the order type of the ordinals less than it, i.e., the ordinals from 0 (the smallest of all ordinals) to 41 (the immediate predecessor of 42), and it is generally identified as the set
{0,1,2,…,41}. Conversely, any set of ordinals which is downward-closed—meaning that any ordinal less than an ordinal in the set is also in the set—is (or can be identified with) an ordinal.
So far we have mentioned only finite ordinals, which are the natural numbers. But there are infinite ones as well: the smallest infinite ordinal is ω, which is the order type of the natural numbers
(finite ordinals) and which can even be identified with the set of natural numbers (indeed, the set of natural numbers is well-ordered—as is any set of ordinals—and since it is downward closed it can
be identified with the ordinal associated to it, which is exactly how we define ω).
Perhaps a clearer intuition of ordinals can be formed by examining a first few of them: as mentioned above, they start with the natural numbers, 0, 1, 2, 3, 4, 5, … After all natural numbers comes
the first infinite ordinal, ω, and after that come ω+1, ω+2, ω+3, and so on. (Exactly what addition means will be defined later on: just consider them as names.) After all of these come ω·2 (which is
ω+ω), ω·2+1, ω·2+2, and so on, then ω·3, and then later on ω·4. Now the set of ordinals which we form in this way (the ω·m+n, where m and n are natural numbers) must itself have an ordinal associated
to it: and that is ω^2. Further on, there will be ω^3, then ω^4, and so on, and ω^ω, then ω^ω², and much later on ε₀ (epsilon nought) (to give a few examples of relatively
small—countable—ordinals). We can go on in this way indefinitely far ("indefinitely far" is exactly what ordinals are good at: basically every time one says "and so on" when enumerating ordinals, it
defines a larger ordinal). The smallest uncountable ordinal is the set of all countable ordinals, expressed as ω₁.
Well-ordered sets
A well-ordered set is an ordered set in which every non-empty subset has a least element: this is equivalent (at least in the presence of the axiom of dependent choice) to just saying that the set is
totally ordered and there is no infinite decreasing sequence, something which is perhaps easier to visualize. In practice, the importance of well-ordering is justified by the possibility of applying
transfinite induction, which says, essentially, that any property that passes on from the predecessors of an element to that element itself must be true of all elements (of the given well-ordered
set). If the states of a computation (computer program or game) can be well-ordered in such a way that each step is followed by a "lower" step, then you can be sure that the computation will terminate.
Now we don't want to distinguish between two well-ordered sets if they only differ in the "labeling of their elements", or more formally: if we can pair off the elements of the first set with the
elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second
set, and vice versa. Such a one-to-one correspondence is called an order isomorphism and the two well-ordered sets are said to be order-isomorphic, or similar (obviously this is an equivalence
relation). Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the sets as essentially identical,
and to seek a "canonical" representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set.
So we essentially wish to define an ordinal as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of "being order-isomorphic". There is a
technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo–Fraenkel (ZF) formalization of set theory. But this is not a serious
difficulty. We will say that the ordinal is the order type of any set in the class.
Definition of an ordinal as an equivalence class
The original definition of ordinal number, found for example in Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that
well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because
these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine's set theory New Foundations and related systems (where it affords a
rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).
Von Neumann definition of ordinals
Rather than defining an ordinal as an equivalence class of well-ordered sets, we will define it as a particular well-ordered set which (canonically) represents the class. Thus, an ordinal number will
be a well-ordered set; and every well-ordered set will be order-isomorphic to exactly one ordinal number.
The standard definition, suggested by John von Neumann, is: each ordinal is the well-ordered set of all smaller ordinals. In symbols, λ = [0,λ). Formally:
A set S is an ordinal if and only if S is strictly well-ordered with respect to set membership and every element of S is also a subset of S.
Note that the natural numbers are ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}.
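For the finite ordinals, this definition can be modelled directly in a few lines, with nested frozensets standing in for pure sets (the function name is mine):

```python
def ordinal(n):
    """The von Neumann ordinal n: the set {0, 1, ..., n-1} of all
    smaller ordinals, built from the empty set by successor steps."""
    o = frozenset()        # 0 is the empty set
    for _ in range(n):
        o = o | {o}        # successor of α is α ∪ {α}
    return o

two, four = ordinal(2), ordinal(4)
assert two in four         # 2 is an element of 4 = {0, 1, 2, 3}
assert two < four          # ... and also a proper subset of it
```

The two assertions are exactly the membership and subset claims from the paragraph above: for von Neumann ordinals, "element of" and "proper subset of" coincide.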
It can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals, that is, there is an order preserving bijective function between them.
Furthermore, the elements of every ordinal are ordinals themselves. Whenever you have two ordinals S and T, S is an element of T if and only if S is a proper subset of T. Moreover, either S is an
element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. Further, every set of ordinals is well-ordered. This generalizes the fact that every set of
natural numbers is well-ordered.
Consequently, every ordinal S is a set having as elements precisely the ordinals smaller than S. For example, every set of ordinals has a supremum, the ordinal obtained by taking the union of all the
ordinals in the set. This union exists regardless of the set's size, by the axiom of union.
The class of all ordinals is not a set. If it were a set, one could show that it was an ordinal and thus a member of itself, which would contradict its strict ordering by membership. This is the
Burali-Forti paradox. The class of all ordinals is variously called "Ord", "ON", or "∞".
An ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its subsets has a maximum.
Other definitions
There are other modern formulations of the definition of ordinal. For example, assuming the axiom of regularity, the following are equivalent for a set x:
• x is an ordinal,
• x is a transitive set, and set membership is trichotomous on x,
• x is a transitive set totally ordered by set inclusion,
• x is a transitive set of transitive sets.
These definitions cannot be used in non-well-founded set theories. In set theories with urelements, one has to further make sure that the definition excludes urelements from appearing in ordinals.
Transfinite sequence
If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. This concept, a transfinite sequence or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω.
Transfinite induction
What is transfinite induction?
Transfinite induction holds in any well-ordered set, but it is so important in relation to ordinals that it is worth restating here.
Any property which passes from the set of ordinals smaller than a given ordinal α to α itself, is true of all ordinals.
That is, if P(α) is true whenever P(β) is true for all β<α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already
known for all smaller β<α.
Transfinite recursion
Transfinite induction can be used not only to prove things, but also to define them. Such a definition is normally said to be by transfinite recursion – the proof that the result is well-defined uses
transfinite induction. Let F denote a (class) function to be defined on the ordinals. The idea now is that, in defining F(α) for an unspecified ordinal α, one may assume that F(β) is already defined for all β < α and thus give a formula for F(α) in terms of these F(β). It then follows by transfinite induction that there is one and only one function satisfying the recursion formula up to and including α.
Here is an example of definition by transfinite recursion on the ordinals (more will be given later): define the function F by letting F(α) be the smallest ordinal not in the class {F(β) | β < α}, that is, the class consisting of all F(β) for β < α. This definition assumes the F(β) known in the very process of defining F; this apparent vicious circle is exactly what definition by transfinite recursion permits. In fact, F(0) makes sense since there is no ordinal β < 0, and the class {F(β) | β < 0} is empty. So F(0) is equal to 0 (the smallest ordinal of all). Now that F(0) is known, the definition applied to F(1) makes sense (it is the smallest ordinal not in the singleton class {F(0)} = {0}), and so on (the "and so on" is exactly transfinite induction). It turns out that this example is not very exciting, since provably F(α) = α for all ordinals α, which can be shown, precisely, by transfinite induction.
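Restricted to the finite ordinals, this recursion can be run directly. A toy sketch (the naive recursion recomputes earlier values, which is harmless at this scale):

```python
def F(n):
    """F(n) = the smallest value not in {F(b) | b < n}, the finite
    case of the transfinite recursion above. Provably F(n) = n."""
    seen = {F(b) for b in range(n)}
    m = 0
    while m in seen:
        m += 1
    return m

assert [F(n) for n in range(6)] == [0, 1, 2, 3, 4, 5]
```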
Successor and limit ordinals
Any nonzero ordinal has a minimum element, zero. It may or may not have a maximum element. For example, 42 has maximum 41 and ω+6 has maximum ω+5. On the other hand, ω does not have a maximum since there is no largest natural number. If an ordinal has a maximum α, then it is the next ordinal after α, and it is called a successor ordinal, namely the successor of α, written α+1. In the von Neumann definition of ordinals, the successor of α is α ∪ {α} since its elements are those of α and α itself.
A nonzero ordinal which is not a successor is called a limit ordinal. One justification for this term is that a limit ordinal is indeed the limit in a topological sense of all smaller ordinals (under
the order topology).
When $\langle \alpha_\iota \mid \iota < \gamma \rangle$ is an ordinal-indexed sequence, indexed by a limit γ and the sequence is increasing, i.e. $\alpha_\iota < \alpha_\rho$ whenever $\iota < \rho$, we define its limit to be the least upper bound of the set $\{ \alpha_\iota \mid \iota < \gamma \}$, that is, the smallest ordinal (it always exists) greater than any term of the sequence. In this sense, a limit ordinal is the limit of all smaller ordinals (indexed by itself). Put more directly, it is the supremum of the set of smaller ordinals.
Another way of defining a limit ordinal is to say that α is a limit ordinal if and only if:
There is an ordinal less than α and whenever ζ is an ordinal less than α, then there exists an ordinal ξ such that ζ < ξ < α.
So in the following sequence:
0, 1, 2, ... , ω, ω+1
ω is a limit ordinal because for any ordinal (in this example, a natural number) we can find another ordinal (natural number) larger than it, but still less than ω.
Thus, every ordinal is either zero, or a successor (of a well-defined predecessor), or a limit. This distinction is important, because many definitions by transfinite induction rely upon it. Very
often, when defining a function F by transfinite induction on all ordinals, one defines F(0), and F(α+1) assuming F(α) is defined, and then, for limit ordinals δ one defines F(δ) as the limit of the
F(β) for all β<δ (either in the sense of ordinal limits, as we have just explained, or for some other notion of limit if F does not take ordinal values). Thus, the interesting step in the definition
is the successor step, not the limit ordinals. Such functions (especially for F nondecreasing and taking ordinal values) are called continuous. We will see that ordinal addition, multiplication and
exponentiation are continuous as functions of their second argument.
Indexing classes of ordinals
We have mentioned that any well-ordered set is similar (order-isomorphic) to a unique ordinal number $alpha$, or, in other words, that its elements can be indexed in increasing fashion by the
ordinals less than $alpha$. This applies, in particular, to any set of ordinals: any set of ordinals is naturally indexed by the ordinals less than some $alpha$. The same holds, with a slight
modification, for classes of ordinals (a collection of ordinals, possibly too large to form a set, defined by some property): any class of ordinals can be indexed by ordinals (and, when the class is
unbounded in the class of all ordinals, this puts it in class-bijection with the class of all ordinals). So we can freely speak of the $gamma$-th element in the class (with the convention that the
“0-th” is the smallest, the “1-th” is the next smallest, and so on). Formally, the definition is by transfinite induction: the $gamma$-th element of the class is defined (provided it has already been
defined for all $beta), as the smallest element greater than the beta-th element for all beta.We can apply this, for example, to the class of limit ordinals: the gamma-th ordinal which is either a
limit or zero is omegacdotgamma (see ordinal arithmetic for the definition of multiplication of ordinals). Similarly, we can consider additively indecomposable ordinals (meaning a nonzero ordinal
which is not the sum of two strictly smaller ordinals): the gamma-th additively indecomposable ordinal is indexed as omega^gamma. The technique of indexing classes of ordinals is often useful in the
context of fixed points: for example, the gamma-th ordinal alpha such that omega^alpha=alpha is written varepsilon_gamma. These are called the "epsilon numbers". Closed unbounded sets and classes A
class of ordinals is said to be unbounded, or cofinal, when given any ordinal, there is always some element of the class greater than it (then the class must be a proper class, i.e., it cannot be a
set). It is said to be closed when the limit of a sequence of ordinals in the class is again in the class: or, equivalently, when the indexing (class-)function F is continuous in the sense that, for
delta a limit ordinal, F(delta) (the delta-th ordinal in the class) is the limit of all F(gamma) for gamma; this is also the same as being closed, in the topological sense, for the order topology (to
avoid talking of topology on proper classes, one can demand that the intersection of the class with any given ordinal is closed for the order topology on that ordinal, this is again equivalent).Of
particular importance are those classes of ordinals which are closed and unbounded, sometimes called clubs. For example, the class of all limit ordinals is closed and unbounded: this translates the
fact that there is always a limit ordinal greater than a given ordinal, and that a limit of limit ordinals is a limit ordinal (a fortunate fact if the terminology is to make any sense at all!). The
class of additively indecomposable ordinals, or the class of varepsilon_cdot ordinals, or the class of cardinals, are all closed unbounded; the set of regular cardinals, however, is unbounded but not
closed, and any finite set of ordinals is closed but not unbounded.A class is stationary if it has a nonempty intersection with every closed unbounded class. All superclasses of closed unbounded
classes are stationary and stationary classes are unbounded, but there are stationary classes which are not closed and there are stationary classes which have no closed unbounded subclass (such as
the class of all limit ordinals with countable cofinality). Since the intersection of two closed unbounded classes is closed and unbounded, the intersection of a stationary class and a closed
unbounded class is stationary. But the intersection of two stationary classes may be empty, e.g. the class of ordinals with cofinality ω with the class of ordinals with uncountable cofinality.Rather
Rather than formulating these definitions for (proper) classes of ordinals, we can formulate them for sets of ordinals below a given ordinal α: A subset of a limit ordinal α is said to be unbounded
(or cofinal) under α provided any ordinal less than α is less than some ordinal in the set. More generally, we can call a subset of any ordinal α cofinal in α provided every ordinal less than α is
less than or equal to some ordinal in the set. The subset is said to be closed under α provided it is closed for the order topology in α, i.e. a limit of ordinals in the set is either in the set or
equal to α itself.

Arithmetic of ordinals

There are three usual operations on ordinals: addition, multiplication, and (ordinal) exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit
well-ordered set which represents the operation, or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. The so-called "natural" arithmetical operations
retain commutativity at the expense of continuity.

Ordinals and cardinals

Initial ordinal of a cardinal

Each ordinal has an
associated cardinal, its cardinality, obtained by simply forgetting the order. Any well-ordered set having that ordinal as its order-type has the same cardinality. The smallest ordinal having a given
cardinal as its cardinality is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, but most infinite ordinals are not initial. The axiom of choice is
equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In this case, it is traditional to identify the cardinal number with its initial
ordinal, and we say that the initial ordinal is a cardinal.

The α-th infinite initial ordinal is written ω_α. Its cardinality is written ℵ_α. For example, the cardinality of ω_0 = ω is ℵ_0, which is also the cardinality of ω² or ε_0 (all are countable
ordinals). So (assuming the axiom of choice) we identify ω with ℵ_0, except that the notation ℵ_0 is used when writing cardinals, and ω when writing ordinals (this is important since ℵ_0² = ℵ_0
whereas ω² > ω). Also, ω_1 is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering
defines a countable ordinal, and ω_1 is the order type of that set), ω_2 is the smallest ordinal whose cardinality is greater than ℵ_1, and so on, and ω_ω is the limit of the ω_n for natural numbers
n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the ω_n).

See also Von Neumann cardinal assignment.

Cofinality

The cofinality of an ordinal α is the smallest ordinal δ which is the order type of a cofinal subset of α.
Notice that a number of authors define cofinality only for limit ordinals, or use it only there. The cofinality of a set of ordinals, or of any other well-ordered set, is the cofinality of the order
type of that set.

Thus for a limit ordinal α of cofinality δ, there exists a δ-indexed strictly increasing sequence with limit α. For example, the cofinality of ω² is ω, because the sequence ω·m (where m ranges over
the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable
cofinality.

The cofinality of 0 is 0, and the cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least ω.

An ordinal which is equal to its cofinality is called regular, and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial, even if it
is not regular (which it usually is not). If the Axiom of Choice holds, then ω_{α+1} is regular for each α. In this case, the ordinals 0, 1, ω, ω_1, and ω_2 are regular, whereas 2, 3, ω_ω, and
ω_{ω·2} are initial ordinals which are not regular.

The cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α; the cofinality operation is idempotent.

Some “large” countable ordinals

We have already mentioned (see Cantor normal form) the ordinal ε_0, which is the smallest ordinal satisfying the equation ω^α = α, so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), etc. Many
ordinals can be defined in such a manner, as fixed points of certain ordinal functions (the ι-th ordinal such that ω^α = α is called ε_ι; then we could go on trying to find the ι-th ordinal such that
ε_α = α, “and so on”, but all the subtlety lies in the “and so on”). We can try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an
ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal which limits a system of construction in this manner is the Church–Kleene ordinal, ω_1^CK
(despite the ω_1 in the name, this ordinal is countable), which is the smallest ordinal which cannot in any way be represented by a computable function (this can be made rigorous, of course).
Considerably large ordinals can be defined below ω_1^CK, however, which measure the “proof-theoretic strength” of certain formal systems (for example, ε_0 measures the strength of Peano arithmetic).
Large ordinals can also be defined above the Church–Kleene ordinal; these are of interest in various parts of logic.

Topology and ordinals

Any ordinal can be made into a topological space in a natural way by endowing it with the order topology.
See the Topology and ordinals section of the "Order topology" article.

Downward closed sets of ordinals

A set is downward closed if anything less than an element of the set is also in the set. If a set of ordinals is downward closed, then that set is an ordinal, namely the least ordinal not in the set.
Examples:

The set of ordinals less than 3 is 3 = { 0, 1, 2 }, the smallest ordinal not less than 3.
The set of finite ordinals is infinite, the smallest infinite ordinal: ω.
The set of countable ordinals is uncountable, the smallest uncountable ordinal: ω_1.

See also: Counting

References

Conway, J. H. and Guy, R. K. "Cantor's Ordinal Numbers." In The Book of Numbers. New York: Springer-Verlag, pp. 266-267 and 274, 1996.
Hamilton, A. G. (1982). Numbers, Sets, and Axioms: the Apparatus of Mathematics. New York: Cambridge University Press. ISBN 0521245095. See Ch. 6, "Ordinal and cardinal numbers". Reprinted 2002, Dover. ISBN 0-486-42079-5.
Sierpiński, W. (1965). Cardinal and Ordinal Numbers (2nd ed.). Warszawa: Państwowe Wydawnictwo Naukowe. Also defines ordinal operations in terms of the Cantor Normal Form.
Patrick Suppes, Axiomatic Set Theory, D. Van Nostrand Company Inc., 1960, ISBN 0-486-61630-4.

External links

Ordinals at ProvenMath
Beiträge zur Begründung der transfiniten Mengenlehre, Cantor's original paper published in Mathematische Annalen 49(2), 1897
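As a concrete footnote to the Arithmetic of ordinals section above: ordinal addition is not commutative (1 + ω = ω, while ω + 1 > ω). The toy sketch below makes this visible for ordinals below ω^ω, represented in Cantor normal form as lists of (exponent, coefficient) pairs in decreasing exponent order. The representation and the name `cnf_add` are our own illustration, not standard library code:

```python
# Ordinals below ω^ω in Cantor normal form: a list of (exponent, coefficient)
# pairs, exponents strictly decreasing. Addition absorbs the terms of the
# left operand that are smaller than the leading term of the right operand.

def cnf_add(a, b):
    """Ordinal addition in CNF: drop terms of `a` below b's leading
    exponent; merge coefficients when the leading exponents coincide."""
    if not b:
        return list(a)
    lead_exp = b[0][0]
    kept = [(e, c) for (e, c) in a if e > lead_exp]
    if any(e == lead_exp for e, _ in a):
        c_a = next(c for e, c in a if e == lead_exp)
        return kept + [(lead_exp, c_a + b[0][1])] + b[1:]
    return kept + list(b)

one = [(0, 1)]      # the ordinal 1 = ω^0 · 1
omega = [(1, 1)]    # the ordinal ω = ω^1 · 1

print(cnf_add(one, omega))    # [(1, 1)]          i.e. 1 + ω = ω
print(cnf_add(omega, one))    # [(1, 1), (0, 1)]  i.e. ω + 1 > ω
print(cnf_add(omega, omega))  # [(1, 2)]          i.e. ω + ω = ω·2
```

The asymmetry in the output is exactly the failure of commutativity the article mentions; the "natural" (Hessenberg) sum, by contrast, would merge like terms symmetrically.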
|
{"url":"http://www.reference.com/browse/ordinal-number","timestamp":"2014-04-20T09:48:44Z","content_type":null,"content_length":"114940","record_id":"<urn:uuid:8cf3dbbd-3f9e-49b8-b8d4-ff1e662baf97>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Board Feet from a Log
Date: 03/19/2003 at 07:44:43
From: Jeff
Subject: Board feet from a log.
How do you compute board feet from a log?
What is the board feet of a 10-foot log if the diameter is 16 inches
at one end and 14 inches at the other end?
I read in a book that the board feet can be computed by multiplying
the length by the square of the diameter and then dividing by 24.
board feet = (l*d^2) divided by 24
The book said to use the diameter of the small end of the log.
Using this I compute the board feet to be about 82.
However, a student of mine looked up the board feet on a chart off
the internet, which said 63 board feet.
Please help.
Date: 03/19/2003 at 12:06:30
From: Doctor Peterson
Subject: Re: Board feet from a log.
Hi, Jeff.
The problem is that there are various ways to estimate the number of
board feet in a log, none of which is really accurate. It is not a
precise calculation of volume, since it takes into account losses due
to kerf and making straight boards, and generally assumes rather than
measuring the taper of a log. I did a Google search to find formulas
for these rules, and the fullest discussion is at
Natural Resource Biometrics
This gives several rules, none of them as simple as what you used.
The most accurate, the International Rule, with your example is
0.44 D^2 - 1.20 D - 0.30 = 69.14 board feet
with D=14 inches for an 8-foot log. Since there is no rule given for
a 10-foot log, we can scale this up by multiplying by 10/8 to get about 86.4 board feet.
The widely used Doyle Rule is
(D - 4)^2 * L/16 = 62.5
for D=14 inches and L=10 feet.
These two answers are similar to the two that you and your student
got; yours is presumably more accurate.
Here are a couple other pages I found with other details:
Understanding Log Scales and Log Rules (PDF file)
Volume and weight of harvested trees, and log rules (PDF file)
The first of these compares the rules and shows how they are used.
The second gives a brief summary and lists pros and cons, such as
which rules tend to over- or underestimate.
Just to see how far off we would be if we calculated the actual volume
of the log, let's use the formula for volume of a frustum:
V = pi(R^2 + rR + r^2)h/3
You have R=8 in, r=7 in, and L=10 ft. A board-foot is a foot^2-inch;
so we can convert L to 120 inches and divide the volume we get by 144:
V = pi(8^2 + 8*7 + 7^2)*120/3 = 21,237 in^3 = 147.5 board-feet
If I did that right, a lot is lost when you cut the log into boards!
The extra 60 board feet is a lot of sawdust and scraps.
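For readers who want to replay these numbers, here is a small sketch of the three calculations above. The function names are ours, and the 10/8 scaling of the 8-foot International figure follows the same approximation used in the answer:

```python
import math

def doyle(d_in, l_ft):
    """Doyle rule: (D - 4)^2 * L / 16, with D in inches, L in feet."""
    return (d_in - 4) ** 2 * l_ft / 16

def international_8ft(d_in):
    """International rule for an 8-foot log, in the form quoted above."""
    return 0.44 * d_in**2 - 1.20 * d_in - 0.30

def frustum_board_feet(r_big_in, r_small_in, l_ft):
    """True geometric volume of the tapered log as a frustum,
    converted to board feet (1 board foot = 144 cubic inches)."""
    h_in = l_ft * 12
    v = math.pi * (r_big_in**2 + r_big_in * r_small_in + r_small_in**2) * h_in / 3
    return v / 144

print(doyle(14, 10))                       # 62.5
print(international_8ft(14) * 10 / 8)      # ≈ 86.4
print(frustum_board_feet(8, 7, 10))        # ≈ 147.5
```

Comparing the last line with the first two shows the same 40-60% "sawdust and scraps" gap discussed above.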
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
Date: 04/15/2009 at 12:50:08
From: David
Subject: Board Feet in a log discussion
Hi, I saw your dialog about board feet in a log and had a thought
or two coming at the discussion from the Logger/woodworker (I have
some experience logging, managing wood lots and working in wood)
perspective in case you are interested.
I would have guessed that the loss in milling the logs would be 40
or 45% but I think your calculations indicating a even higher percentage
could be close, too. There are a few reasons that may be of interest.
1) kerf size can be anywhere between 3/16” to 3/8” or more depending
on the type of blade.
2) you cannot simply slice up a log and square off the edges to make
quality lumber although many mills that make poor lumber do. For example,
you will never see the center rings of a log in the end grain of a #1
quality piece of lumber as such a board will warp excessively and move
a lot over time. So, you should always reject such boards at Home Depot
or Lowes when buying larger dimensions of lumber.
3) there are other demands from the market that create more waste such
as the width of most lumber is in even increments.
4) most lumber is planed after it is milled creating more waste. That
is one of the reasons that a 2x4 is really more like a 1 3/8 x 3 1/2.
Hope all of this is helpful or at least interesting. Peace to you.
Date: 04/15/2009 at 15:04:39
From: Doctor Peterson
Subject: Re: Board Feet in a log discussion
Hi, David.
Thanks for writing.
The waste I calculated in the example was something like 147.5 -
86.4 = 61.1; as a percentage that is 61.1/147.5 = 41%, which fits
right in your estimated ballpark. The lower estimate, 62.5, gives
waste of about (147.5 - 62.5)/147.5 = 58%, which would be higher.
I always like to know the reasons for a formula, and your details
help to show what is involved in what I called "a lot of sawdust and
scraps"! In particular, this demonstrates the fact that, although
math gives nice accurate numbers in theory, reality trims off a lot
of the precision and makes rough estimates much more valuable than
"exact" calculations.
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/62471.html","timestamp":"2014-04-16T08:13:38Z","content_type":null,"content_length":"10565","record_id":"<urn:uuid:78a9c527-423d-43a1-ad82-3f95134bc30e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thomas Industrial Library
EXAMPLE 9-4
Comparison of Kinetostatic Torques and Forces Among Three Alternate Designs of the Same Cam.
Given: A translating roller follower as shown in Figure 9-1a (p. 214) is driven by a force-closed radial plate cam that has the following program:
Design 1
Segment 1: Rise 1 inch in 90 ° double harmonic displacement
Segment 2: Fall 1 inch in 90 ° double harmonic displacement
Segment 3: Dwell for 180 °
Design 2
Segment 1: Rise 1 inch in 90 ° cycloidal displacement
Segment 2: Fall 1 inch in 90 ° cycloidal displacement
Segment 3: Dwell for 180 °
Design 3
Segment 1: Rise 1 inch in 90 ° and fall 1 inch in 90 ° with polynomial displacement
Segment 2: Dwell for 180 °
Camshaft velocity is 15 rad/sec (143.24 rpm); Follower effective mass is
0.0738 lb-sec^2/in (blobs); Damping is 10% of critical (ζ = 0.10)
Problem: Find the dynamic force and torque functions for the cam. Compare their peak magnitudes for the same prime circle radius.
1 Calculate the kinematic data (follower displacement, velocity, acceleration, and jerk) for each of the specified cam designs. See Chapter 8 to review this procedure.
2 Calculate the radius of curvature and pressure angle for trial values of prime circle radius, and size the cam to control these values. A prime circle radius of 3 in gives acceptable pressure
angles and radii of curvature. See Chapter 7 to review these calculations.
3 With the kinematics of the cam defined, we can address its dynamics. To solve equation 9.1a (p. 215) for the cam force, we will assume a value of 50 lb/in for the spring constant k and adjust the
preload Fpl for each design to obtain a minimum dynamic force of about 10 lb. For design 1, this requires a spring preload of 28 lb; for design 2, 13 lb; and for design 3, 10 lb.
4 The value of damping c is calculated from equation 9.2i (p. 218). The kinematic parameters x , v, and a are known from the prior analysis.
5 Program DYNACAM will do these computations for you. The dynamic forces that result from each design are shown in Figure 9-18 and the torques in Figure 9-19. Note that the force is largest for
design 1 at 82 lb peak and least for design 3 at 53 lb peak. The same ranking holds for the torques, which range from 96 lb-in for design 1 to 52 lb-in for design 3. These represent reductions of 35%
and 46% in the dynamic loading due to a change in the kinematic design. Not surprisingly, the sixth-degree polynomial design, which had the lowest acceleration, also has the lowest forces and torques
and is the clear winner. Open the files E09-04a.cam, E09-04b.cam, and E09-04c.cam in program DYNACAM to see these results.
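Equation 9.1a is not reproduced on this page; for a force-closed cam the kinetostatic follower force has the standard form F = m·a + c·v + k·x + F_preload, with the damping coefficient recovered from the ratio as c = 2ζ·sqrt(k·m). The sketch below is our own rough evaluation of that model over the cycloidal rise segment of Design 2 using the data above; it is not DYNACAM output and covers only one segment:

```python
import math

# Data from the example (Design 2): mass m [blobs], spring rate k [lb/in],
# damping ratio zeta, camshaft speed w [rad/s], spring preload [lb],
# rise h [in] over segment angle beta [rad].
m, k, zeta, w = 0.0738, 50.0, 0.10, 15.0
F_pl = 13.0
h, beta = 1.0, math.pi / 2

c = 2 * zeta * math.sqrt(k * m)   # damping coefficient from the ratio

def force(theta):
    """Kinetostatic force F = m*a + c*v + k*x + F_pl on a cycloidal rise."""
    u = theta / beta
    x = h * (u - math.sin(2 * math.pi * u) / (2 * math.pi))          # in
    v = (h / beta) * (1 - math.cos(2 * math.pi * u)) * w             # in/s
    a = (2 * math.pi * h / beta**2) * math.sin(2 * math.pi * u) * w**2  # in/s^2
    return m * a + c * v + k * x + F_pl

forces = [force(i * beta / 1000) for i in range(1001)]
print(f"peak force on rise    ~ {max(forces):.1f} lb")
print(f"minimum force on rise ~ {min(forces):.1f} lb")
```

For this one segment the model peaks in the mid-60 lb range, consistent with Design 2 falling between the 53 lb (Design 3) and 82 lb (Design 1) figures quoted in step 5.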
Copyright 2004, Industrial Press, Inc., New York, NY
|
{"url":"http://www.thomasglobal.com/library/UI/Cams/Cam%20Design%20and%20Manufacturing%20Handbook/Kinetostatic%20Camshaft%20Torque_2/2/default.aspx","timestamp":"2014-04-19T22:15:13Z","content_type":null,"content_length":"133716","record_id":"<urn:uuid:98730947-9a82-4929-99fd-01c393c8a295>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
De moivres theorem
January 28th 2009, 10:57 AM
De moivres theorem
Simplify the following
A) (cosx - isinx)^5
B) (sinx - icosx)^4
for A i get
cos5x - isin5x
I think this is right...?
not sure if B is done the same or not. any help?
January 28th 2009, 11:03 AM
Here is what you can do :)
De Moivre's formula says : $(\cos(x)+i\sin(x))^n=\cos(nx)+i \sin(nx)$
Remember that the sine function is odd and the cosine function is even.
So $(\cos(x)-i\sin(x))^n=(\cos(-x)+i \sin(-x))^n=\cos(-nx)+i \sin(-nx)=\cos(nx)-i \sin(nx)$ (Wink)
January 28th 2009, 11:05 AM
so did i do it right :S
is it different when cosine and sine are switched round?
January 28th 2009, 11:09 AM
January 28th 2009, 11:21 AM
hi thanks for helping me. it's actually (sinx - icosx). does this change things??
January 28th 2009, 11:25 AM
Yes :
$\sin(x)-i \cos(x)$
Multiply and divide by i :
$\frac 1i \left(i [\sin(x)-i\cos(x)]\right)=\frac 1i (\cos(x)+i \sin(x))$
So $(\sin(x)-i \cos(x))^n=\frac{1}{i^n} (\cos(nx)+i \sin(nx))$
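Both identities in this thread are easy to spot-check numerically with complex arithmetic; the test angle 0.7 below is arbitrary:

```python
import math

x = 0.7  # arbitrary test angle

# Part A: (cos x - i sin x)^5 = cos 5x - i sin 5x
lhs_a = (math.cos(x) - 1j * math.sin(x)) ** 5
rhs_a = math.cos(5 * x) - 1j * math.sin(5 * x)

# Part B: (sin x - i cos x)^n = (1/i^n)(cos nx + i sin nx), here with n = 4
n = 4
lhs_b = (math.sin(x) - 1j * math.cos(x)) ** n
rhs_b = (math.cos(n * x) + 1j * math.sin(n * x)) / (1j ** n)

print(abs(lhs_a - rhs_a) < 1e-12, abs(lhs_b - rhs_b) < 1e-12)  # True True
```

Note that for n = 4 the factor 1/i^4 equals 1, so part B collapses to cos 4x + i sin 4x exactly as the trick in the last post predicts.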
|
{"url":"http://mathhelpforum.com/trigonometry/70409-de-moivres-theorem-print.html","timestamp":"2014-04-19T01:55:30Z","content_type":null,"content_length":"8740","record_id":"<urn:uuid:044791a7-ee0e-4f3e-9940-57a58490ccb6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dalworthington Gardens, TX Prealgebra Tutor
Find a Dalworthington Gardens, TX Prealgebra Tutor
...I take the responsibilities of a tutor seriously and respect the investment of time and money which is made by students and their parents. I am dependable and professional in my appearance and
conduct, yet I teach with an attitude of encouragement and appreciation for a student's efforts. In my...
19 Subjects: including prealgebra, reading, writing, English
I am a recently retired (2013) high school math teacher with 30 years of classroom experience. I have taught all maths from 7th grade through AP Calculus. I like to focus on a constructivist
style of teaching/learning which gets the student to a conceptual understanding of mathematical topics.
12 Subjects: including prealgebra, calculus, geometry, statistics
...I can help with query strategies and with syntax of most query statements. I am a certified tutor in all math topics covered by the ASVAB. In my career I have used and taught many mathematical
15 Subjects: including prealgebra, chemistry, calculus, ASVAB
...I ask that the student have their textbook and/or notes available for me to use as a resource. It is also important that we have plenty of paper and sharpened pencils ready. I will bring my
iPad with me to show how to access additional resources.
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...My goal is to meet your child where he/she is and help him/her learn in the way that they do best. Please feel free to ask me any questions you may have. I have taught and counseled many
ADD/ADHD children.
22 Subjects: including prealgebra, English, elementary math, grammar
|
{"url":"http://www.purplemath.com/Dalworthington_Gardens_TX_prealgebra_tutors.php","timestamp":"2014-04-16T13:48:04Z","content_type":null,"content_length":"24581","record_id":"<urn:uuid:a659e770-9a83-4397-899e-a1c02759f6c5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guess the Wsox pitchers. [Archive] - White Sox Interactive Forums
07-12-2007, 04:02 PM
32 pitchers have won a game for the sox 04-07
The Last names initial is given, along with amount of wins by that pitcher
Guess the pitchers names
this is 2004 thru the first half of 07
No fair looking it up.
post your guesses for all initials
not one at a time
A-2 A-2
B-48 B-1
C-41 C-9
D-2 D-5
G-2 G-52 G-41
H-9 H-2 H-1
J-2 J-6
L-9 L-1
M-10 M-7 M-1 M-2 M-2
R-1 R-1
T-6 T-7
V-6 V-16
|
{"url":"http://www.whitesoxinteractive.com/vbulletin/archive/index.php/t-90242.html","timestamp":"2014-04-17T15:40:55Z","content_type":null,"content_length":"12319","record_id":"<urn:uuid:de39ac46-d6ee-4242-9ac3-e77fd98e92b2>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In mathematics, the characteristic of a ring R with identity element is defined to be the smallest positive integer n such that n·1 = 0 (where n·1 is defined as 1 + 1 + ... + 1 with n summands). If
no such n exists, we say that the characteristic of R is 0. The characteristic of R is often denoted char(R).
The characteristic of the ring R may be equivalently defined as the unique natural number n such that nZ is the kernel of the unique ring homomorphism from Z to R which sends 1 to 1_R. And yet
another equivalent definition: the characteristic of R is the unique natural number n such that R contains a subring isomorphic to the factor ring Z/nZ.
Examples and notes
• If R and S are rings and there exists a ring homomorphism R -> S, then the characteristic of S divides the characteristic of R. This can sometimes be used to exclude the possibility of certain
ring homomorphisms.
• The only ring with characteristic 1 is the trivial ring which has only a single element 0=1.
• If the non-trivial ring R does not have any zero divisors, then its characteristic is either 0 or prime. In particular, this applies to all fields, to all integral domains, and to all division rings.
• For any ordered field (for example, the rationals or the reals) the characteristic is 0.
• The ring Z/nZ of integers modulo n has characteristic n.
• If R is a subring of S, then R and S have the same characteristic. For instance, if q(X) is a prime polynomial with coefficients in the field Z/pZ where p is prime, then the factor ring (Z/pZ)[X]
/(q(X)) is a field of characteristic p. Since the complex numbers contain the rationals, their characteristic is 0.
• Any ring of characteristic 0 is infinite. The finite field GF(p^n) has characteristic p.
• There exist infinite fields of prime characteristic. For example, the field of all rational functions over Z/pZ is one such. The algebraic closure of Z/pZ is another example.
• The size of any finite ring of prime characteristic p is a power of p. Since in that case it must contain Z/pZ it must also be a vector space over that field and from linear algebra we know that
the sizes of finite vector spaces over finite fields are a power of the size of the field.
• This also shows that the size of any finite vector space is a prime power. (It is a vector space over a finite field, which we have shown to be of size p^n. So its size is (p^n)^m = p^nm.)
• If a commutative ring R has prime characteristic p, then we have (x + y)^p = x^p + y^p for all elements x and y in R. The map f(x) = x^p defines an injective ring homomorphism R -> R. It is
called the Frobenius homomorphism.
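Two of the bullet points above can be checked computationally. The first helper (a hypothetical name of ours) finds the characteristic of Z/nZ by literally adding 1 to itself until it hits 0, and the second line shows why (x + y)^p = x^p + y^p in characteristic p: every middle binomial coefficient C(p, k), 0 < k < p, is divisible by p.

```python
from math import comb

def characteristic_of_Zn(n):
    """Smallest k > 0 with k·1 ≡ 0 in Z/nZ; this always comes out to n."""
    k, s = 1, 1 % n
    while s != 0:
        k += 1
        s = (s + 1) % n
    return k

print(characteristic_of_Zn(12))                # 12
p = 7
print([comb(p, k) % p for k in range(p + 1)])  # [1, 0, 0, 0, 0, 0, 0, 1]
```

Note that `characteristic_of_Zn(1)` returns 1, matching the bullet about the trivial ring being the only ring of characteristic 1.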
The term "characteristic" is also used in several other unrelated mathematical contexts: Characteristic is also sometimes used as a piece of jargon in discussions of universals in metaphysics, often
in the phrase 'distinguishing characteristics'.
|
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/Characteristic","timestamp":"2014-04-18T16:46:00Z","content_type":null,"content_length":"7856","record_id":"<urn:uuid:c5c1cb2e-f8b9-4c82-a307-c29b489ce927>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maxwell's demon can use quantum information to generate work
In the quantum information heat engine, the system is attached to the reservoir, and is measured and controlled by the memory consisting of a diatomic molecule with two entangled states. Credit:
Park, et al. ©2013 American Physical Society
(Phys.org) —In theory, Maxwell's demon can decrease the entropy of a system by opening and closing a door at appropriate times to separate hot and cold gas molecules. But as physicist Leó Szilárd
pointed out in 1929, entropy does not decrease in such a situation because the demon's measurement process requires information, which is a form of entropy. Szilárd's so-called information heat
engine, now called the Szilárd engine (SZE), demonstrates how work can be generated by using information.
In the SZE and other heat engines devised since then, the information that is used to generate work has always been classical information. Now for the first time, physicists have theoretically
demonstrated that a heat engine can generate work using purely quantum mechanical information.
The researchers, Jung Jun Park and Sang Wook Kim from Pusan National University in Busan, Korea; Kang-Hwan Kim from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, Korea;
and Takahiro Sagawa from Kyoto University in Kyoto, Japan, have published their paper on their work in a recent issue of Physical Review Letters.
"It is known that classical information can be used to extract work, which is important because this saves the second law of thermodynamics!" Sang Wook Kim told Phys.org. "The mathematical expression
of such work is given as the mutual information between the system and the measurement device multiplied by kT. Now for the first time we show that quantum information can also be used to extract
work, and its mathematical expression is discord."
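For scale, the classical bound Kim refers to, extractable work equal to kT times the acquired mutual information, is tiny at everyday temperatures. A back-of-envelope sketch for one bit of information (I = ln 2 nats), assuming room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

I = math.log(2)      # one bit of mutual information, in nats
W = k_B * T * I      # maximum extractable work for a Szilard engine, W = kT·I

print(f"W = {W:.3e} J")   # ≈ 2.87e-21 J
```

That is roughly twenty orders of magnitude below everyday energy scales, which is why information engines of either kind are laboratory curiosities rather than power sources.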
As the physicists explain, in many ways the new quantum heat engine is similar to the SZE. Both engines begin in thermodynamic equilibrium, and both have some kind of wall that separates the
reservoir of molecules into two spaces. While the wall in the SZE is "perfect," in the quantum heat engine it is semi-permeable.
The demon and the type of molecules used are also different in the quantum heat engine. Here, the demon consists of not one but two memories, which the scientists physically demonstrate using a
molecule containing two atoms. Each atom has two internal states that are physically equivalent to each atom's two spin states.
Before measurements are performed, the two atoms are prepared in a maximally entangled quantum state, and are also classically correlated. But as measurements are performed to separate the molecules,
the quantum entanglement between the two atoms decreases while the classical correlation does not. It is this quantum entanglement, expressed here as quantum discord, that is used to generate work.
The quantum discord measures the amount of quantum mechanical correlation compared to the total correlation (both quantum and classical).
Like the SZE, the quantum heat engine does not violate the second law of thermodynamics because entropy, in the form of information, never decreases in these systems. The physicists also note that
the quantum heat engine is not cyclic, so the memory does not return to its initial maximally entangled state. In other words, the quantum information is not free, and work must be done to recover
the initial state.
The researchers plan to further investigate the potential of generating work from quantum information in the future.
"First we are considering how to realize our engine in experiments," Sang Wook Kim said. "The example that we presented in our paper is just a gedanken experiment, implying that it is extremely hard
to directly realize it. Second, we are studying the quantum information heat engine far from equilibrium. This is related to constructing a quantum fluctuation theorem with feedback control."
More information: Jung Jun Park, et al. "Heat Engine Driven by Purely Quantum Information." PRL 111, 230402 (2013). DOI: 10.1103/PhysRevLett.111.230402
5 / 5 (2) Dec 18, 2013
This seems to put us on a path where we can compute the maximum possible energy contained in an entangled system. Entangled state batteries, anyone? I wonder if this will come out to a
potentially low or extremely high energy density. Gut feeling would indicate the latter.
5 / 5 (2) Dec 19, 2013
Who says scientists don't dabble in fiction, eh? Maxwell's 'finite being' thought experiment certainly caused a ripple... but of course way back then they weren't aware of 'entangled particles'.
@antialias_physorg: 'Entangled state batteries anyone?' One of several possibilities, I think, for such a system; and if memory serves, laser and photon cooling system experiments were being
constructed last decade.
Okay, so the second law at the end is safe, but look what researchers have found during the 'journey'. Amazing... great stuff.
not rated yet Dec 19, 2013
I cheer whenever a loose stone is found in the foundations of physics. Will this bring down the entire edifice? I certainly hope so.
"Extraordinary claims require extraordinary evidence" is a load of codswallop.
Just one example is sufficient, especially at the venerable foundations that are touted as being unassailable.
The problem is that science advances one death at a time but we do not have time on our side.
The beginning is near.
5 / 5 (1) Dec 19, 2013
Will this bring down the entire edifice?
Probably not - as it isn't a loose stone (thermodynamics isn't violated).
It does shine an intriguing new light on things.
Scientific theories that have withstood the test of time rarely 'fall down'. They get replaced by a fresh look at things (e.g. Newtonian physics isn't outright false. It wasn't 'brought down' by
Einstein. It was just shown to be incomplete).
The problem is that science advances one death at a time
That might have been the case 100 years ago. But that time has long since passed. Since the early 1900s, when people started to have access to good information dissemination methods (frequent
journals and then the internet) and the ability to recreate experiments much more easily, the "science by authority" approach has all but vanished.
|
{"url":"http://phys.org/news/2013-12-maxwell-demon-quantum.html","timestamp":"2014-04-20T01:49:26Z","content_type":null,"content_length":"78330","record_id":"<urn:uuid:1e4ae420-0da3-4501-ab7a-a946e29fba2d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The 200 Most Brilliant Advertisements Of 2012
Brilliant Ads of 2012 (1)
Some of our biggest stories of 2012 were brilliant advertisement compilations. Since the year is almost over, we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (24)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (25)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (26)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (27)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (28)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (29)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (30)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (31)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (32)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (33)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (34)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (35)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (36)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (37)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (38)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (39)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (40)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (41)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (42)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (43)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (44)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (45)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (46)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (47)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (48)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (49)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (50)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (51)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (52)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (53)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (54)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (55)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (56)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (57)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (58)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (59)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (60)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (61)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (62)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (63)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (64)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (65)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (66)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (67)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (68)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (69)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (70)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (71)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (72)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (73)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (74)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (75)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (76)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (77)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (78)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (79)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (80)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (81)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (82)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (83)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (84)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (85)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (86)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (87)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (88)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (89)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (90)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (91)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (92)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (93)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (94)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (95)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (96)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (97)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (98)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (99)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (100)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (101)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (102)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (103)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (104)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (105)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (106)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (107)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (108)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (109)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (110)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (111)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (112)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (113)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (114)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (115)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (116)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (117)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (118)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (119)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (120)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (121)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (122)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (123)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (124)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (125)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (126)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (127)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (128)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (129)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (130)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (131)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (132)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (133)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (134)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (135)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (136)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (137)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (138)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (139)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (140)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (141)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (142)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (143)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (144)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (145)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (146)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (147)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (148)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (149)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (150)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (151)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (152)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (153)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (154)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (155)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (156)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (157)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (158)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (159)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (160)
Some of our biggest stories of 2012 we're brilliant advertisement compilations. Since the year is almost over we've decided to pull together the best, most creative ads of 2012.
Brilliant Ads of 2012 (161)
Tags: advertisements
complex {base}
Basic functions which support complex arithmetic in R.
complex(length.out = 0, real = numeric(), imaginary = numeric(),
modulus = 1, argument = 0)
as.complex(x, ...)
length.out: numeric. Desired length of the output vector, inputs being recycled as needed.
real: numeric vector.
imaginary: numeric vector.
modulus: numeric vector.
argument: numeric vector.
x: an object, probably of mode complex.
z: an object of mode complex, or one of a class for which a method has been defined.
...: further arguments passed to or from other methods.
Complex vectors can be created with complex. The vector can be specified either by giving its length, its real and imaginary parts, or modulus and argument. (Giving just the length generates a vector
of complex zeroes.)
as.complex attempts to coerce its argument to be of complex type: like as.vector it strips attributes including names. All forms of NA and NaN are coerced to a complex NA, for which both the real and
imaginary parts are NA.
Note that is.complex and is.numeric are never both TRUE.
The functions Re, Im, Mod, Arg and Conj have their usual interpretation as returning the real part, imaginary part, modulus, argument and complex conjugate for complex values. The modulus and
argument are also called the polar coordinates. If z = x + i y with real x and y, for r = Mod(z) = √(x^2 + y^2), and φ = Arg(z), x = r*cos(φ) and y = r*sin(φ). They are all internal generic primitive
functions: methods can be defined for them individually or via the Complex group generic.
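The polar-coordinate identity above can be checked numerically; here is a short illustration in Python's cmath, where abs and phase play the roles of R's Mod and Arg:

```python
import cmath

# Mirror of R's Mod(z), Arg(z) and complex(modulus =, argument =), for illustration.
z = complex(3.0, 4.0)
r = abs(z)            # Mod(z) = sqrt(x^2 + y^2)
phi = cmath.phase(z)  # Arg(z), the angle in radians

# Reconstruct z from its polar coordinates: x = r*cos(phi), y = r*sin(phi).
z_back = cmath.rect(r, phi)
```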
In addition, the elementary trigonometric, logarithmic, exponential, square root and hyperbolic functions are implemented for complex values.
Internally, complex numbers are stored as a pair of double precision numbers, either or both of which can be NaN or plus or minus infinity.
S4 methods
as.complex is primitive and can have S4 methods set.
Re, Im, Mod, Arg and Conj constitute the S4 group generic Complex and so S4 methods can be set for them individually or via the group generic.
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
0i ^ (-3:3)
matrix(1i^ (-6:5), nrow = 4) #- all columns are the same
0 ^ 1i # a complex NaN
## create a complex normal vector
z <- complex(real = stats::rnorm(100), imaginary = stats::rnorm(100))
## or also (less efficiently):
z2 <- 1:2 + 1i*(8:9)
## The Arg(.) is an angle:
zz <- (rep(1:4, len = 9) + 1i*(9:1))/10
zz.shift <- complex(modulus = Mod(zz), argument = Arg(zz) + pi)
plot(zz, xlim = c(-1,1), ylim = c(-1,1), col = "red", asp = 1,
main = expression(paste("Rotation by "," ", pi == 180^o)))
abline(h = 0, v = 0, col = "blue", lty = 3)
points(zz.shift, col = "orange")
Documentation reproduced from R 3.0.2. License: GPL-2.
Learning long-term dependencies with gradient descent is difficult
Results 1 - 10 of 214
- Proceedings of the IEEE , 1998
Cited by 731 (58 self)
Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradientbased learning technique. Given an appropriate network architecture,
gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This
paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically
designed to deal with the variability of two dimensional (2-D) shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including
field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN’s), allows such multimodule systems to be trained globally using
gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and
the flexibility of graph transformer networks. A graph transformer network for reading a bank check is also described. It uses convolutional neural network character recognizers combined with global
training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
, 1995
Cited by 244 (55 self)
"Recurrent backprop" for learning to store information over extended time intervals takes too long. The main reason is insufficient, decaying error back flow. We briefly review Hochreiter's 1991
analysis of this problem. Then we overcome it by introducing a novel, efficient method called "Long Short Term Memory" (LSTM). LSTM can learn to bridge minimal time lags in excess of 1000 time steps
by enforcing constant error flow through internal states of special units. Multiplicative gate units learn to open and close access to constant error flow. LSTM's update
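The constant-error-flow idea the abstract describes can be sketched in a few lines. This is a minimal, hypothetical scalar LSTM step in the modern formulation (the forget gate was a later addition by Gers et al., not part of the 1995 report), with made-up weights chosen only to exhibit the behavior:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One scalar LSTM step: multiplicative gates open and close access
    to the internal state c, through which error can flow unchanged."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate

    c = f * c_prev + i * g   # internal state update
    h = o * math.tanh(c)     # exposed activation
    return h, c

# With the input gate shut and the forget gate open, the internal state
# is carried across many time steps essentially unchanged -- the
# "constant error carousel" that bridges long time lags.
W = {"i": (0.0, 0.0, -50.0), "f": (0.0, 0.0, 50.0),
     "o": (0.0, 0.0, 0.0), "g": (0.0, 0.0, 0.0)}
h, c = 0.0, 2.0
for t in range(1000):
    h, c = lstm_step(1.0, h, c, W)
```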
- IEEE TRANSACTIONS ON NEURAL NETWORKS , 1998
Cited by 117 (46 self)
A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor
structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured
information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this
paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem
of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical
formalism for r...
, 1999
Cited by 116 (22 self)
Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three dimensional structure, as well as its function.
Presently, the best predictors are based on machine learning approaches, in particular neural network architectures with a fixed, and relatively short, input window of amino acids, centered at the
prediction site. Although a fixed small window avoids overfitting problems, it does not permit to capture variable long-ranged information. Results: We introduce a family of novel architectures which
can learn to make predictions based on variable ranges of dependencies. These architectures extend recurrent neural networks, introducing non-causal bidirectional dynamics to capture both upstream
and downstream information. The prediction algorithm is completed by the use of mixtures of estimators that leverage evolutionary information, expressed in terms of multiple alignments, both at the
input and output levels. While our system currently achieves an overall performance close to 76% correct prediction---at least comparable to the best existing systems---the main emphasis here is on
the development of new algorithmic ideas. Availability: The executable program for predicting protein secondary structure is available from the authors free of charge. Contact: pfbaldi@ics.uci.edu,
gpollast@ics.uci.edu, brunak@cbs.dtu.dk, paolo@dsi.unifi.it.
- ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS , 1995
Cited by 108 (15 self)
We introduce a recurrent architecture having a modular structure and we formulate a training procedure based on the EM algorithm. The resulting model has similarities to hidden Markov models, but
supports recurrent networks processing style and allows to exploit the supervised learning paradigm while using maximum likelihood estimation.
- IEEE Transactions on Neural Networks , 1996
Cited by 98 (12 self)
We consider problems of sequence processing and propose a solution based on a discrete state model in order to represent past context. We introduce a recurrent connectionist architecture having a
modular structure that associates a subnetwork to each state. The model has a statistical interpretation we call Input/Output Hidden Markov Model (IOHMM). It can be trained by the EM or GEM
algorithms, considering state trajectories as missing data, which decouples temporal credit assignment and actual parameter estimation. The model presents similarities to hidden Markov models (HMMs),
but allows us to map input sequences to output sequences, using the same processing style as recurrent neural networks. IOHMMs are trained using a more discriminant learning paradigm than HMMs,
while potentially taking advantage of the EM algorithm. We demonstrate that IOHMMs are well suited for solving grammatical inference problems on a benchmark problem. Experimental results are
presented for the seven Tomita grammars, showing that these adaptive models can attain excellent generalization.
- IEEE Transactions on Systems, Man, and Cybernetics , 1996
Cited by 80 (20 self)
This paper discusses how a behavior-based robot can construct a “symbolic process” that accounts for its deliberative thinking processes using models of the environment. The paper focuses on two
essential problems; one is the symbol grounding problem and the other is how the internal symbolic processes can be situated with respect to the behavioral contexts. We investigate these problems by
applying a dynamical system’s approach to the robot navigation learning problem. Our formulation, based on a forward modeling scheme using recurrent neural learning, shows that the robot is capable
of learning grammatical structure hidden in the geometry of the workspace from the local sensory inputs through its navigational experiences. Furthermore, the robot is capable of generating diverse
action plans to reach an arbitrary goal using the acquired forward model which incorporates chaotic dynamics. The essential claim is that the internal symbolic process, being embedded in the
attractor, is grounded since it is self-organized solely through interaction with the physical world. It is also shown that structural stability arises in the interaction between the neural dynamics
and the environmental dynamics, which accounts for the situatedness of the internal symbolic process. The experimental results using a mobile robot, equipped with a local sensor consisting of a laser
range finder, verify our claims.
- COGNITION , 1999
Cited by 68 (6 self)
It is commonly assumed that innate linguistic constraints are necessary to learn a natural language, based on the apparent lack of explicit negative evidence provided to children and on Gold's proof
that, under assumptions of virtually arbitrary positive presentation, most interesting classes of languages are not learnable. However, Gold's results do not apply under the rather common assumption
that language presentation may be modeled as a stochastic process. Indeed, Elman (Elman, J.L., 1993. Learning and development in neural networks: the importance of starting small. Cognition 48,
71--99) demonstrated that a simple recurrent connectionist network could learn an artificial grammar with some of the complexities of English, including embedded clauses, based on performing a word
prediction task within a stochastic environment. However, the network was successful only when either embedded sentences were initially withheld and only later introduced gradually, or when the
network itself was given initially limited memory which only gradually improved. This finding has been taken as support for Newport's `less is more' proposal, that child language acquisition may be
aided rather than hindered by limited cognitive resources. The current article reports on connectionist simulations which indicate, to the contrary, that starting with simplified inputs or limited
memory is not necessary in training recurrent networks to learn pseudonatural languages; in fact, such restrictions hinder acquisition as the languages are made more English-like by the introduction
of semantic as well as syntactic constraints. We suggest that, under a statistical model of the language environment, Gold's theorem and the possible lack of explicit negative evidence do not
implicate i...
, 1996
Cited by 61 (15 self)
The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of
information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time
recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic
finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a
training set from the same network. We compare the generalization performances of these different models and the trained network and we introduce a heuristic that permits us to choose among the
consistent DFAs the model which best approximates the learned regular grammar.
Dickinson, TX Algebra 2 Tutor
Find a Dickinson, TX Algebra 2 Tutor
...I try as much as possible to work in the comfort of your own home at a schedule convenient to you. I operate my business with the highest ethical standards and have consented to a background check if you would like one. I have taught flute and clarinet lessons since the mid-'80s with many student...
35 Subjects: including algebra 2, chemistry, physics, calculus
...I was a member of the TRIO Upward Bound Program (college prep) where I was entrusted with the tutoring of 50+ kids. My unique methods have made my students scholars! Be it an upcoming test, that one class you just cannot seem to get a grasp on, or refreshing what you have learned, I am willing to assist.
6 Subjects: including algebra 2, reading, algebra 1, geometry
...I am on the President's Honor Roll and have a GPA of 3.73. I am graduating in May and pursuing a Masters degree. I have had three chemical engineering internships through which I have gained
experiences in the field.
22 Subjects: including algebra 2, chemistry, calculus, physics
I have lived in Texas all my life. I’ve moved from Houston to Austin and back again. I attended the University of Texas at Austin where I pursued a degree in sound engineering and recording
technology with an emphasis in Pre-Med and Pre-Law.
38 Subjects: including algebra 2, chemistry, reading, physics
...I completed my teacher training with the prestigious Teach for America program and I currently work with them as an Instructional Coach to train new teachers in the classroom. I have 3 years
of tutoring experience, both with a company and as an independent contractor. I graduated with honors from Texas A&M University with a degree in Biomedical Science and Psychology.
24 Subjects: including algebra 2, English, chemistry, algebra 1
Related Dickinson, TX Tutors
Dickinson, TX Accounting Tutors
Dickinson, TX ACT Tutors
Dickinson, TX Algebra Tutors
Dickinson, TX Algebra 2 Tutors
Dickinson, TX Calculus Tutors
Dickinson, TX Geometry Tutors
Dickinson, TX Math Tutors
Dickinson, TX Prealgebra Tutors
Dickinson, TX Precalculus Tutors
Dickinson, TX SAT Tutors
Dickinson, TX SAT Math Tutors
Dickinson, TX Science Tutors
Dickinson, TX Statistics Tutors
Dickinson, TX Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Alvin, TX algebra 2 Tutors
Bacliff algebra 2 Tutors
Beach City, TX algebra 2 Tutors
El Lago, TX algebra 2 Tutors
Hitchcock, TX algebra 2 Tutors
Kemah algebra 2 Tutors
La Marque algebra 2 Tutors
League City algebra 2 Tutors
Manvel, TX algebra 2 Tutors
Nassau Bay, TX algebra 2 Tutors
Santa Fe, TX algebra 2 Tutors
Seabrook, TX algebra 2 Tutors
Taylor Lake Village, TX algebra 2 Tutors
Texas City algebra 2 Tutors
Webster, TX algebra 2 Tutors
9 May 01:50 2011
Hosmer-Lemeshow 'goodness of fit'
viostorm <rob.schutt <at> gmail.com>
2011-05-08 23:50:39 GMT
I'm trying to do a Hosmer-Lemeshow 'goodness of fit' test on my logistic
regression model.
I found some code here:
The R code above is a little complicated for me, and I'm having trouble
with my answer:
Hosmer-Lemeshow: p=0.6163585
le Cessie and Houwelingen test (Design library): p=0.2843620
The above link indicated they should be approximately equal which in my case
they are not, any suggestions or is there a package function people would
recommend in R for use with a logistic regression model?
Thanks in advance,
-Rob Schutt
Robert Schutt, MD, MCS
Resident - Department of Internal Medicine
University of Virginia, Charlottesville, Virginia
# Compute the Hosmer-Lemeshow 'goodness-of-fit' test
(Continue reading)
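For reference, the Hosmer-Lemeshow statistic itself is straightforward to compute. Here is a minimal Python sketch (this is not the thread's R code, which is truncated above); the p-value would come from a chi-square distribution with n_groups - 2 degrees of freedom, which this sketch omits:

```python
def hosmer_lemeshow_stat(y, p, n_groups=10):
    """Hosmer-Lemeshow chi-square statistic: sort cases by predicted
    probability, split them into groups (deciles by default), and compare
    observed vs expected event counts within each group."""
    pairs = sorted(zip(p, y))
    n = len(pairs)
    stat = 0.0
    for g in range(n_groups):
        chunk = pairs[g * n // n_groups:(g + 1) * n // n_groups]
        if not chunk:
            continue
        observed = sum(outcome for _, outcome in chunk)
        expected = sum(prob for prob, _ in chunk)
        p_bar = expected / len(chunk)
        stat += (observed - expected) ** 2 / (expected * (1.0 - p_bar))
    return stat

# Tiny worked example with two groups: the low-probability group has no
# events (expected 0.4), the high-probability group has two (expected 1.6).
stat = hosmer_lemeshow_stat([0, 0, 1, 1], [0.2, 0.2, 0.8, 0.8], n_groups=2)
```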
Armiento, Rickard - Department of Physics, Royal Institute of Technology (KTH)
• Electrical response of molecular chains in density functional theory: Ultranonlocal response from a semilocal functional
• Electronic surface error in the Si interstitial formation energy (Ann E. Mattsson, Ryan R. Wixom, and Rickard Armiento)
• Thesis for the degree of Teknologie licentiat Subsystem Functionals in
• An exchange potential functional is constructed from semi-local quanti-
• How to Tell an Atom From an Electron Gas: A Semi-Local Index of Density Inhomogeneity
• Nonequivalence of the generalized gradient approximations PBE and PW91 (Ann E. Mattsson)
• Alternative separation of exchange and correlation in density-functional theory (R. Armiento)
• Master's Thesis Density Functional Theory for
• Royal Institute of Technology, AlbaNova University Center Rickard Armiento,
• Functional designed to include surface effects in self-consistent density functional theory R. Armiento1,
• Numerical integration of functions originating from quantum mechanics
• Subsystem functionals in density-functional theory: Investigating the exchange energy per particle (R. Armiento)
• Comment on ``Restoring the Density-Gradient Expansion for Exchange in Solids and Surfaces''
How to properly format fractions in InDesign | lynda.com tutorial
InDesign Secrets: How to properly format fractions
Published by David Blatner | Thursday, June 21st, 2012
In this week’s InDesign Secrets episode, David Blatner unravels the mysteries (and hassles) of making fractions in InDesign text.
And we’re talking real fractions—not those fake fractions made of regular-size numbers, both sitting on the baseline and separated by a common slash, like the one seen below left. David’s talking about real fractions, with a properly scaled, baseline-shifted numerator divided by a properly tilted fraction bar, like the one seen below right:
As David points out in the video tutorial, if you’re using an OpenType font, creating a properly scaled fraction is simply a matter of selecting the type and choosing OpenType > Fractions from the Control Panel menu. Of course, if your document is rife with fractions you’ll want a more efficient way to change all of your fractions at once, and for that, you’ll need to fearlessly tread into the world of GREP styles.
GREP styles search for a particular pattern in text—in this case “digit-slash-digit” (or, translated into GREP, that’s “\d+/\d+”)—and apply a specific style denoted by you (in this case OpenType > Fractions). You can see in the video how to use this handy GREP feature to change all your fractions at the same time. David also shows you how to use another GREP style-replacement maneuver to remove unwanted spaces between your whole number and your fraction after you’ve properly scaled your fractions. (These spaces appear in fractions that have a whole number attached: with a number like 18 3/4, the previously disproportionate “fake” fraction needed a space between the whole number, 18, and the fraction, 3/4.)
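The “digit-slash-digit” pattern can be tried outside InDesign, too; here is a quick check of the same expression in Python (the sample text is made up, and InDesign's GREP flavor and Python's re both read the pattern the same way here):

```python
import re

# \d+/\d+ means "one or more digits, a slash, one or more digits" --
# the same pattern the GREP style uses to spot fractions in running text.
fraction = re.compile(r"\d+/\d+")
sample = "Cut the shelf to 18 3/4 inches, then trim another 1/2 inch."
matches = fraction.findall(sample)  # ['3/4', '1/2']
```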
Of course, this GREP automation relies on the use of an OpenType font. For cases where you don’t have the luxury, or desire, to use an OpenType font, David shows you how to manually create a proper fraction in a non-OpenType font using Horizontal Scaling, Vertical Scaling, and offsets. By the time you’re through watching David’s less-than-nine-minute movie, you’ll never need to rely on an inelegant fake fraction again.
Meanwhile, for members of lynda.com, David’s partner in InDesign Secrecy, Anne-Marie Concepcion, has another member-exclusive video called Fixing unwanted hyperlinks in an imported Word file that
offers a handy way to deal with what can be a maddening InDesign situation.
David and Anne-Marie will be back in two weeks with more InDesign Secrets!
Interested in more?
• The entire InDesign Secrets bi-weekly series
• Courses by David Blatner and Anne-Marie Concepcion on lynda.com
• All lynda.com InDesign courses
Suggested courses to watch next:
• InDesign CS6 New Features
• InDesign CS6 Essential Training
• InDesign Styles in Depth
Tags: Anne-Marie Concepción, David Blatner, InDesign, InDesign Secrets
This is a good look at formatting fractions. As always, there are multiple ways to do things. I know David wanted to focus on InDesign’s native tools for handling fractions, but if you deal with a
lot of fractions and want to save time (and have the ability to set custom kerning amounts for specific numbers before or after the slash) be sure to check out my Proper Fraction Pro at
danrodney.com. It can handle OpenType fonts as well as non-OpenType fonts. It can automatically add strokes to numbers if horizontally scaling, and more. Sorry for the shameless plug, but for those
that need it, having options is always a good thing.
Great article. Really useful to me, thanks!
• Thanks for the feedback, Alex. Glad to hear you found this tutorial helpful.
I would like to ask you if it is possible to format mathematical fractions with a vertical line instead of a slash.
• Not sure what you mean, Zenia? A straight up and down vertical (pipe) line? I’m curious about your example.
□ A horizontal line with vertical alignment, stacking the numerator directly above the denominator instead of on an angle. Is it possible in InDesign?
Hello, This was a great video, thanks very much for sharing. I’d like to know if I can use this, but also leave some numbers in the ‘fake’ format. eg: 50/60 Hz, 0/12 VDC
I tried to say “If the ‘numerator’ is a zero, don’t format it”, by adding this rule AFTER the one you demonstrated above:
Apply Style: [None]
To Text: [0]/\d+
This didn’t work as I hoped. (I don’t mind creating multiple rules, if you can show me how to do one, there are only a few instances that I would have to look out for)
Thanks very much for any help you can offer.
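• One possible approach (a sketch only — the pattern below is checked with Python’s re module, whose syntax is close to InDesign’s GREP, so do verify it in InDesign’s own GREP dialog): instead of trying to un-style zero-numerator fractions afterwards, restrict the main rule so the numerator must start with a non-zero digit.

```python
import re

# Numerator must start 1-9, so "0/12" is never styled as a fraction,
# while "3/4" still is. (Note "50/60" has a non-zero numerator and
# would still match; it would need its own exclusion rule.)
FRACTION = re.compile(r"\b[1-9]\d*/\d+")

print(FRACTION.findall("50/60 Hz, 0/12 VDC, 3/4 in."))  # ['50/60', '3/4']
```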
Removing Regional Trends in Microgravity in Complex Environments: Testing on 3D Model and Field Investigations in the Eastern Dead Sea Coast (Jordan)
International Journal of Geophysics
Volume 2013 (2013), Article ID 341797, 13 pages
Review Article
^1Al-Balqa Applied University, Salt 19117, Jordan
^2Department of Geophysical, Atmospheric and Planetary Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel
^3Geophysical Institute of Israel, P.O. Box 182, Lod 71100, Israel
Received 29 September 2012; Accepted 18 January 2013
Academic Editor: Umberta Tinivella
Copyright © 2013 A. Al-Zoubi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Microgravity investigations are now recognized as a powerful tool for subsurface imaging and especially for the localization of underground karsts. However, numerous natural (geological), technical, and environmental factors interfere with microgravity survey processing and interpretation. One of the natural factors causing the most disturbance in complex geological environments is the influence of regional trends. In the Dead Sea coastal areas the influence of regional trends can exceed residual gravity effects by some tenfold. Many widely applied methods are unable to remove regional trends with sufficient accuracy. We tested a number of transformation methods (including computing gravity field derivatives, self-adjusting and adaptive filtering, Fourier series, wavelets, and other procedures) on a 3D model (complicated by randomly distributed noise), and field investigations were carried out in Ghor Al-Haditha (on the eastern side of the Dead Sea in Jordan). We show that the most
effective methods for regional trend removal (at least for the theoretical and field cases here) are the bilinear saddle and local polynomial regressions. Application of these methods made it
possible to detect the anomalous gravity effect from buried targets in the theoretical model and to extract the local gravity anomaly at the Ghor Al-Haditha site. The local anomaly was utilized for
3D gravity modeling to construct a physical-geological model (PGM).
1. Introduction
The development of modern gravimetric and variometric (gradiometric) equipment, which makes it possible to record small, previously inaccessible anomalies, has been accompanied by enhanced observational methodology and new methods of gravity data processing and interpretation. These advances have triggered a rapid rise in the number of microgravity applications in environmental and economic-minerals geophysics.
Microgravity is now recognized as an effective tool for the analysis of a whole range of geological subsurface inhomogeneities, the monitoring of volcanic activity, and prospecting for useful
minerals (e.g., [3–36]).
At the same time, different kinds of noise of various origins complicate the analysis of microgravity data. Numerous procedures and methodologies have been developed to remove (eliminate) these noise components. In this paper we analyze the problem of regional trend removal in complex geological-geophysical environments. This problem is especially acute in the delineation of weak anomalies from buried karst terranes in the Dead Sea Basin, where regional horizontal gravity gradients may exceed 10 mGal/km.
2. A Brief Review of Microgravity Investigations in Subsurface Studies
Colley [4] was apparently the first to apply the gravity method to cave delineation. Although gravity observations at that time were not sufficiently precise, he presented some examples of typical negative gravity anomalies over large caverns in Iraq.
Fajklewicz [6] examined the vertical gravity gradient (W_zz) over underground galleries. He was probably the first to note a significant difference between the physically measured gradient and the value obtained by transformation. Interesting examples of microgravity anomalies from archaeological targets are presented in Blížkovský [7]. Butler [9] showed that microgravity measurements could be used to detect
and delineate the main components of complex underground cavity systems. He computed the second and third derivatives of the gravity potential and polynomial surface to develop the initial
physical-geological models (PGMs). Butler [10] surveyed gravity and gravity-gradient determination concepts and their corresponding interpretative microgravity procedures.
A nonconventional attempt to use microgravity observations for weight determination of stockpiled ore was reported by Sjostrom and Butler [37] who estimated the mass of many chromite and other ore
bodies noninvasively.
Crawford [14] employed microgravity to detect sinkhole collapses under highways in the USA. Elawadi et al. [38] showed that the application of well-known neural network procedures could increase the
assessment effectiveness of the depth and radius of subsurface cavities revealed by microgravity data. Rybakov et al.’s [17] work triggered the use of microgravity to find sinkholes in the complex
geological conditions of the Dead Sea coastal plain.
Types of noise (disturbances) arising in microgravity investigations were studied in detail in Debeglia and Dupont [39]. Styles et al. [20] discussed the key problems related to the removal of noise
components in microgravity in complex environments.
The need for additional computation of the surrounding terrain relief by 3D gravity modeling in ore deposits occurring in the very complex topography of the Greater Caucasus was discussed in
Eppelbaum and Khesin [19].
Abad et al. [25] carried out an assessment of a buried rainwater cistern in a Carthusian monastery (Valencia, Spain) by 2D microgravity modeling. Microgravity monitoring is one of the most widely
used geophysical techniques for predicting volcanic activity; for instance, Carbone and Greco [40] described in detail their microgravity monitoring of Mt. Etna.
Advanced methods in magnetic prospecting can be adapted to quantitative analysis of microgravity anomalies in complex environments [27]. Eppelbaum et al. [3] described various transformation methods
to identify buried sinkholes including 3D gravity modeling to develop a PGM of Nahal Never South in the western Dead Sea coast.
Deroussi et al. [29] applied precise gravity investigations to delineate cavities and large fractured zones when planning road construction across a lava flow after a recent volcanic eruption on Réunion Island. Microgravity combined with absolute gravity measurements has also been used to study water storage variations in a karst aquifer on the Larzac Plateau (France) [41]. Castiello et al. [42] reported a microgravity study of an ancient underground cavity in the complex urban environment of Naples.
Types of noise associated with microgravity studies of shallow karst cavities in areas of developed infrastructure are presented in detail in Leucci and de Giorgi [32]. Porzucek [43] discusses the
advantages and disadvantages of using the Euler deconvolution in microgravity studies. A new method for the simultaneous, nonlinear inversion of gravity changes and surface deformation using bodies
with a free geometry was proposed by Camacho et al. [33].
The importance of gravity field observations at different levels, as well as the precise calculation of topographic effects in intermediate and distant zones, was analyzed in Eppelbaum [34]. Dolgal and Sharkhimullin [44] suggested using a “localization function” to enhance the quality of PGMs and reduce the ambiguity of the results in high-precision gravity studies.
Kaufmann et al. [45] successfully employed microgravity to identify subsurface voids in the Unicorn cave in the Harz Mountains (Germany). Hajian et al. [35] applied locally linear neurofuzzy
microgravity modeling to the three most common shapes of subsurface cavities: sphere, vertical cylinder, and horizontal cylinder. The authors showed that their method can estimate cavity parameters
more accurately than least-squares minimization or multilayer perceptron methods.
Panisova et al. [46] fruitfully applied a new modification of close-range photogrammetry to the calculation of building corrections in a microgravity survey for karst delineation in the area of a historical edifice (Slovakia).
3. Different Kinds of Noise in Microgravity Surveys
A microgravity survey is the geophysical method most affected by corrections and reductions caused by different kinds of noise (disturbances). A chart showing the different types of noise typical to
microgravity studies is presented in Figure 1.
These types of noise are described in more detail below.
3.1. Artificial (Man-Made) Noise
The industrial component of noise mainly comes from surface and underground constructions, garbage dumps, transportation and communications lines, and so forth. The instrumental component is associated with the technical properties of gravimeters (e.g., zero drift) and gradiometers. Human error, obviously, can accompany geophysical observations at any time. Finally, undocumented (or poorly documented) results of previous surveys can distort preliminary PGM development.
3.2. Natural Disturbances
Nonstationary noise includes, for instance, the well-known tidal effects. Meteorological conditions (rain, lightning, snow, hurricanes, etc.) can also affect gravimeter readings. Corrections for the atmosphere deserve special attention in microgravity investigations, since the attraction of the air layer differs at various levels above and below mean sea level. Soil-vegetation factors associated with certain soil types (e.g., swampy soil or loose ground in deserts) and dense vegetation, which sometimes hampers movement along the profile, also need to be taken into account.
3.3. Geological-Geophysical and Environmental Factors
These constitute the most important physical-geological disturbances. The application of any geophysical method depends primarily on the existence of a physical-property contrast between the objects under study and the surrounding medium. The physical limitation on applying the method is thus the measurable density contrast between the anomalous targets and the host media.
3.4. Spatial Coordinates and Normal Gravity Field Determination
Spatial coordinates and normal gravity field determination are also crucial to precise gravity studies and any inaccuracies here may lead to significant errors in subsequent analyses.
3.5. Uneven Terrain Relief
Uneven terrain relief can hamper the movement of equipment and restrict gravity data acquisition. Physically, the gravity field is affected by the form and density of the topographic features
composing the relief, as well as variations in the distance from the point of measurement to the hidden target [34]. Calculations for the surrounding terrain relief (sometimes for radii up to 200km)
are also of great importance [47, 48].
3.6. Earthquake Damage
Earthquake damage zones are widely spread over the Eastern Mediterranean, especially in the regions near the Dead Sea Transform (DST) Zone [49]. These zones may significantly complicate microgravity
data analysis.
3.7. The Variety of Anomalous Sources
The variety of anomalous sources comprises two factors: the variable surrounding medium and the variety of anomalous targets. Both factors are crucial and greatly complicate the interpretation of gravity data.
3.8. Variable Subsurface
Variable subsurface can make it difficult to determine the correct densities of bodies occurring close to the earth’s surface.
3.9. Local and Regional Trends
Local and regional trends (linear, parabolic, or other types) often mask the target gravity effects considerably (e.g., [2, 47, 48, 50]). Sometimes regional trend effects may exceed the desired local anomalies by some tenfold.
Let us consider the last disturbing factor in detail. The correct removal (elimination) of regional trends is not a trivial task (e.g., [47]). Below we present two examples showing disturbing trend
effects in detailed gravity investigations. Figure 2 shows two cases of nonhorizontal gravity observations with the presence of an anomalous body. The distorting effect of a nonhorizontal observation
line occurs when the target object differs from the host medium by a contrast density and produces an anomalous vertical gradient. Comparing the anomalies from the local body observed on the inclined
and horizontal relief indicates that the gravity effects in these situations are different (Figure 2). Despite the fact that all the necessary corrections were applied to the observations on the
inclined relief, the computed Bouguer anomaly is characterized by small negative values (minimum) in the downward direction of the relief, whereas the anomaly on the horizontal profile has no
negative values (this kind of noise is described in Section 3.5). Thus, applying all conventional corrections does not eliminate this trend because the observation point for the anomalous object was
different [2]. Hence a special methodology is required for gravimetric quantitative anomaly interpretation in conditions of inclined relief [34].
Sometimes simply computing the first and second derivatives of the gravity field, W_zz and W_zzz (the second and third derivatives of the gravity potential, resp.), is enough to locate local bodies against a disturbing field background. One such example is presented in Figure 3, where the Bouguer gravity is practically impossible to interpret, whereas the calculated W_zz was informative regarding the geometry of two closely occurring sinkholes. Finally, the behavior of the W_zzz graph clearly reflects the location of the vertical boundaries of the two closely occurring objects, with a small negative interval (surrounding medium) between them.
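The trend-suppressing effect of derivative computation can be reproduced numerically. A minimal NumPy sketch (not the authors’ code; it uses horizontal derivatives along a synthetic profile as a stand-in for the vertical derivatives discussed here, and all numbers are hypothetical):

```python
import numpy as np

# Synthetic Bouguer profile: two closely spaced sinkhole-like lows
# riding on a strong 10 mGal/km linear regional trend.
x = np.linspace(0, 500, 201)              # station coordinate, m
regional = 0.01 * x                       # regional trend, mGal
local = (-0.3 * np.exp(-(((x - 220) / 25) ** 2))
         - 0.3 * np.exp(-(((x - 290) / 25) ** 2)))
g = regional + local

# Successive numerical differentiation: the linear regional component
# becomes a constant in the first derivative and vanishes in the second.
dg = np.gradient(g, x)
d2g = np.gradient(dg, x)

# Far from the bodies the second derivative is ~0; over each body it
# shows the alternating-sign pattern that outlines the edges.
print(round(dg[0], 4), round(d2g[0], 6))
```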
The area under study—Ghor Al-Haditha—is situated in the eastern coastal plain of the Dead Sea (Jordan), under a very complex regional gravity pattern (Figure 4). The satellite gravity data shown in this figure were obtained from the World Gravity DB as retracked from Geosat and ERS-1 altimetry [51]. These observations were made on regular global 1-minute grids, which differentiates them from earlier scattered surface and airborne gravity measurements. The complex gravity field distribution in the vicinity of the area under study is caused mainly by the strong negative effect of the low-density sedimentary associations and salt layers accumulated in the DST, as well as by several other factors.
4. Computation of the 3D Gravity Effect from Models of Sinkholes and the Dead Sea Transform
To test methods of regional trend elimination, two theoretical PGMs—a sinkhole PGM and a DST PGM—were developed. The gravity effects computed from these PGMs were also artificially complicated by randomly distributed noise.
4.1. Computation of the 3D Gravity Effect from the Sinkhole PGM
To calculate the 3D gravity field, 12 parallel profiles spaced 5 m apart were used (Figure 5). A two-layer (kg/m^3 and kg/m^3, resp.) PGM with two types of ellipsoidal sinkholes was constructed (Figure 6). The center of the first, large sinkhole was located at a depth of −60 m below the earth’s surface in the second layer, with a contrast density of −900 kg/m^3. The center of the second, small sinkhole was located at a depth of −20 m below the earth’s surface in the first layer, with a contrast density of −2000 kg/m^3. Profile 6 was selected as the central one; the left and right ends of sinkhole 1 were defined as −30 and +30 m, and those of sinkhole 2 as −12 and +12 m, respectively. For the 3D gravity field modeling of this and the following examples, the GSFC software [19] was mainly employed. The number of computation points along the sinkhole PGM was chosen to be 200, that is, one every 2.5 m.
The compiled gravity map of the 12 profiles for the sinkhole PGM is shown in Figure 7. As can be seen from this map, the anomaly from sinkhole 2 is narrower than that from sinkhole 1 but has a comparatively high amplitude.
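For orientation, the gravity effect of such a compact body is often approximated by the point-mass (buried sphere) formula g_z = G·ΔM·h/(x² + h²)^(3/2). A sketch (this is not the GSFC program used in the paper, and the geometry only loosely echoes the model above; the radii in particular are illustrative):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly(x, depth, radius, drho):
    """Vertical gravity effect (mGal) of a buried sphere along a profile.

    Point-mass approximation: g_z = G * dM * depth / (x^2 + depth^2)^1.5,
    where dM is the excess mass given by the density contrast drho (kg/m^3).
    """
    dM = 4.0 / 3.0 * np.pi * radius ** 3 * drho
    gz = G * dM * depth / (x ** 2 + depth ** 2) ** 1.5
    return gz * 1e5  # convert m/s^2 to mGal

x = np.linspace(-250.0, 250.0, 201)
# Rough stand-ins for the two model sinkholes:
g1 = sphere_anomaly(x, depth=60.0, radius=30.0, drho=-900.0)
g2 = sphere_anomaly(x - 120.0, depth=20.0, radius=12.0, drho=-2000.0)

print(round(g1.min(), 3), round(g2.min(), 3))
```

The shallower body with the stronger density contrast gives the narrower but higher-amplitude low, matching the qualitative behavior of sinkhole 2 in the map.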
4.2. Computation of the 3D Gravity Effect from the DST
The simplified PGM of the DST for its deepest part (Figure 8) was constructed from data presented in Ginzburg and Ben-Avraham [52], Weber et al. [53], and the authors’ computations. The location of the 500 m sinkhole profile in the upper right section of the model is shown. The PGM of the DST was taken to be the same for all 12 profiles. The computed gravity effect from the DST was added to the gravity field of the sinkhole PGM (Figure 9). As can be seen from this figure, the anomaly from sinkhole 2 can be visually detected, but the anomaly from sinkhole 1 is practically undetectable against the regional trend produced by the DST.
4.3. Noise Added by Random Number Generation
Given that the geological medium is usually more complex than the models presented in Figures 6 and 8, we used a random number generator to introduce a noise factor into the calculations. Algorithms developed by Bichara et al. [8] and Wichura [54] were applied. The parameters of this randomly distributed noise—the mean values and standard deviations along the 12 profiles—are listed in Table 1. In other words, randomly distributed, nonrecurrent noise was added at the 200 computation points of each of the 12 profiles.
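The noise step can be sketched with NumPy’s random generator; note that the per-profile means and standard deviations below are placeholders, not the values of Table 1:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_profiles, n_points = 12, 200

# Placeholder per-profile noise parameters (Table 1 stand-ins), in mGal.
means = rng.uniform(-0.02, 0.02, size=n_profiles)
stds = rng.uniform(0.01, 0.05, size=n_profiles)

# Nonrecurrent Gaussian noise: an independent draw at each of the
# 200 computation points on each of the 12 profiles.
noise = rng.normal(loc=means[:, None], scale=stds[:, None],
                   size=(n_profiles, n_points))
print(noise.shape)
```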
Figure 10 shows a gravity map compiled on the basis of randomly distributed noise (from Table 1). The combined gravity effects from (1) the sinkhole PGM, (2) the DST PGM, and (3) randomly distributed
noise were used to compute the integrated gravity map that sums the effects of these three factors (Figure 11). It should be noted that in the map (Figure 11) there are no visual signatures of the
negative anomalies from sinkholes 1 and 2.
4.4. Results of the Different Algorithms to Eliminate Regional Trends
To remove the regional trends, different algorithms and methods were applied: the first and second derivatives, self-adjusting and adaptive filtering, Fourier series, wavelet decomposition, principal component analysis, inverse probability, and other methods—altogether more than 30 different procedures.
Examples of applications of (1) the entropy parameter using a moving window with self-adapting size, (2) gradient sounding, and (3) power estimation by the Morlet transformation are presented in
Figures 12(a), 12(b), and 12(c), respectively. Computing the entropy with the moving window (Figure 12(a)) revealed a clear ring anomaly from sinkhole 2; the anomaly from sinkhole 1 was difficult to
locate. At the same time the boundary effect at the map edges (Figure 12(a)) complicated image reading. The results of gradient sounding (Figure 12(b)) suggested the presence of an anomaly from
sinkhole 2. A power estimation based on the Morlet transformation (Figure 12(c)) very clearly indicates the location of sinkhole 2. However, a superposition of the computed gravity anomalies and noise effects gives a false weak anomaly of sinkhole 1 (located at 105–108 m).
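A Morlet power estimate of the kind shown in Figure 12(c) can be sketched by hand with NumPy (this is not the authors’ implementation; the wavelet scale and the profile are hypothetical):

```python
import numpy as np

def morlet_power(signal, dx, scale, w0=6.0):
    """Wavelet power of a 1-D profile at a single scale, using a complex
    Morlet wavelet assembled directly with NumPy."""
    t = np.arange(-4 * scale, 4 * scale + dx, dx)
    wavelet = (np.pi ** -0.25 / np.sqrt(scale)) * \
        np.exp(1j * w0 * t / scale - 0.5 * (t / scale) ** 2)
    coeffs = np.convolve(signal, np.conj(wavelet)[::-1], mode="same") * dx
    return np.abs(coeffs) ** 2

# Hypothetical profile: 10 mGal/km linear trend plus one narrow local low.
x = np.linspace(0, 500, 201)
g = 0.01 * x - 0.3 * np.exp(-(((x - 340) / 15) ** 2))

scale = 30.0  # wavelet scale in metres
power = morlet_power(g - g.mean(), dx=2.5, scale=scale)

# Ignore the cone of influence at the profile ends, where the wavelet
# window overlaps the edge and the strong regional values leak in.
half = int(4 * scale / 2.5)
interior = slice(half, len(x) - half)
print(x[interior][np.argmax(power[interior])])
```

Because the near-zero-mean wavelet is almost blind to the linear trend, the power maximum inside the cone-free interior falls on the local anomaly.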
Regression analysis is now considered one of the most powerful methods for removing trends of different kinds (e.g., [55–57]). Two regression methods were selected. Figure 13 shows the residual
gravity map after subtracting a bilinear saddle regression. The negative gravity anomaly from sinkhole 1 in the area of 160m (see Figures 6 and 9) is clearly detected, whereas the negative anomaly
from sinkhole 2 in the area of 340m is small and could not be reliably detected.
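A bilinear saddle regression is a least-squares fit of the surface z = a + b·x + c·y + d·x·y, which is then subtracted from the observed field. A minimal NumPy sketch on a synthetic 12 × 200 grid (all coefficients and the buried-target anomaly are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Station grid: 12 profiles of 200 points (coordinates in metres).
x, y = np.meshgrid(np.linspace(0, 500, 200), np.linspace(0, 55, 12))

# Synthetic field: bilinear-saddle regional + one local low + noise.
regional = 1.5 - 0.004 * x + 0.01 * y + 2e-5 * x * y
local = -0.2 * np.exp(-(((x - 160) / 30) ** 2 + ((y - 25) / 15) ** 2))
g = regional + local + rng.normal(0.0, 0.01, x.shape)

# Least-squares fit of z = a + b*x + c*y + d*x*y, then its removal.
A = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(), (x * y).ravel()])
coeffs, *_ = np.linalg.lstsq(A, g.ravel(), rcond=None)
residual = g - (A @ coeffs).reshape(g.shape)

i, j = np.unravel_index(residual.argmin(), residual.shape)
print(round(residual.min(), 3), x[i, j])
```

The residual map recovers the local low near x = 160 m once the saddle-shaped regional is gone.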
The gravity map after subtracting a local polynomial regression is presented in Figure 14. Here the negative anomaly from sinkhole 1 was weak and was difficult to detect, but the anomaly from
sinkhole 2 was unmistakable. These findings suggest that there are advantages to using a combination of methods.
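Along a single profile, local polynomial regression can be imitated with a wide-window Savitzky-Golay smoother, which is itself a moving least-squares polynomial fit. A SciPy sketch (not the procedure actually used in the paper; profile and anomaly are hypothetical):

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 500, 201)
trend = 2.0 - 0.01 * x + 1.5e-5 * x ** 2          # smooth regional, mGal
local = -0.25 * np.exp(-(((x - 340) / 15) ** 2))  # narrow sinkhole low
g = trend + local

# Local polynomial regression: at each station, a 2nd-order polynomial
# is fitted over a wide moving window. The fit tracks the slowly varying
# regional field but cannot follow the narrow local anomaly.
estimated_trend = savgol_filter(g, window_length=101, polyorder=2)
residual = g - estimated_trend

print(round(residual.min(), 3), x[np.argmin(residual)])
```

The window width controls the trade-off: a wider window removes less of the local anomaly but tracks the regional field less closely.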
5. Removing Regional Gravity Trend in the Area of Ghor Al-Haditha, on the Eastern Coastal Plain of the Dead Sea (Jordan)
The Ghor Al-Haditha area is located south-east of the northern Dead Sea basin (see Figure 4). Alluvial fan deposits from Wadi Ibn Hammad cover the southern part of this area. Borehole sections
indicate that the geological material of the shallow subsurface consists of laminated sand interbedded with layers of calcareous silts and possibly clay or marl. The sinkholes at the eastern coast of
the Dead Sea can be dated to the mid-1980s [58].
The observed gravity map (Figure 15) shows the strong influence of the negative gravity effect due to the DST (and possibly other geological factors). Computation of the first and second derivatives, self-adjusting filtering, gradient directional filtering, Fourier series, principal component analysis, and other methods were all less successful than the bilinear saddle and local polynomial regressions.
Figure 16 displays the results of gradient sounding. After regional trend removal, two local anomalies were found: one complex anomaly in the center of the area and the other near the western border. Clearly, however, this type of analysis is only valid for qualitative target delineation.
A visual comparison of the residual maps (Figures 17 and 18, resp.) shows great similarity between the two regression methods. A negative anomaly in the center of the map, with an amplitude of 0.6–0.7 mGal, is clearly visible. An important advantage of the residual maps is that they can be used for both qualitative and quantitative analysis.
The gravity profiles constructed along the same line (A–B in Figure 17 and A′–B′ in Figure 18) demonstrate (Figure 19) that there are only small differences between the two methods, mainly in the amplitude of the anomaly from the object with a negative density contrast.
3D modeling indicates that such a gravity anomaly may have been produced by a sinkhole (similar to model 2 in Figure 6, but roughly twice as large) with its upper edge at a depth of 4 m below the earth’s surface (Figure 20). The location and size of this sinkhole are consistent with the available geological data [59]. The disparity between the observed and computed fields in the right part of the profile may have been caused by the presence of an additional small underground cavity with an irregular shape.
6. Conclusion
The different kinds of noise affecting microgravity investigations amply illustrate the need to account carefully for each of these disturbing factors. In particular, the influence of regional trends often masks the target local microgravity anomalies. A 3D theoretical PGM of sinkholes, combined with the gravity effect from the DST (producing a strong regional trend) and randomly distributed noise (introducing some geological-medium complexity), was constructed. Comparison of different methodologies for removing regional trends revealed that the most effective algorithms are the bilinear saddle and local polynomial regressions. The use of these methods to analyze gravity data observed in the complex geological environment of the Ghor Al-Haditha site (eastern coast of the Dead Sea, Jordan) successfully removed the regional gradient and localized a negative anomaly possibly produced by a subsurface sinkhole. The 3D gravity field modeling led to identification of the parameters of this PGM.
The authors would like to thank the anonymous reviewers who thoroughly reviewed this paper; their critical comments and valuable suggestions were very helpful in preparing it. This publication was made possible through support provided by the U.S. Agency for International Development (USAID) and the MERC Program under the terms of Award No. M27-050.
1. L. V. Eppelbaum, “Archaeological geophysics in Israel: past, present and future,” Advances in Geosciences, vol. 24, pp. 45–68, 2010.
2. B. E. Khesin, V. V. Alexeyev, and L. V. Eppelbaum, Interpretation of Geophysical Fields in Complicated Environments, Advanced Approaches in Geophysics, Kluwer Academic, Dordrecht, The Netherlands, 1996.
3. L. V. Eppelbaum, M. G. Ezersky, A. S. Al-Zoubi, V. I. Goldshmidt, and A. Legchenko, “Study of the factors affecting the karst volume assessment in the Dead Sea sinkhole problem using microgravity field analysis and 3-D modeling,” Advances in Geosciences, vol. 19, pp. 97–115, 2008.
4. G. C. Colley, “The detection of caves by gravity measurements,” Geophysical Prospecting, vol. 11, no. 1, pp. 1–9, 1963.
5. A. A. Arzi, “Microgravimetry for engineering applications,” Geophysical Prospecting, vol. 23, no. 3, pp. 408–425, 1975.
6. Z. J. Fajklewicz, “Gravity vertical gradient measurements for the detection of small geologic and anthropogenic forms,” Geophysics, vol. 41, no. 5, pp. 1016–1030, 1976.
7. M. Blížkovský, “Processing and applications in microgravity surveys,” Geophysical Prospecting, vol. 27, no. 4, pp. 848–861, 1979.
8. M. Bichara, J. C. Erling, and J. Lakshmanan, “Technique de mesure et d’interpretation minimisant les erreurs de mesure en microgravimetrie” [“A measurement and interpretation technique minimizing measurement errors in microgravimetry”], Geophysical Prospecting, vol. 29, pp. 782–789, 1981.
9. D. K. Butler, “Interval gravity-gradient determination concepts,” Geophysics, vol. 49, no. 6, pp. 828–832, 1984.
10. D. K. Butler, “Microgravimetric and gravity-gradient techniques for detection of subsurface cavities,” Geophysics, vol. 49, no. 7, pp. 1084–1096, 1984.
11. B. E. Khesin, V. V. Alexeyev, and L. V. Eppelbaum, “Investigation of geophysical fields in pyrite deposits under mountainous conditions,” Journal of Applied Geophysics, vol. 30, no. 3, pp. 187–204, 1993.
12. D. Patterson, J. C. Davey, A. H. Cooper, and J. K. Ferris, “The investigation of dissolution subsidence incorporating microgravity geophysics at Ripon, Yorkshire,” Quarterly Journal of Engineering Geology, vol. 28, no. 1, pp. 83–94, 1995.
13. D. E. Yule, M. K. Sharp, and D. K. Butler, “Microgravity investigations of foundation conditions,” Geophysics, vol. 63, no. 1, pp. 95–103, 1998.
14. N. C. Crawford, “Microgravity investigations of sinkhole collapses under highway,” in Proceedings of the 1st SAGEEP Conference, vol. 1, pp. 1–13, St. Louis, Mo, USA, 2000.
15. M. Beres, M. Luetscher, and R. Olivier, “Integration of ground-penetrating radar and microgravimetric methods to map shallow caves,” Journal of Applied Geophysics, vol. 46, no. 4, pp. 249–262, 2001.
16. D. K. Butler, “Potential fields methods for location of unexploded ordnance,” Leading Edge, vol. 20, no. 8, pp. 890–895, 2001.
17. M. Rybakov, V. Goldshmidt, L. Fleischer, and Y. Rotstein, “Cave detection and 4-D monitoring: a microgravity case history near the Dead Sea,” Leading Edge, vol. 20, no. 8, pp. 896–900, 2001.
18. T. Hunt, M. Sugihara, T. Sato, and T. Takemura, “Measurement and use of the vertical gravity gradient in correcting repeat microgravity measurements for the effects of ground subsidence in geothermal systems,” Geothermics, vol. 31, no. 5, pp. 525–543, 2002.
19. L. V. Eppelbaum and B. E. Khesin, “Advanced 3D modelling of gravity field unmasks reserves of a pyrite-polymetallic deposit: a case study from the Greater Caucasus,” First Break, vol. 22, no. 11, pp. 53–56, 2004.
20. P. Styles, S. Toon, E. Thomas, and M. Skittrall, “Microgravity as a tool for the detection, characterization and prediction of geohazard posed by abandoned mining cavities,” First Break, vol. 24, no. 5, pp. 51–60, 2006.
21. D. K. Butler, Ed., Near-Surface Geophysics, no. 13 of Investigations in Geophysics, Society of Exploration Geophysicists, 2005.
22. J. S. da Silva and F. J. F. Ferreira, “Gravimetry applied to water resources and risk management in karst areas: a case study in Parana state, Brazil,” in Proceedings of the Transactions of the 23rd FIG Congress, p. 14, Munich, Germany, 2006.
23. M. W. Branston and P. Styles, “Site characterization and assessment using the microgravity technique: a case history,” Near Surface Geophysics, vol. 4, no. 6, pp. 377–385, 2006.
24. N. Debeglia, A. Bitri, and P. Thierry, “Karst investigations using microgravity and MASW; application to Orléans, France,” Near Surface Geophysics, vol. 4, no. 4, pp. 215–225, 2006.
25. I. R. Abad, F. G. García, I. R. Abad et al., “Non-destructive assessment of a buried rainwater cistern at the Carthusian Monastery “Vall de Crist” (Spain, 14th century) derived by microgravimetric 2D modelling,” Journal of Cultural Heritage, vol. 8, no. 2, pp. 197–201, 2007.
26. C. C. Bradley, M. Y. Ali, I. Shawky, A. Levannier, and M. A. Dawoud, “Microgravity investigation of an aquifer storage and recovery site in Abu Dhabi,” First Break, vol. 25, no. 11, pp. 63–69, 2007.
27. L. V. Eppelbaum, “Revealing of subterranean karst using modern analysis of potential and quasi-potential fields,” in Proceedings of the SAGEEP Conference, vol. 20, pp. 797–810, Denver, Colo, USA.
28. T. Mochales, A. M. Casas, E. L. Pueyo et al., “Detection of underground cavities by combining gravity, magnetic and ground penetrating radar surveys: a case study from the Zaragoza area, NE Spain,” Environmental Geology, vol. 53, no. 5, pp. 1067–1077, 2008.
29. S. Deroussi, M. Diament, J. B. Feret, T. Nebut, and T. Staudacher, “Localization of cavities in a thick lava flow by microgravimetry,” Journal of Volcanology and Geothermal Research, vol. 184, no. 1-2, pp. 193–198, 2009.
30. M. Ezersky, A. Legchenko, C. Camerlynck et al., “The Dead Sea sinkhole hazard—new findings based on a multidisciplinary geophysical study,” Zeitschrift für Geomorphologie, vol. 54, no. 2, pp. 69–90, 2010.
31. F. Greco, G. Currenti, C. Del Negro et al., “Spatiotemporal gravity variations to look deep into the Southern flank of Etna volcano,” Journal of Geophysical Research B, vol. 115, no. 11, Article ID B11411, 2010.
32. G. Leucci and L. de Giorgi, “Microgravimetric and ground penetrating radar geophysical methods to map the shallow karstic cavities network in a coastal area (Marina Di Capilungo, Lecce, Italy),” Exploration Geophysics, vol. 41, no. 2, pp. 178–188, 2010.
33. A. G. Camacho, P. J. González, J. Fernández, and G. Berrino, “Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: application to deforming calderas,” Journal of Geophysical Research, vol. 116, no. B10, 2011.
34. L. V. Eppelbaum, “Review of environmental and geological microgravity applications and feasibility of their implementation at archaeological sites in Israel,” International Journal of Geophysics, vol. 2011, Article ID 927080, 9 pages, 2011.
35. A. Hajian, H. Zomorrodian, P. Styles, F. Greco, and C. Lucas, “Depth estimation of cavities from microgravity data using a new approach: the local linear model tree (LOLIMOT),” Near Surface
Geophysics, vol. 10, pp. 221–234, 2012.
36. L. V. Eppelbaum, “Application of microgravity at archaeological sites in Israel: some estimation derived from 3D modeling and quantitative analysis of gravity field,” in Proceedings of the
Symposium on the Application of Geophysics to Engineering and Environmental Problems Conference (SAGEEP), vol. 22, pp. 434–446, Fort Wort, Tex, USA, 2009. View at Publisher · View at Google
37. K. J. Sjostrom and D. K. Butler, “Noninvasive weight determination of stockpiled ore through microgravity measurements,” Report of the US Army Corps of Engineers, Paper GL-96-24, 1996.
38. E. Elawadi, A. Salem, and K. Ushijima, “Detection of cavities and tunnels from gravity data using a neural network,” Exploration Geophysics, vol. 32, no. 4, pp. 204–208, 2001. View at Publisher ·
View at Google Scholar
39. N. Debeglia and F. Dupont, “Some critical factors for engineering and environmental microgravity investigations,” Journal of Applied Geophysics, vol. 50, no. 4, pp. 435–454, 2002. View at
Publisher · View at Google Scholar · View at Scopus
40. D. Carbone and F. Greco, “Review of microgravity observations at Mt. Etna: a powerful tool to monitor and study active volcanoes,” Pure and Applied Geophysics, vol. 164, no. 1, pp. 1–22, 2007.
View at Publisher · View at Google Scholar
41. T. Jacob, J. Chery, R. Bayer et al., “Time-lapse surface to depth gravity measurements on a karst system reveal the dominant role of the epikarst as a water storage entity,” Geophysical Journal
International, vol. 177, no. 2, pp. 347–360, 2009. View at Publisher · View at Google Scholar · View at Scopus
42. G. Castiello, G. Florio, M. Grimaldi, and M. Fedi, “Enhanced methods for interpreting microgravity anomalies in urban areas,” First Break, vol. 28, no. 8, pp. 93–98, 2010.
43. S. Porzucek, “Some Applicability problems of Euler deconvolution to the interpretation of the results of microgravity survey,” in Proceedings of the Transactions of the Near Surface EAGE
Conference, P55, pp. 1–5, Zurich, Switzerland, 2010.
44. A. C. Dolgal and A. F. Sharkhimullin, “Increasing accuracy of monogenic gravity anomaly interpretation,” Geoinformatics, vol. 4, pp. 49–56, 2011 (Russian).
45. G. Kaufmann, D. Romanov, and R. Nielbock, “Cave detection using multiple geophysical methods: unicorn cave, Harz Mountains, Germany,” Geophysics, vol. 76, no. 3, pp. B71–B77, 2011. View at
Publisher · View at Google Scholar · View at Scopus
46. J. Panisova, R. Pašteka, J. Papco, and M. Fraštia, “The calculation of building corrections in microgravity surveys using close range photogrammetry,” Near Surface Geophysics, vol. 10, pp.
391–399, 2012.
47. W. M. Telford, L. P. Geldart, and R. E. Sheriff, Applied Geophysics, Cambridge University Press, Cambridge, UK, 1990.
48. L. V. Eppelbaum and B. E. Khesin, Geophysical Studies in the Caucasus, Springer, Heidelberg, Germany, 2012.
49. L. V. Eppelbaum, B. E. Khesin, and S. E. Itkis, “Archaeological geophysics in arid environments: examples from Israel,” Journal of Arid Environments, vol. 74, no. 7, pp. 849–860, 2010. View at
Publisher · View at Google Scholar · View at Scopus
50. D. S. Parasnis, Principles of Applied Geophysics, Chapman & Hall, London, UK, 4th edition, 1986.
51. D. T. Sandwell and W. H. F. Smith, “Global marine gravity from retracked Geosat and ERS-1 altimetry: ridge segmentation versus spreading rate,” Journal of Geophysical Research B, vol. 114, no. 1,
Article ID B01411, 2009. View at Publisher · View at Google Scholar · View at Scopus
52. A. Ginzburg and Z. Ben-Avraham, “A seismic refraction study of the north basin of the Dead Sea, Israel,” Geophysical Research Letters, vol. 24, no. 16, pp. 2063–2066, 1997. View at Scopus
53. M. Weber, K. Abu-Ayyash, A. Abueladas, et al., “Anatomy of the Dead Sea transform from lithospheric to microscopic scale,” Reviews of Geophysics, vol. 47, no. 2, 2010. View at Publisher · View at
Google Scholar
54. M. J. Wichura, “Algorithm AS 241: the percentage points of the normal distribution,” Applied Statistics, vol. 37, no. 3, pp. 477–484, 1988.
55. S. Shatterjee and A. S. Sadi, Regression Analysis by Example, John Wiley & Sons, New York, NY, USA, 1996.
56. J. O. Rawlings, S. G. Pantula, and D. A. Dickey, Applied Regression Analysis: A Research Tool, Springer, New York, NY, USA, 2nd edition, 1998.
57. M. H. Bingham and J. M. Fry, Regression: Linear Models in Statistics, Undergraduate Math Series, Springer, London, UK, 2010.
58. S. A. Taqieddin, N. S. Abderahman, and M. Atallah, “Sinkhole hazards along the eastern Dead Sea shoreline area, Jordan: a geological and geotechnical consideration,” Environmental Geology, vol.
39, no. 11, pp. 1237–1253, 2000. View at Publisher · View at Google Scholar · View at Scopus
59. A. Al-Zoubi, A. Abueadas, A. Akkawwi, L. Eppelbaum, E. Levi, and M. Ezersky, “Use of microgravity survey in the Dead Sea areas affected by the sinkholes hazard,” in Proceedings of the
Transactions of the 8th EUG Meeting, Geophysical Research Abstracts, vol. 14 of EGU2012-1982, Vienna, Austria, 2012.
|
{"url":"http://www.hindawi.com/journals/ijge/2013/341797/","timestamp":"2014-04-17T00:10:24Z","content_type":null,"content_length":"97427","record_id":"<urn:uuid:b2d4cbed-aa19-4a60-96c2-37ec0f164a66>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Triangles and Petals
Copyright © University of Cambridge. All rights reserved.
'Triangles and Petals' printed from http://nrich.maths.org/
Look at the equilateral triangle rotating around the equilateral triangle. It produces a flower with three petals:
Can you work out the perimeter of the flower's petals?
Now consider a flower made by the triangle rotating about a square - what is the perimeter of the petals now?
What is the perimeter when the centre of the flower is a regular pentagon, hexagon, heptagon...?
What can you say about the increase in perimeter as the number of sides of the centre shape increases?
Can you explain this increase?
What would be the perimeter of a flower whose centre is a regular $100$-sided polygon with side length $r$?
It may help to work in terms of $\pi$ throughout this problem.
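For readers who want a starting point after trying the problem, here is one way to organise the calculation. It is only a sketch, not NRICH's published solution, and it rests on the assumption that each petal is a circular arc of radius $r$ traced by the rotating triangle's vertices, spanning the full exterior angle at one vertex of the centre shape.

```latex
% Exterior angle at each vertex of a regular n-gon:
\alpha_n = 360^\circ - \frac{(n-2)\,180^\circ}{n} = 180^\circ + \frac{360^\circ}{n}.
% Assuming each petal is an arc of radius r spanning this angle, its length is
\ell_n = r\left(\pi + \frac{2\pi}{n}\right),
% so the flower of n petals has perimeter
P_n = n\,\ell_n = (n+2)\pi r.
% Since the exterior angles of any convex polygon sum to 360 degrees, this is
% n half-circles plus one full circle: P_n = n\pi r + 2\pi r, so each extra
% side would add exactly \pi r, and a regular 100-gon would give 102\pi r.
```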
Need help understanding why this is the answer to this equation
February 5th 2013, 10:49 AM
Need help understanding why this is the answer to this equation
Hi there, would really appreciate some help in understanding why this is the answer to this equation.
Find the value of 2/3 (4+3q) when q = 4/5.
The answer given in the back of the book I'm working from is 4 4/15.
Now when I type the expression into my calculator I get 26/5.
I have been trying to work out why the answer is 4 4/15 and can't seem to quite get it.
Thanks, Olly
February 5th 2013, 11:46 AM
Re: Need help understanding why this is the answer to this equation
Correction made on original post
February 5th 2013, 12:08 PM
Re: Need help understanding why this is the answer to this equation
I typed it in mine and I got 4 4/15. I think that your calculator does not do order of operations?
First thing you do is 3*(4/5).
Then you add 4 to it.
Finally, you multiply the sum you get, which is 6 2/5, by 2/3, and you get 4 4/15.
February 5th 2013, 01:19 PM
Re: Need help understanding why this is the answer to this equation
Thank you for your reply sakonpure6
OK, so I think I got my head around it. Without a calculator:
3*(4/5) = 3/1 x 4/5 = 12/5
12/5 = 2 2/5
2 2/5 + 4 = 6 2/5
6 2/5 = 32/5
32/5 x 2/3 = 64/15 = 4 4/15
Now, is there a simpler way to do this without a calculator, or do you have to do all these steps?
Also, my calculator is a Casio fx-991ES; I'm not sure if it does order of operations. It gives the final answer as 64/15; should it give the answer as a simplified mixed number, i.e. 4 4/15?
February 5th 2013, 02:57 PM
Re: Need help understanding why this is the answer to this equation
This is how I would type it in my calculator: 2/3(4+3*4/5), and it works :D
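For anyone checking this kind of arithmetic without a calculator, Python's fractions module keeps everything exact (the snippet is an illustration, not something from the thread):

```python
from fractions import Fraction

q = Fraction(4, 5)
result = Fraction(2, 3) * (4 + 3 * q)   # 2/3 * (4 + 12/5) = 2/3 * 32/5
print(result)                            # 64/15

# Convert to a mixed number, matching the book's answer of 4 4/15.
whole, rem = divmod(result.numerator, result.denominator)
print(f"{whole} {rem}/{result.denominator}")  # 4 4/15
```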
Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-21773
Buch, A; Kresch, A; Tamvakis, H (2004). Littlewood-Richardson rules for Grassmannians. Advances in Mathematics, 185(1):80-90.
The classical Littlewood-Richardson rule (LR) describes the structure constants obtained when the cup product of two Schubert classes in the cohomology ring of a complex Grassmannian is written as a
linear combination of Schubert classes. It also gives a rule for decomposing the tensor product of two irreducible polynomial representations of the general linear group into irreducibles, or
equivalently, for expanding the product of two Schur S-functions in the basis of Schur S-functions. In this paper we give a short and self-contained argument which shows that this rule is a direct
consequence of Pieri's formula (P) for the product of a Schubert class with a special Schubert class. There is an analogous Littlewood-Richardson rule for the Grassmannians which parametrize maximal
isotropic subspaces of C^n, equipped with a symplectic or orthogonal form. The precise formulation of this rule is due to Stembridge (St), working in the context of Schur's Q-functions (S); the
connection to geometry was shown by Hiller and Boe (HB) and Pragacz (Pr). The argument here for the type A rule works equally well in these more difficult cases and gives a simple derivation of
Stembridge's rule from the Pieri formula of (HB). Currently there are many proofs available for the classical Littlewood-Richardson rule, some of them quite short. The proof of Remmel and Shimozono
(RS) is also based on the Pieri rule; see the recent survey of van Leeuwen (vL) for alternatives. In contrast, we know of only two prior approaches to Stembridge's rule (described in (St, HH) and
(Sh), respectively), both of which are rather involved. The argument presented here proceeds by defining an abelian group H with a basis of Schubert symbols, and a bilinear product on H with
structure constants coming from the Littlewood-Richardson rule in each case. Since this rule is compatible with the Pieri products, it suffices to show that H is an associative algebra. The proof of
associativity is based on Schutzenberger slides in type A, and uses the more general slides for marked shifted tableaux due to Worley (W) and Sagan (Sa) in the other Lie types. In each case, we need
only basic properties of these operations which are easily verified from the definitions. Our paper is self-contained, once the Pieri rules are granted. The work on this article was completed during
a fruitful visit to the Mathematisches Forschungsinstitut Oberwolfach, as part of the Research in Pairs program. It is a pleasure to thank the Institut for its hospitality and stimulating atmosphere.
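As a concrete illustration of the objects being discussed (these are standard textbook instances, not examples taken from the paper itself):

```latex
% The smallest Littlewood-Richardson expansion: a product of two
% single-box Schur functions,
s_{(1)} \cdot s_{(1)} = s_{(2)} + s_{(1,1)}.
% A Pieri product with a single-row special class: add one box to the
% partition (2,1) in all admissible ways,
s_{(2,1)} \cdot s_{(1)} = s_{(3,1)} + s_{(2,2)} + s_{(2,1,1)}.
```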
No. 2579:
Today, UH Math Professor Krešo Josić talks about music and mathematics. The University of Houston presents this series about the machines that make our civilization run, and the people whose
ingenuity created them.
The history of musical instruments goes back tens of thousands of years. Fragments of bone flutes have been found at Neanderthal sites. Early instruments show that humans have long produced
pitched sound — sound containing predominantly a single frequency. Finger holes on ancient flutes indicate that prehistoric musicians had some concept of a musical scale.
Music is also the most mathematical art. Already the Pythagoreans of ancient Greece studied the mathematics of musical instruments. According to legend, Pythagoras was listening to blacksmiths at
work. He noticed that some of their hammers produced a pleasing combination of sounds when struck together. When he examined these hammers he found that one weighed 12 pounds and another 6
pounds. The first hammer weighed exactly twice as much as the second.
Pythagoras had discovered a fundamental principle of harmony. Most Western instruments produce sound by causing air to vibrate a certain number of times per second. When two instruments cause air
to vibrate at frequencies whose ratios form simple fractions — 2:1 or 3:2, say — we perceive a pleasing combination of sounds. The consonance between the blacksmith’s hammers caught Pythagoras’
attention. For instance, the 6 pound hammer produced vibrations exactly twice as fast as those of the 12 pound hammer. This two-to-one relationship between notes is called an octave, and appears
in nearly all music produced by humankind.
But the problem of translating Pythagoras’ observations into the design of musical instruments has bedeviled some of the brightest minds of millennia past. How do we tune a piano, or any ensemble
instrument, so it sounds good when it’s played with other instruments?
Along with the principles of harmony, Pythagoreans also recognized a primary problem of instrument design. The notes used in Western music can be obtained by starting with a frequency and
multiplying it by 3/2, and then by 3/2 again and again. This produces a sequence of notes - spanning different octaves - called the circle of fifths. All notes in the Western chromatic scale can
be obtained this way — but only approximately. The problem is that if we keep multiplying a frequency by 3/2, we never get frequencies that are exactly separated by octaves.
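The mismatch can be checked exactly with Python's fractions module (an illustrative snippet, not part of the essay): twelve fifths overshoot seven octaves by a small ratio known as the Pythagorean comma.

```python
from fractions import Fraction

twelve_fifths = Fraction(3, 2) ** 12   # stack twelve perfect fifths
seven_octaves = Fraction(2, 1) ** 7    # compare with seven octaves
comma = twelve_fifths / seven_octaves  # the leftover: the Pythagorean comma
print(comma)         # 531441/524288
print(float(comma))  # about 1.01364, never exactly 1, so the circle never closes
```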
Hundreds of different tunings have been proposed to work around this problem. Bach wrote his Well-Tempered Clavier to demonstrate the advantage of a slightly imperfect tuning he favored. With
past tunings some music sounded very pleasing, while other compositions could sound woefully out of tune. In nearly all music today we use equal temperament — a compromise that effectively makes
all our music sound equally pleasing, or equally out of tune – depending on your perspective.
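A quick numerical check of the compromise (again an illustration, not from the essay): in equal temperament every semitone has the ratio 2^(1/12), so twelve semitones give an exact octave while seven semitones only approximate the pure fifth of 3/2.

```python
semitone = 2 ** (1 / 12)          # equal-tempered semitone ratio
print(round(semitone, 6))         # 1.059463
print(round(semitone ** 12, 6))   # 2.0, an exact octave
print(round(semitone ** 7, 6))    # 1.498307, slightly flat of the pure 3/2
```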
The next time you consider the cold rationality of mathematics, also remember that math is behind the sound of violins, pianos, and song. The beauty of mathematics shines through in the music it
helps us create.
I’m Krešimir Josić, at the University of Houston, where we’re interested in the way inventive minds work.
(Theme music)
Numerous websites explain the basics of harmony. Here is a very simple explanation that includes an example with sound http://www.aboutscotland.co.uk/harmony/prop.html.
You can find more about the mathematics of musical instruments in the following article that appear in the American Mathematical Monthly: http://www.math.uh.edu/~josic/myweb/research/papers/
Although idiosyncratic, I like Bill Sethares’ approach to tuning. He shows how modern technology allows us to easily create scales with any number of notes. There are some pretty interesting
examples on his webpage http://eceserv0.ece.wisc.edu/~sethares/.
The story of the Neanderthal flute has been discussed previously in this series http://www.uh.edu/engines/epi1232.htm. However, the finding remains very controversial http://en.wikipedia.org/wiki
All images are in the Creative Commons.
The Engines of Our Ingenuity is Copyright © 1988-2010 by John H. Lienhard.
edHelper.com Logic Puzzle Worksheet
1. The Christmas tree with twenty-eight ornaments is not the tree with twenty-three candy canes.
2. There are no more than forty-eight ornaments on Olivia's Christmas tree.
3. There are no more than thirty-four ornaments on Taylor's Christmas tree.
4. There are no more than forty-eight ornaments on Justin's Christmas tree.
5. Olivia's Christmas tree has four more ornaments than the number of candy canes.
6. Taylor's Christmas tree has the fewest number of ornaments.
7. Ryan's Christmas tree is not the tree with twenty-four candy canes. His tree is also not the one with twenty-three candy canes.
8. The Christmas tree with thirty-eight ornaments is not the tree with twenty-seven candy canes.
9. The Christmas tree with forty-nine ornaments has twenty-eight candy canes.
10. Shelby's Christmas tree has thirty-one more ornaments than the number of candy canes.
11. Justin's Christmas tree has eight more ornaments than the number of candy canes.
12. Nicole's Christmas tree has the most number of candy canes.
13. The Christmas tree with twenty-five ornaments is not the tree with twenty-four candy canes.
Nonlinear phase FIR filter design according to the l2 norm with constraints for the complex error
- IEEE Trans. on Signal Processing , 1995
"... We consider the design of digital filters and discuss the inclusion of explicitly specified transition bands in the frequency domain design of FIR filters. We put forth the notion that
explicitly specified transition bands have been introduced in the filter design literature as an indirect and often ..."
Cited by 14 (1 self)
We consider the design of digital filters and discuss the inclusion of explicitly specified transition bands in the frequency domain design of FIR filters. We put forth the notion that explicitly
specified transition bands have been introduced in the filter design literature as an indirect and often inadequate approach for dealing with discontinuities in the desired frequency response. We
also present a rapidly converging, robust, simple algorithm for the design of optimal peak constrained least square lowpass FIR filters that does not require the use of transition bands. This
versatile algorithm will design linear and minimum phase FIR filters and gives the best L2 filter and a continuum of Chebyshev filters as special cases. 1. INTRODUCTION We consider the definition of
optimality for digital filter design and conclude that a constrained least squared error criterion with no transition band is often the best approximation measure for many physical filtering
problems. This comes fro...
- IEEE Trans. Signal Processing , 1998
"... Abstract — We presented the basic concepts for peakconstrained least-squares (PCLS) optimization in previous papers. We present advanced PCLS optimization concepts in this paper. I. ..."
Cited by 9 (0 self)
Abstract — We presented the basic concepts for peakconstrained least-squares (PCLS) optimization in previous papers. We present advanced PCLS optimization concepts in this paper. I.
- IEEE Int. Symp. Circ. Syst , 2001
"... A method for the design of Finite Precision Coefficient (FPC) Peak Constrained Least Squares (PCLS) Finite duration Impulse Response (FIR) digital filters based on Adams ’ optimality criterion
and an efficient local search method is presented. Simple quantization of the infinite precision filter coe ..."
Cited by 3 (2 self)
A method for the design of Finite Precision Coefficient (FPC) Peak Constrained Least Squares (PCLS) Finite duration Impulse Response (FIR) digital filters based on Adams ’ optimality criterion and an
efficient local search method is presented. Simple quantization of the infinite precision filter coefficients typically leads to filter designs that fail to meet the frequency response and Passband
to Stopband energy Ratio (PSR) specifications. It is shown that it is possible to implement computationally efficient filters (with reduced filter FPC wordlengths) that meet the passband and stopband
attenuation specifications at the expense of a lower PSR energy ratio. 1.
, 1994
"... 2-band paraunitary FIR filter banks can be used to generate a multiresolution analysis with compactly supported orthonormal (ON) wavelets. The filter design problem is formulated and solved (a)
as a constrained L1 optimization problem and (b) as a constrained L2 optimization problem which allows arb ..."
Cited by 2 (1 self)
2-band paraunitary FIR filter banks can be used to generate a multiresolution analysis with compactly supported orthonormal (ON) wavelets. The filter design problem is formulated and solved (a) as a
constrained L1 optimization problem and (b) as a constrained L2 optimization problem which allows arbitrary compromises between an L2 and an L1 approach with both of them as special cases. Additional
flatness constraints can also be easily included. The L2 and the L1 design are based on the Kuhn-Tucker (KT) conditions and the alternation theorem, respectively. Therefore, optimality of the
solution is guaranteed. The method (a) is a simpler alternative to a known method. The method (b) solves a more general problem than the approaches known in the literature including all of them as
special cases. 1. INTRODUCTION As is well-known [2] 2-band paraunitary FIR filter banks can be used to generate a multiresolution analysis with compactly supported orthonormal (ON) wavelets. The
realvalued FIR ...
- In International Conference on Digital Signal Processing , 1995
"... In this paper, several modifications of the Parks-McClellan (PM) program are described that treat the band edges differently than does the PM program. The first exchange algorithm we describe
allows (1) the explicit specification of ffi p and ffi s and (2) the specification of the half-magnitude fre ..."
Cited by 1 (1 self)
In this paper, several modifications of the Parks-McClellan (PM) program are described that treat the band edges differently than does the PM program. The first exchange algorithm we describe allows
(1) the explicit specification of ffi p and ffi s and (2) the specification of the half-magnitude frequency, !o . The set of lowpass filters obtained with this algorithm is the same as the set of
lowpass filters produced by the PM algorithm. We also find that if passband monotonicity is desired in the design of filters having very flat passbands it is also desirable to modify the usual way of
treating the band edges. The second multiple exchange algorithm we describe produces filters having a specified ffi p and ffi s but also includes a measure of the integral square error. 1
Introduction In this paper, several modifications of the Parks-McClellan (PM) program [11, 13, 17] are described. Recall that in their approach to the design of digital filters, the band edges are
specified and the ...
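As general background to these papers, here is a minimal windowed-sinc lowpass FIR design in pure Python. It illustrates only the textbook baseline that the constrained least-squares methods above improve on; it is not an implementation of any PCLS algorithm, and the function name and parameters are our own.

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc lowpass FIR taps (Hamming window).
    cutoff is the normalized cutoff frequency in cycles/sample, 0 < cutoff < 0.5.
    """
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2
        # Ideal lowpass impulse response (sinc), with the x = 0 limit handled.
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    gain = sum(taps)                      # normalize so the DC gain is 1
    return [t / gain for t in taps]

taps = lowpass_fir(31, 0.1)
print(len(taps), round(sum(taps), 6))          # 31 1.0
print(abs(taps[0] - taps[-1]) < 1e-12)         # True: symmetric, hence linear phase
```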
A Confused Mind
In a recent New York Times article, Hal Varian — a respected mainstream economist and textbook author — describes the contributions of Nobel Laureate John Nash:
So what did John Nash actually do? Viewers of the Oscar-winning film A Beautiful Mind might come away thinking he devised a new strategy to pick up girls.
Mr. Nash's contribution was far more important than the somewhat contrived analysis about whether or not to approach the most beautiful girl in the bar.
What he discovered was a way to predict the outcome of virtually any kind of strategic interaction. Today, the idea of a "Nash equilibrium" is a central concept in game theory.
Before proceeding, I must endorse Varian's critique of the bar scene from the movie. For those who have seen it, be assured that the (implausible) strategizing — in which Russell Crowe instructs his
friends that the only way to success is for them all to ignore the pretty girl and focus instead on her plainer friends — does not constitute a true Nash equilibrium. The situation is comparable to
that faced by OPEC countries bargaining on restrictions in output: Even if all the boys would be better off if all ignored the pretty blonde, there would still be an incentive for each one to deviate
from the pact and approach her. (In any event, she seemed rather stuck-up to this viewer.)
Having said this, I must disagree with the rest of Varian's analysis. Nash equilibrium is defined as a situation in which each player has chosen an optimal strategy, given the choices of the others.
Such a situation constitutes an equilibrium because, if reached, there would be no reason for anyone to change strategy; it would be a stable resting point.
On the face of it, there is nothing objectionable with this definition, assuming one wants to model strategic interactions in the formal manner of game theorists. Yet even so, there is no reason that
in any game the economist should predict that players will pick strategies to form a Nash equilibrium. The players are out to maximize utility; they care nothing for the stability of the outcome.
This distinction underscores the flaws in Varian's subsequent commentary.
Varian states that game theory assumes full rationality among all players, and admits that this assumption is far from reality. With this I am in complete agreement. However, Varian seems to think
that this explains the failure of human players to reach Nash equilibria in experimental settings. As I shall argue, however, it is not the assumption of rationality, but the obsession with
equilibrium analysis, that has baffled mainstream economists in these games. This confusion is epitomized in Varian's illustrative example:
Consider a simple example: several players are each asked to pick a number ranging from zero to 100. The player who comes closest to the number that is half the average of what everyone[1] … says
wins a prize. Before you read further, think about what number you would choose.
Now consider the game theorist's analysis. If everyone is equally rational, everyone should pick the same number. But there is only one number that is equal to half of itself — zero.
This analysis is logical, but it isn't a good description of how real people behave when they play this game: almost no one chooses zero.
Varian — like most game theorists — here confuses rationality with omniscience. It is one thing to say players should perform correct calculations of probabilities and payoffs; it is quite another to
say they should be able to know the moves of their opponents beforehand. Moreover, Varian simply asserts that if everyone is equally rational, everyone should pick the same number; I personally do
not see why this should be true.
But the fundamental weakness in Varian's analysis is his criticism of real-life players who fail to conform to Nash equilibrium. Varian believes this failure demonstrates their irrationality, when in
fact it only proves the limitations of his equilibrium concept.
To see this, imagine that there are only two players, John and Jane, playing the game Varian describes. Suppose that John picks the number 3 while Jane picks the number 1. One-half the average of
their picks is 1, and therefore Jane wins the prize; she has played perfectly.
According to Varian's logic,[2] however, Jane's play would be irrational; rather than playing the perfect move of 1, she should have opted for the "rational" choice of zero.
It is clear then that the word "rational" means much more to the typical game theorist than simply "flawless calculation." Ultimately, "rational" describes a player who analyzes games in the way that
typical game theorists do. We have seen that a player may choose a number different from zero in this simple game and still win, and so it is difficult to argue that a group of perfectly rational
players should all assume everyone else is going to play zero.
The other game that Varian describes (equivalent to a game known as the traveler's dilemma), as well as other such "puzzling" games (including the centipede game, finitely repeated prisoner's
dilemma, and ultimatum game) studied in the experimental literature, all have this flavor. They are games with a unique[3] equilibrium in which the Nash strategy is disastrous if the other players
choose a non-Nash strategy. In other words, the only reason players would ever choose the Nash strategy in these games is if they were absolutely certain their opponents would do the same. Real-life
players know that there is no such thing as certainty, and consequently make much better moves than those recommended by the ostensibly rational game theorist.
Varian concludes his article with this amusing anecdote:
Back to picking up girls. In the movie, the fictional John Nash described a strategy for his male drinking buddies, but didn't look at the game from the woman's perspective, a mistake no game
theorist would ever make. A female economist I know once told me that when men tried to pick her up, the first question she asked was: "Are you a turkey?" She usually got one of three answers:
"Yes," "No," and "Gobble-gobble." She said the last group was the most interesting by far. Go figure.
At the risk of priggishness, I must point out that Russell Crowe did look at the game from the woman's perspective; that's why hitting on the pretty blonde first, and then settling for her friends,
was a losing strategy.
Finally, I must also disagree with Varian's colleague (who is no doubt a typical game theorist): Never once has my reply of "Oink-oink!" to a female's inquiry led to success.
[1] In Varian's article, he says the winner picks one-half the average of what everyone else says, but I have changed it to the more conventional game. The logic remains the same.
[2] Varian would probably say that not only is he assuming full rationality among players, but also that all players know they are all fully rational. As I argue above, however, I still believe his
conclusion does not follow.
[3] For the ultimatum game, it is the stricter subgame perfect Nash equilibrium that is unique.
Mplus Discussion >> Beginner Question
John Haltigan posted on Wednesday, October 04, 2006 - 8:37 am
Dear Linda & Bengt,
I am in the fourth week of an SEM class as part of my doctoral training. The class has introduced me to MPlus and we just had an assignment using the software (4.0). The assignment asks us to analyze
a structural model (no latent), determine its fit, and if doesn't, how it might be improved to fit the data.
Running the model as specified reveals that it does not fit. In analyzing the output, one can see that a direct path of the model is non-significant (Est./S.E. = .363). As such, I took this path out of
the model, and then re-ran it. It now fit the data, but just barely (chi-square p = .06). Examination of the residuals for both the original model and the fitting model does not show any values at the professor-identified 'concern' threshold of >.1.
Nonetheless, to improve on the model, I added a direct path that was not specified in the original model, adding it to the fitting model above. This added path made the chi-square value more non-significant (p = .356), so I was pleased; but in the interest of parsimony, is the simplest model the best?
I recognize the question is a bit elementary and poorly asked, but please bear with me as I am new at this.
Linda K. Muthen posted on Wednesday, October 04, 2006 - 9:10 am
One always seeks the most parsimonious model that fits the data, using theory as a guide. Deleting paths that are coincidentally non-significant is not recommended to improve model fit. This may simply capitalize on chance relationships in the sample data. Instead, you should retain that path and try to determine why the model does not fit. Likewise, paths should not be added simply to improve
model fit unless there is a theoretical reason to do so. Finally, you should be using Version 4.1 which is the latest version of Mplus.
J.D. Haltigan posted on Wednesday, October 04, 2006 - 9:48 am
Thanks for the feedback. In examining the residuals for the first (nonfitting) model, none of them approached .1 (absolute); the closest would have been .068 (positive). In this case, since no
residuals seem to suggest underfitting or overfitting, is it necessary to go back to the theoretical drawing board without deleting or adding any paths to the model?
Linda K. Muthen posted on Wednesday, October 04, 2006 - 10:47 am
One issue to think about is whether your variables are valid and reliable measures of the constructs intended.
J.D. Haltigan posted on Wednesday, October 04, 2006 - 10:54 am
For purposes of this assignment, we are to assume that diagnostics on the data have been performed and that all measures are reliable and valid. That said, is it feasible to delete the nonsignificant path to obtain model fit, since this would seem to be our only option (or throw out the theory altogether)?
Linda K. Muthen posted on Wednesday, October 04, 2006 - 12:33 pm
As stated earlier, I would not delete a non-significant path. It sounds like this would be a good question for the class to discuss.
F01LEF LU factorization of real tridiagonal matrix
F04EAF Solution of real tridiagonal simultaneous linear equations, one right-hand side (Black Box)
F04FAF Solution of real symmetric positive-definite tridiagonal simultaneous linear equations, one right-hand side (Black Box)
F04LEF Solution of real tridiagonal simultaneous linear equations (coefficient matrix already factorized by F01LEF)
F08FEF Orthogonal reduction of real symmetric matrix to symmetric tridiagonal form
F08FFF Generate orthogonal transformation matrix from reduction to tridiagonal form determined by F08FEF
F08FSF Unitary reduction of complex Hermitian matrix to real symmetric tridiagonal form
F08FTF Generate unitary transformation matrix from reduction to tridiagonal form determined by F08FSF
F08GEF Orthogonal reduction of real symmetric matrix to symmetric tridiagonal form, packed storage
F08GFF Generate orthogonal transformation matrix from reduction to tridiagonal form determined by F08GEF
F08GSF Unitary reduction of complex Hermitian matrix to real symmetric tridiagonal form, packed storage
F08GTF Generate unitary transformation matrix from reduction to tridiagonal form determined by F08GSF
F08HEF Orthogonal reduction of real symmetric band matrix to symmetric tridiagonal form
F08HSF Unitary reduction of complex Hermitian band matrix to real symmetric tridiagonal form
F08JCF All eigenvalues and optionally all eigenvectors of real symmetric tridiagonal matrix, using divide and conquer
F08JEF All eigenvalues and eigenvectors of real symmetric tridiagonal matrix, reduced from real symmetric matrix using implicit QL or QR
F08JFF All eigenvalues of real symmetric tridiagonal matrix, root-free variant of QL or QR
F08JGF All eigenvalues and eigenvectors of real symmetric positive-definite tridiagonal matrix, reduced from real symmetric positive-definite matrix
F08JJF Selected eigenvalues of real symmetric tridiagonal matrix by bisection
F08JKF Selected eigenvectors of real symmetric tridiagonal matrix by inverse iteration, storing eigenvectors in real array
F08JSF All eigenvalues and eigenvectors of real symmetric tridiagonal matrix, reduced from complex Hermitian matrix, using implicit QL or QR
F08JUF All eigenvalues and eigenvectors of real symmetric positive-definite tridiagonal matrix, reduced from complex Hermitian positive-definite matrix
F08JXF Selected eigenvectors of real symmetric tridiagonal matrix by inverse iteration, storing eigenvectors in complex array
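Several of the routines above (F08JJF in particular) find selected eigenvalues of a real symmetric tridiagonal matrix by bisection. The pure-Python sketch below is not NAG code, only my illustration of that bisection idea via the Sturm sequence; it is checked against the closed-form eigenvalues 2 - 2*cos(k*pi/(n+1)) of the tridiagonal matrix with 2 on the diagonal and -1 off it:

```python
import math

def tridiag_eigenvalues(d, e, tol=1e-12):
    """All eigenvalues of the symmetric tridiagonal matrix with diagonal d
    (length n) and off-diagonal e (length n-1), found by bisection on the
    Sturm sequence -- the same idea F08JJF uses."""
    n = len(d)

    def count_below(x):
        # Number of eigenvalues strictly below x: count negative pivots
        # in the LDL^T factorization of (T - x*I).
        cnt = 0
        q = 1.0
        for i in range(n):
            q = (d[i] - x) - (e[i - 1] ** 2 / q if i > 0 else 0.0)
            if q == 0.0:
                q = 1e-300  # avoid division by zero at an exact pivot
            if q < 0.0:
                cnt += 1
        return cnt

    # Gershgorin discs give an interval containing every eigenvalue.
    radius = [0.0] * n
    for i in range(n - 1):
        radius[i] += abs(e[i])
        radius[i + 1] += abs(e[i])
    lo = min(d[i] - radius[i] for i in range(n))
    hi = max(d[i] + radius[i] for i in range(n))

    eigenvalues = []
    for k in range(1, n + 1):
        a, b = lo, hi
        while b - a > tol:
            mid = 0.5 * (a + b)
            if count_below(mid) >= k:
                b = mid
            else:
                a = mid
        eigenvalues.append(0.5 * (a + b))
    return eigenvalues

# The 3x3 matrix with 2 on the diagonal and -1 off it has eigenvalues
# 2 - sqrt(2), 2, 2 + sqrt(2).
eigs = tridiag_eigenvalues([2.0, 2.0, 2.0], [-1.0, -1.0])
assert all(abs(a - b) < 1e-9
           for a, b in zip(eigs, [2 - math.sqrt(2), 2.0, 2 + math.sqrt(2)]))
```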
Solving One-Step Equations Involving Multiplication
Home > Solving Equations
> Multiplication Equations
Solving One-Step Equations
Multiplication Equations
Solving one-step equations involving multiplication is easy as long as you can divide!
The next set of one-step equations do not contain a constant that you must add or subtract to remove.
These equations contain a coefficient. A coefficient is a number that is multiplied by the variable.
Therefore, we must remove the coefficient. Take a look at this equation: 3x = 9. Since there is no mathematical symbol between the 3 and the x we know that means multiplication. So, what number times
3 will give us an answer of 9?
You know the answer, right? Yes, 3! 3*3 = 9
Another question to ponder- What is the opposite of multiplication? Yes... division! We are going to divide in order to get x by itself!
Why divide? What is 3/3? Yes... 1! What is 1*x? You got it.... x! That's how we get x by itself.
We want the coefficient to be 1. Anytime you divide a number by itself, you will get an answer of 1!
Let's look at a few examples:
Example 1
Pretty easy, huh? I think multiplication equations are even easier than addition and subtraction equations. Keep working, you'll get the hang of it!
Example 2
Having trouble following this example? Take a look at the following video which will take you step by step through the same problem.
Not too hard if the coefficient is an integer! What happens if the coefficient is a fraction?
Think: If I have 2/3x, how can I make 2/3 a coefficient of 1?
Yes... you will divide by 2/3, but...Do you remember the term reciprocal? When you divide a fraction, you actually multiply by the reciprocal.
If you take 2/3 and you flip it to 3/2, that is the reciprocal! If you multiply by the reciprocal, you will have a coefficient of 1.
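The reciprocal step can be checked with exact fractions. This short Python sketch uses an equation of my own choosing, (2/3)x = 4, not one of the worked examples above:

```python
from fractions import Fraction

coefficient = Fraction(2, 3)
right_side = Fraction(4)

# Multiply both sides by the reciprocal, 3/2, so the coefficient becomes 1.
reciprocal = 1 / coefficient          # Fraction(3, 2)
x = right_side * reciprocal

assert coefficient * reciprocal == 1  # the coefficient is now 1
assert x == 6                         # check: (2/3) * 6 = 4
```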
Watch this:
Example 3
You may also have problems where your answer results in a fraction. Here's one more example:
Example 4
Still confused? Check out this example on the following video.
In this lesson, you learned a new vocabulary word, "coefficient" and hopefully you now have a better understanding of how to solve multiplication equations.
Can this nested sum be expressed in terms of generalized harmonic numbers and the cycle index polynomials of the symmetric groups?
For a paper I was working on recently I needed to find the value of the following sum:
$$S(n,k) = \sum_{i_1 = 1}^n \sum_{i_2 = i_1+1}^n \cdots \sum_{i_k=i_{k-1}+1}^n \frac{1}{i_1 i_2 \cdots i_k}.$$
I found a couple of references (by Adamchik and Cheon and El-Mikkaway) that have an expression for $S(n,k)$ as a polynomial containing generalized harmonic numbers $H_n^{(r)}$, where $$H_n^{(r)} = \sum_{j=1}^n \frac{1}{j^r}.$$ For example, $$S(n,2) = \frac{1}{2}\left(H_n^2 - H^{(2)}_n \right),$$ $$S(n,3) = \frac{1}{6}\left(H_n^3 - 3H_n H^{(2)}_n + 2 H_n^{(3)}\right),$$ $$S(n,4) = \frac{1}{24}\left(H_n^4 - 6 H^2_n H_n^{(2)} + 3 (H_n^{(2)})^2 + 8 H_n H_n^{(3)} - 6 H_n^{(4)}\right).$$
Neither of these papers considers the corresponding polynomial sequence of indeterminates (the polynomials before substituting in the generalized harmonic numbers), though. Calculations for small
values of $n$ indicate that these are the cycle index polynomials of the symmetric groups, with the sign pattern such that each factor of $H_n^{(r)}$ contributes a $+1$ if $r$ is odd and a $-1$ if
$r$ is even.
Could someone give a proof of this, particularly one with a combinatorial flavor that gives some real insight into why the cycle index polynomials of the symmetric groups show up here (assuming that
they do)?
(For the record, I don't need this answered for my paper. I just want to know for my own sake.)
As a side note, the papers also give the extremely (and, to me, surprisingly) simple expression $$S(n,k) = \frac{1}{n!} \left[ n+1 \atop k+1 \right],$$ where $\left[ n \atop k \right]$ is an unsigned
Stirling number of the first kind.
co.combinatorics symmetric-group polynomials
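As a numerical sanity check (my own, not from the cited papers), one can evaluate $S(n,k)$ directly over $k$-subsets with exact rational arithmetic and compare it against both the harmonic-number formulas and the Stirling identity, extracting $\left[ n+1 \atop k+1 \right]$ as a coefficient of $t(t+1)\cdots(t+n)$:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial, prod

def S(n, k):
    """The nested sum: sum of 1/(i_1 ... i_k) over k-subsets of {1,...,n}."""
    return sum(Fraction(1, prod(c)) for c in combinations(range(1, n + 1), k))

def H(n, r=1):
    """Generalized harmonic number H_n^{(r)}."""
    return sum(Fraction(1, j ** r) for j in range(1, n + 1))

def unsigned_stirling1_row(n):
    """Coefficients of t(t+1)...(t+n-1); entry k is [n atop k]."""
    coeffs = [1]
    for a in range(n):
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += a * c      # constant part of the factor (t + a)
            nxt[i + 1] += c      # the t part of the factor (t + a)
        coeffs = nxt
    return coeffs

n = 6
assert S(n, 2) == (H(n) ** 2 - H(n, 2)) / 2
assert S(n, 3) == (H(n) ** 3 - 3 * H(n) * H(n, 2) + 2 * H(n, 3)) / 6

row = unsigned_stirling1_row(n + 1)   # coefficients of t(t+1)...(t+n)
for k in range(1, n + 1):
    assert S(n, k) == Fraction(row[k + 1], factorial(n))
```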
4 Answers
One can rephrase Qiaochu's argument without using symmetric functions as follows. A standard property of $\left[ n+1\atop k+1\right]$ is the generating function $$ \sum_{k=0}^{n+1}\left[ n+1\atop k\right]t^k = t(t+1)(t+2)\cdots (t+n). $$ Equating coefficients of $t^{k+1}$ gives the result $$ S(n,k) =\frac{1}{n!}\left[ n+1\atop k+1\right]. $$ Now if the cycle index of the symmetric group $S_k$ is $Z_k(x_1,x_2,\dots,x_k)$, then it is well known that $$ \sum_k Z_k(x_1,x_2,\dots)t^k = \exp\sum_{r\geq 1}x_r\frac{t^r}{r}. $$ Put $x_r=(-1)^{r-1}H_n^{(r)}$ to get $$ \sum_{k\geq 0} Z_k(H_n^{(1)},-H_n^{(2)},\dots)t^k = \left(1+t\right)\left(1+\frac t2\right)\cdots \left(1+\frac tn\right), $$ from which it is immediate that $S(n,k)=Z_k(H_n,-H_n^{(2)},\dots,(-1)^{k-1}H_n^{(k)})$.
Thank you for that very clear explanation. – Mike Spivey Dec 24 '10 at 18:21
I'll edit in the details later, but this is essentially a consequence of Polya's enumeration theorem together with an inclusion-exclusion argument; see this blog post.
Edit: Okay, so it's a little easier than that; this is just a well-known identity for symmetric functions,
$$e_n(x_1, x_2, \dots) = \frac{1}{n!} \sum_{\sigma \in S_n} \text{sgn}(\sigma) p_{\sigma},$$
in disguise, where $p_{\sigma} = p_{\lambda_1} \cdots p_{\lambda_k}$ if $\sigma$ has cycle type $(\lambda_1, \dots, \lambda_k)$ and $p_k = \sum_i x_i^k$. You get your identity upon substituting $x_i = \frac{1}{i}$. The identity above is equivalent to the generating function identity
$$\prod_i (1 + x_i t) = \exp \left( p_1 t - \frac{p_2 t^2}{2} + \frac{p_3 t^3}{3} \mp \cdots \right)$$
which formally follows from the identity
$$\prod_i \frac{1}{1 - x_i t} = \exp \left( p_1 t + \frac{p_2 t^2}{2} + \frac{p_3 t^3}{3} + \cdots \right),$$
which is in turn a consequence of Polya's enumeration theorem. These are more refined versions of the identities which appear in a previous MO answer of mine, and both of the arguments in that answer prove this identity without change (you just have to keep track of all the weights); it's a weighted sum over injective functions. (These identities also automatically imply the Stirling number identity.)
Thanks, Qiaochu! I had the sense that someone who has a good grasp of cycle index polynomials would be able to explain the identity easily. – Mike Spivey Dec 23 '10 at 23:19
Are the formulae for $S(n,m)$ above, in terms of the generalised harmonic numbers $H_n^{(r)}=\sum_{k=1}^n\frac{1}{k^r}$, actually correct?
Mathematica, for example, gives $S(10,2)=1026576$ (for the Stirling number of the first kind), whereas $\frac{1}{2}\left(H_{10}^2-H_{10}^{(2)}\right) = \frac{177133}{50400}$.
Further, assuming the Pochhammer symbol is defined $(a)_n=a(a+1)(a+2)\cdots(a+n-1)$, then Adamchik's recursive formula is incorrect too!
I'm actually trying to get to grips with these formulae myself, so it's quite frustrating I can't get them to work. Perhaps I'm using incorrect conventions for symbols? However, I have checked and double checked everything. Can anyone help?
Correction: the first paragraph is nonsense; I got confused with the notation $S(n,m)$ for Stirling numbers, and the finite sum form (it was a late night...).
However, if you follow Adamchik's formula from his original paper you do get $S(n,2) = \frac{1}{2}\left(H_n - H_n^{(2)}\right)$, i.e. the power of $H_n$ is equal to $1$, and not equal to $2$ as expected. So the second paragraph still holds from what I can see. Any help on this matter would be much appreciated.
Comment: Look at the expression for $\left[ n \atop 3 \right]$ on the fifth line in Section 2 of Adamchik's paper. He does have $H_n^2$ there, not $H_n$. – Mike Spivey Nov 24 '11
Graterford, PA Science Tutor
Find a Graterford, PA Science Tutor
...For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous analysis of the test. As a Pennsylvania
certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Ma...
19 Subjects: including ACT Science, calculus, statistics, geometry
...The SAT Writing section will test your ability to read, write, and detect subtle errors in various passages. You will need to master both common writing conventions (such as proper use of the
comma and semicolon, etc.) and more advanced grammatical concepts (subjunctive voice, irregular past par...
34 Subjects: including physics, ACT Science, philosophy, calculus
Hi everyone, My name is Elsa, and I am here to help you learn and understand science. This past May, I graduated from Philadelphia University in Pre-Med as a major and Biochemistry as minor.
During the course of my education at PhilaU, I studied biology and chemistry intensely, and I found out that I enjoy learning and teaching science more than anything else.
10 Subjects: including chemistry, biochemistry, algebra 1, biology
Education is a life-long journey which the individual absorbs with every step taken down the path. It's more profound and visible in the beginning years but as long as the awesome capacity of the
mind is intact life's numerous wondrous lessons continue and trail behind us. My experience includes a...
28 Subjects: including astronomy, geometry, reading, sociology
...I've been studying French at home since I was 13 and in school since 8th grade. In college, I tutored beginner and intermediate French students. I also have experience studying abroad in
Montreal and France.
12 Subjects: including biology, English, reading, writing
Related Graterford, PA Tutors
Graterford, PA Accounting Tutors
Graterford, PA ACT Tutors
Graterford, PA Algebra Tutors
Graterford, PA Algebra 2 Tutors
Graterford, PA Calculus Tutors
Graterford, PA Geometry Tutors
Graterford, PA Math Tutors
Graterford, PA Prealgebra Tutors
Graterford, PA Precalculus Tutors
Graterford, PA SAT Tutors
Graterford, PA SAT Math Tutors
Graterford, PA Science Tutors
Graterford, PA Statistics Tutors
Graterford, PA Trigonometry Tutors
Nearby Cities With Science Tutor
Congo, PA Science Tutors
Creamery Science Tutors
Englesville, PA Science Tutors
Eshbach, PA Science Tutors
Fagleysville, PA Science Tutors
Gabelsville, PA Science Tutors
Gulph Mills, PA Science Tutors
Linfield, PA Science Tutors
Morysville, PA Science Tutors
Niantic, PA Science Tutors
Passmore, PA Science Tutors
Rahns, PA Science Tutors
Schultzville, PA Science Tutors
Valley Forge Science Tutors
Zieglersville, PA Science Tutors
Good Surface, Bad Surface - Surface classification
Maybe this question is very simple, but I don't know why it is hard for me. Thanks for any guidance and help.
We say a surface $S$ (a compact 2-dimensional Riemannian surface) is good (denoted $GS$) if every $2n$ points on the surface, $n\geq1$, can be separated by some geodesic into two distinct subsets $V_1$ and $V_2$ with $|V_1|=|V_2|=n$. Also, if a surface $S$ is not good, we say it is bad and denote it by $BS$.
For example, it is not difficult to show that the plane is a $GS$. Also, a sphere is a $GS$.
1) Do we have some $BS$ examples (or classes of examples)?
2) Can we characterize the $GS$ and $BS$ surfaces?
I can't find any $BS$ examples, and I also can't prove that candidate surfaces are $GS$.
For example, is the Klein bottle $GS$ or $BS$?
Are there any related works or questions about this problem?
at.algebraic-topology dg.differential-geometry geometry riemannian-geometry
By a surface, do you mean a two-dimensional Riemannian manifold? Or a 2-dimensional topological space (in which case, you are asking your question for ALL riemannian metrics)? – Igor Rivin Jan 9
'12 at 12:14
I am not a professional in geometry, but I am thinking of a two-dimensional compact Riemannian manifold. I can understand the Klein bottle and the geodesics on it, and I believe that the Klein bottle is $GS$, but so far I don't know which class the Klein bottle falls into. – Shahrooz Jan 9 '12 at 13:16
7 A flat torus can't be cut in two by a closed geodesic, so this gives an easy family of bad surfaces. Small perturbations of the flat metric should behave similarly. I think there are lots of
higher-genus surfaces with the same inseparability property. Flat Klein bottles are good (you can parametrize the separating geodesics relatively easily), but I don't know about general metrics. –
S. Carnahan♦ Jan 9 '12 at 14:27
@S. Carnahan: Every closed oriented higher-genus surface with a metric of negative curvature has a separating simple closed geodesic, and an infinite number of such. One way to attempt to do this is to take a hyperbolic surface $S$ (say of genus 3, for simplicity) and a very large equidistributed point set on it. A simple separating geodesic cuts $S$ into two pieces of unequal area, so it is plausible that for a sufficiently dense set there is no simple closed geodesic with half the points on each side. Of course, since the curves can be arbitrarily complicated, this is not clear. – Igor Rivin Jan 9 '12 at 21:48
1 Answer
Assuming that "geodesic" in this question means "simple closed geodesic", then every complete hyperbolic surface $S$ of finite area is "bad": you cannot even separate an arbitrary pair of points. The reason is that the union of simple closed geodesics on $S$ is nowhere dense (even more, its closure has Hausdorff dimension 1) by the result of Birman and Series, "Geodesics with bounded intersection numbers on surfaces are sparsely distributed", Topology 24 (1985). The paper is available at: http://www.math.columbia.edu/~jb/bdd-int.no-sparce.pdf
In view of this theorem, there exists an open disk $D\subset S$ which is disjoint from all simple closed geodesics in $S$. Now, take two points from this disk. I did not check it, but it is quite likely that the Birman-Series result also holds in the case of negatively pinched variable curvature.
Hyperbolic surfaces are probably still "bad" if you allow non-simple closed geodesics, but pairs of points no longer suffice; one could try to use Hausdorff dimension arguments in the product of a hyperbolic surface with itself to get a contradiction.
Dear Misha, thank you very much for your answer. Would you please give me a short reference related to your answer? – Shahrooz Apr 4 '12 at 18:58
@Shahrooz: I updated my answer to include the link to Birman-Series paper. – Misha Apr 5 '12 at 16:43
@ Dear Misha: thank you for the paper. – Shahrooz Apr 8 '12 at 9:14
[plt-scheme] HTDP Exercise 9.5.7
From: S Brown (ontheheap at gmail.com)
Date: Mon Apr 13 12:12:50 EDT 2009
I'm working my way through HTDP and I have a question regarding Exercise 9.5.7. The exercise asks you to define the function average-price, which takes as its input a list of prices and gives as output the average of the prices in the list. I came up with the following:
(define (checked-average-price toy-list)
  (cond
    ((empty? toy-list)
     (error 'checked-average-price "expected a non-empty list"))
    (else (average-price toy-list))))

(define (average-price toy-list)
  (cond
    ((empty? toy-list) 0)
    (else (/ (sum-list toy-list)
             (num-items toy-list)))))

(define (sum-list a-list)
  (cond
    ((empty? a-list) 0)
    (else (+ (first a-list)
             (sum-list (rest a-list))))))

(define (num-items a-list)
  (cond
    ((empty? a-list) 0)
    (else (+ 1 (num-items (rest a-list))))))
I came to this solution after not being able to figure out how to keep track of the sum of the items and also the number of items at the same time. So my concern/question is: is my solution acceptable, or should I keep trying to figure out how to track both values (sum and number of items) without resorting to helper functions such as sum-list and num-items? Thanks.
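For comparison, the "track both values at once" idea the poster is asking about is a single traversal carrying two accumulators. A hedged sketch of that idea (in Python rather than Scheme, purely for illustration):

```python
def average_price(prices):
    """Single-pass average: carry the running sum and running count
    together, instead of separate sum-list and num-items traversals."""
    total = 0
    count = 0
    for price in prices:
        total += price
        count += 1
    if count == 0:
        raise ValueError("average-price: expected a non-empty list")
    return total / count
```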
Posted on the users mailing list.
Annual Return
An annual rate of return is the profit or loss on an investment over a one-year period. There are many ways of calculating the annual rate of return. If the rate of return is calculated on a monthly
basis, multiplying it by 12 expresses an annual rate of return. This is often called the annual percentage rate (A.P.R.).
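A minimal Python sketch of the calculation (the function names are mine; note that multiplying by 12, as described, ignores compounding, while the compounded figure shown for contrast is often called the effective annual rate):

```python
def simple_apr(monthly_rate):
    """Annualize a monthly rate of return the way the text describes:
    multiply by 12 (no compounding)."""
    return monthly_rate * 12

def effective_annual_rate(monthly_rate):
    """Compounding alternative (my addition, not in the text):
    reinvest the return each month."""
    return (1 + monthly_rate) ** 12 - 1

# A 1% monthly return gives a 12% A.P.R., but about 12.68% compounded.
```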
M Theory Lesson 20
The paper 'Heisenberg Honeycombs Solve Veneziano Puzzle' shows how important the honeycomb concept is for physics. Take a look at the honeycomb applet: your left and right mouse buttons will shrink/expand the hexagons in the diagram.
For a $3 \times 3$ Hermitean matrix a typical honeycomb looks like:
Er, no. That's a bit messy. The Ys should look alike. Anyway, note that the infinite lines go off to the north east, north west and south.
Now why would such hexagon diagrams be so important for physical amplitudes? Remember we described the real moduli $M(0,4)(\mathbb{R})$ with a line segment corresponding to the 1-dimensional Stasheff polytope for the associator, labelled by a 1-level 3-leaved tree. The 2-dimensional analogue is one of two kinds of hexagon, each labelled by a 2-level tree. Could this be what this is really about?
9 Comments:
It should be noted that the honeycomb idea is present already in the Nobel-winning paper by Heisenberg. Neither Heisenberg nor anybody else until Kholodenko's paper took advantage of such a formulation of quantum mechanics. It is quite remarkable that development of this line of thought brings to life quite effortlessly the Gromov-Witten invariants for the small quantum cohomology ring. The
whole machinery could be discovered much sooner should physicists and mathematicians listen to each other at those times. At the same time, a typical physicist of that generation would most
likely say that such invariants are irrelevant for physics.
Yes, this is remarkable, both physically and from an historical point of view. I'm still working through Kholodenko's papers, but I will certainly be continuing with this train of thought -
although I am supposed to be doing many other things. Thanks, anonymous.
Hi Kea, I was unaware of the honeycomb idea in QM.
Bees are associated with the honeycomb.
Benzene rings are honeycomb-like when drawn in 2D.
Why does nature continue to use similar structures at different scales?
Still the most ubiquitous structure in nature appears to be the helix.
It is theorized in QM by Hestenes; found in nucleic acids; used as trajectory space in game theory, ballistics and mechanics; associated with electromagnetic reconnections and has been actually
been imaged in large scale near the galactic core.
Mark Morris, [PDF] ‘The Galactic Center Magnetosphere’, figure 2.
Whoa! This is a remarkable insight, and I think I will be able to find uses for it very quickly.
My interest, of course, is in the 3x3 matrices of primitive idempotents. We already know that these can be converted into 3x3 circulant Hermitian matrices and that these can represent the
So the idea is that you take linear combinations of these to produce quarks. This sounds suspiciously like adding three 3x3 circulant Hermitian matrices. For the u, it would be 2x the positron
plus 1x the anti neutrino, etc.
I think I will be able to find uses for it very quickly.
That's the idea, Carl!
Er, Carl ... with a baryon mass formula, it won't be so easy for people to dismiss these coincidences, will it?
Kea, I am quite certain that no one reads my Clifford algebra papers because I don't write so well that I am that easily understood, and I have my share of typos, but no one points them out or
asks questions.
All they do is look at the very simplest of equations (Koide, or the new baryon equation) and say that they are just coincidences. It really doesn't matter how many coincidences we write down,
that will always be the answer. My Clifford algebra approach is even more lonely than your functors.
But I am greatly heartened that this thing is solvable because the baryon "preon fine structure" is 1/729. It converges even faster than alpha, and it shows up in that damned number as well as
the neutrino masses. Because of this, I am quite confident we will eventually have a completely unified solution with simple calculations.
Because of this, I am quite confident we will eventually have a completely unified solution with simple calculations.
Oooh, that's nice. Not that I doubted it. Sorry if I haven't been too careful reading all your papers - I just feel my time can be better spent trying to understand things the M Theory way.
Addendum to Heisenberg's honeycombs. I was waiting for somebody to notice that in this paper it is shown (quite explicitly) what is wrong with quantum mechanics as we know it from the available textbooks. That, actually, it was discovered most likely by Kramers (not by Heisenberg as officially believed), and that all these people who got their Nobel Prizes for its creation hid
very skillfully who did what and when. By knowing this information, fortunately,situation can be repaired to a large extent, say, by using current mathematical results by Kerov, Vershik and
Okounkov, for example, in view of their combinatorial and topological nature nicely compatible with available spectroscopic data. Sad but true that it looks like nobody is interested in the
history of science. It is always good to know history before jumping to conclusions regarding what else can be done. Please keep in mind that quantum mechanics first and foremost
came out as an attempt to explain the available experimental data as good as possible. Such a task may look completely ugly for some mathematicians but in the end of the day, when it is used in
physics, unless the theory describes experiment logically well enough, it is going to be dismissed irrespective to current mathematical fashions.
Graterford, PA Science Tutor
Find a Graterford, PA Science Tutor
...For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous analysis of the test. As a Pennsylvania
certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Ma...
19 Subjects: including ACT Science, calculus, statistics, geometry
...The SAT Writing section will test your ability to read, write, and detect subtle errors in various passages. You will need to master both common writing conventions (such as proper use of the
comma and semicolon, etc.) and more advanced grammatical concepts (subjunctive voice, irregular past par...
34 Subjects: including physics, ACT Science, philosophy, calculus
Hi everyone, My name is Elsa, and I am here to help you learn and understand science. This past May, I graduated from Philadelphia University with a major in Pre-Med and a minor in Biochemistry.
During the course of my education at PhilaU, I studied biology and chemistry intensely, and I found out that I enjoy learning and teaching science more than anything else.
10 Subjects: including chemistry, biochemistry, algebra 1, biology
Education is a life-long journey which the individual absorbs with every step taken down the path. It's more profound and visible in the beginning years, but as long as the awesome capacity of the
mind is intact, life's numerous wondrous lessons continue and trail behind us. My experience includes a...
28 Subjects: including astronomy, geometry, reading, sociology
...I've been studying French at home since I was 13 and in school since 8th grade. In college, I tutored beginner and intermediate French students. I also have experience studying abroad in
Montreal and France.
12 Subjects: including biology, English, reading, writing
Related Graterford, PA Tutors
Graterford, PA Accounting Tutors
Graterford, PA ACT Tutors
Graterford, PA Algebra Tutors
Graterford, PA Algebra 2 Tutors
Graterford, PA Calculus Tutors
Graterford, PA Geometry Tutors
Graterford, PA Math Tutors
Graterford, PA Prealgebra Tutors
Graterford, PA Precalculus Tutors
Graterford, PA SAT Tutors
Graterford, PA SAT Math Tutors
Graterford, PA Science Tutors
Graterford, PA Statistics Tutors
Graterford, PA Trigonometry Tutors
Nearby Cities With Science Tutor
Congo, PA Science Tutors
Creamery Science Tutors
Englesville, PA Science Tutors
Eshbach, PA Science Tutors
Fagleysville, PA Science Tutors
Gabelsville, PA Science Tutors
Gulph Mills, PA Science Tutors
Linfield, PA Science Tutors
Morysville, PA Science Tutors
Niantic, PA Science Tutors
Passmore, PA Science Tutors
Rahns, PA Science Tutors
Schultzville, PA Science Tutors
Valley Forge Science Tutors
Zieglersville, PA Science Tutors
|
{"url":"http://www.purplemath.com/Graterford_PA_Science_tutors.php","timestamp":"2014-04-18T08:51:17Z","content_type":null,"content_length":"24182","record_id":"<urn:uuid:75426adc-6dcc-458d-a0bd-e3fb5daa0c8e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
November 27th 2009, 07:30 PM #1
Dec 2008
Suppose you have a sequence of random variables $(X_n)_{n \geq 0}$. Then what does $\sup_n E[X_n]$ mean? It does not make sense to me since if $n$ is fixed then $E[X_n]$ is a constant. Is it the
same thing as $\limsup_{n \to \infty} E[X_n]$?
$E[X_n]$ is a constant,
so all you're asking is what the supremum of a numerical sequence $a_n$ is.
The limsup is the largest limit of a convergent subsequence (equivalently, the limit of the tail suprema).
So $\sup_n E[X_n] = \sup_{j \leq n} E[X_j]$?
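A concrete illustration (mine, not from the thread): since each $E[X_n]$ is a number, $\sup_n E[X_n]$ is the supremum of an ordinary numerical sequence, and it generally differs from the limsup. With $a_n = 1/n$ the supremum is 1 (attained at $n = 1$) while the limsup is 0:

```python
# a_n = E[X_n] = 1/n: the sup over all n is 1, but the limsup (which ignores
# any finite prefix of the sequence) is 0.
a = [1.0 / n for n in range(1, 1001)]
sup_a = max(a)            # supremum of the whole sequence: 1.0, attained at n = 1
tail_sup = max(a[500:])   # supremum of a late tail; shrinks toward limsup = 0
```

So the two quantities agree only for sequences whose supremum is approached in the tail.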
November 27th 2009, 08:27 PM #2
November 27th 2009, 08:51 PM #3
Dec 2008
|
{"url":"http://mathhelpforum.com/advanced-statistics/117106-expectation.html","timestamp":"2014-04-17T09:45:11Z","content_type":null,"content_length":"36693","record_id":"<urn:uuid:2462a486-7108-4402-a1e3-429678567999>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jackson Heights, NY
1 - 10 of 80
algebra tutoring/teaching jobs near Jackson Heights, NY
Wyzant Tutoring
- New York City, NY
been taught on the back drop of Abstract Algebra and Ring Theory. I desperately need ... these complex concepts. linear algebra Tutoring & Teaching opportunities available...
8 days ago from WyzAnt Tutoring
5 mi. -
Jackson Heights, NY
Job Summary: Research Tools: Similar Searches:
• NEW
Wyzant Tutoring
- Brooklyn, NY
eferably on weekends (as soon as this weekend) since I am at work all week long till 7pm. algebra 1 Tutoring & Teaching opportunities available in Brooklyn, NY starting at...
9 hours ago from WyzAnt Tutoring
10 mi. -
Jackson Heights, NY
Wyzant Tutoring
- Brooklyn, NY
I am looking for a math tutor available evenings during Spring Break.algebra 1 Tutoring & Teaching opportunities available in Brooklyn, NY starting at $25-$50/hr.
13 days ago from WyzAnt Tutoring
11 mi. -
Jackson Heights, NY
Wyzant Tutoring
- Norwood, NJ
I'm looking for a chemistry and algebra 2/trigonometry tutor for my 10th grader. algebra 2 Tutoring & Teaching opportunities available in Norwood, NJ starting at $25-$50/hr.
14 days ago from WyzAnt Tutoring
17 mi. -
Jackson Heights, NY
Wyzant Tutoring
- Sunnyside, NY
attending LAGCC, taking MAT115 - college algebra and trigonometry. I'm seeking a tutor ... who can work with me.algebra 2 Tutoring & Teaching opportunities available in...
22 days ago from WyzAnt Tutoring
2 mi. -
Jackson Heights, NY
• NEW
Wyzant Tutoring
- Bronx, NY
Hey, I am looking for a tutor to help me brush up on my college math. algebra 2 Tutoring & Teaching opportunities available in Bronx, NY starting at $25-$50/hr.
1 day ago from WyzAnt Tutoring
9 mi. -
Jackson Heights, NY
Wyzant Tutoring
- Brooklyn, NY
freshman in High school and needs help with Algebra 1 and science. I would like a tutor ... early afternoon at a local Library.algebra 1 Tutoring & Teaching opportunities available...
11 days ago from WyzAnt Tutoring
9 mi. -
Jackson Heights, NY
Wyzant Tutoring
- Brooklyn, NY
I need help in math on the subjects of: algebra, percentages, exponents and geometry. ... with a tutoring session on 4/18/2014.ASVAB Tutoring & Teaching opportunities available...
6 days ago from WyzAnt Tutoring
5 mi. -
Jackson Heights, NY
• NEW
Wyzant Tutoring
- Jersey City, NJ
1, physics, and maybe history.physics Tutoring & Teaching opportunities available in Jersey City, NJ starting at...
1 day ago from WyzAnt Tutoring
10 mi. -
Jackson Heights, NY
• NEW
Wyzant Tutoring
- Staten Island, NY
for my daughter. She is having difficulty in Algebra. She is an 8th grade student who is ... test tomorrow and needs help for it.Regents Tutoring & Teaching opportunities available...
1 day ago from WyzAnt Tutoring
13 mi. -
Jackson Heights, NY
Email jobs like this to me
Were you satisfied with these results? Yes | No
|
{"url":"http://www.simplyhired.com/search?q=algebra+tutoring%2Fteaching&l=jackson+heights%2C+ny&fsr=primary","timestamp":"2014-04-25T00:50:12Z","content_type":null,"content_length":"83678","record_id":"<urn:uuid:c9b4971b-6195-401d-bb5f-7153420cb72a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Schrodinger's Equation, special cases
Replies: 1 Last Post: Jul 9, 1996 7:54 PM
Messages: [ Previous | Next ]
Schrodinger's Equation, special cases
Posted: Jul 9, 1996 10:21 AM
Having seen the solutions of Schrodinger's Equation (time independent, not
that it really affects anything anyway) for the cases of the free
particle, particle on a line, particle in a box, etc, I was wondering if a
nice analytic solution exists for the case of a particle confined to a
two-dimensional circle or a 3-dimensional sphere? Does anyone know if this
has been solved at all? Can someone point me to a reference or sketch out
a derivation of the solution?
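For the 2D case the question asks about, a closed-form answer does exist; a sketch of the usual setup (my own summary, not from the thread): for an infinite circular well of radius $R$, separation of variables gives $\psi(r,\phi) = J_m(kr)\,e^{im\phi}$, and the boundary condition $J_m(kR) = 0$ quantizes $k$ at $k = j_{m,n}/R$, where $j_{m,n}$ is the $n$-th positive zero of the Bessel function $J_m$, so $E_{m,n} = \hbar^2 j_{m,n}^2/(2MR^2)$. The 3D ball works the same way with spherical Bessel functions. The snippet below locates the ground-state zero $j_{0,1}$ with nothing but the power series and bisection:

```python
import math

def J0(x):
    # Bessel function of the first kind, order 0, summed from its power series:
    # J0(x) = sum_k (-1)^k (x^2/4)^k / (k!)^2
    total, term = 0.0, 1.0
    for k in range(60):
        total += term
        term *= -(x * x) / (4.0 * (k + 1) ** 2)
    return total

def first_zero(f, lo, hi, tol=1e-12):
    # plain bisection; assumes f changes sign exactly once on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

j01 = first_zero(J0, 2.0, 3.0)   # first zero of J_0, ~2.404825...
```

The ground-state energy then follows as $E = \hbar^2 j_{0,1}^2 / (2MR^2)$.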
Date Subject Author
7/9/96 Schrodinger's Equation, special cases Jonathan Katz
7/9/96 Re: Schrodinger's Equation, special cases Robert Israel
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=13573","timestamp":"2014-04-16T21:57:45Z","content_type":null,"content_length":"17426","record_id":"<urn:uuid:3573912c-edb0-43d8-aa83-d31d5d6fa520>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Visualization of Tetrahedron Mesh [Archive] - OpenGL Discussion and Help Forums
10-11-2005, 06:50 AM
Hi guys, I have a problem with displaying my 3D model, which is made up of a large number of connected tetrahedrons (~100,000). I have a complete data set of each tetrahedron and the vertices that
specify each of its corners. Using this input, I can easily display the model.
But the surface shown is jagged, not smooth and pleasing to the eye at all. Does anyone know where I can get the code, or perhaps some ideas to smooth my surface?
Another thing: I am trying to achieve good lighting effects, which first requires me to calculate the normal for each face and assign this normal to each of the 3 vertices in a face. But this
is taking up a lot of CPU time, since the normals only need to be calculated for THOSE polygons at the surface.
So, does anyone have any source code or algorithms that make it possible to display the surface from a solid 3D data set comprising vertices?
I really appreciate your help~~!
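One standard way to attack both problems (a sketch, under the assumption that each tetrahedron is stored as a 4-tuple of vertex indices; the function names are mine): a triangle lies on the boundary surface iff it belongs to exactly one tetrahedron, so only those faces ever need normals.

```python
# Boundary extraction for a tetrahedral mesh: count each triangular face;
# faces seen exactly once are on the surface. Sorting the vertex indices
# gives a canonical key regardless of winding order.
from collections import Counter
from itertools import combinations

def boundary_faces(tets):
    count = Counter()
    for tet in tets:
        for face in combinations(sorted(tet), 3):  # the 4 faces of each tet
            count[face] += 1
    return [f for f, c in count.items() if c == 1]

# two tetrahedra sharing face (1, 2, 3): 6 of the 8 faces are on the boundary
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
surface = boundary_faces(tets)
```

For smooth shading, one would then accumulate each boundary face's normal into its three vertices and normalize the per-vertex sums, instead of assigning one flat normal per face.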
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-150491.html","timestamp":"2014-04-19T14:33:16Z","content_type":null,"content_length":"8974","record_id":"<urn:uuid:e2601358-e7cc-4084-ae7a-14a21e0eb821>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This is a worn and eroded crater formation. A pair of smaller craters lies along the northeastern rim, and a crater is intruding into the northwest rim. To the south is an outward projection that has
the appearance of a crater partly overlain by Joule. The remainder of the rim and inner wall is somewhat irregular. The interior floo
Found on
http://en.wikipedia.org/wiki/Joule_(crater) [programming language]
Joule is a concurrent dataflow programming language, designed for building distributed applications. It is so concurrent, that the order of statements within a block is irrelevant to the operation of
the block. Statements are executed whenever possible, based on their inputs. Everything in Joule happens by sending mes
Found on
1) Work done by the force of one newton when its point of application moves through the distance of one meter in the direction of the force. 2) One watt-second.
Found on
A metric unit of energy or work; 1 joule per second equals 1 watt or 0.737 foot-pounds; 1 Btu equals 1,055 joules.
Found on
A unit of energy in the SI system. One joule is 1 kg. m2/s2 which is also 0.2390 calorie.
Found on
A basic unit of energy. A 1 Watt transmitter radiates 1 Joule of energy every second.
Found on
The Joule (J) is the SI unit of energy. One Joule is the energy expended when a force of one newton is applied over a displacement of one meter in the direction of the force. The use of the joule is
probably limited in Radiation Protection but is used in the definition of Absorbed Dose and the Electron volt .
Found on
Unit of energy in the SI (Système International) system of units. The joule is sometimes used in photography to indicate the output of an electronic flash.
Found on
1. [n] - a unit of electrical energy equal to the work done when a current of one ampere passes through a resistance of one ohm for one second 2. [n] - English physicist who established the mechanical
theory of heat and discovered the first law of thermodynamics (1818-1889)
Found on
[pronounce: jool] The unit for measuring energy (J).
Found on
Measurement of energy. Used to define the maximum muzzle energy. The legal limit for Airsoft weapons is 1.35j. See section on the Law for more details.
Found on
The SI unit of energy is the joule. Defined as:1 joule is the work done by a force of 1 newton moving a distance of 1 metre in the direction of the force.It may also be defined in electrical terms
as:the amount of energy needed to sustain 1 amp for 1 sec in a 1-ohm resistance.
Found on
A unit of energy in the SI system. One joule is 1 kg. m2/s2 which is also 0.2390 calorie.
Found on
(J) The SI unit of energy, equal to the work required to move a 1 kg mass against an opposing force of 1 newton. 1 J = 1 kg m^2/s^2; 1 calorie = 4.184 J.
Found on
Joule (J) is the SI derived unit of energy, work, and heat. The joule is the work done when the point of application of a force of one newton is displaced a distance of one metre in the direction of
the force (J = N m). The unit is named after the British scientist James Prescott Joule (1818-1889).
Found on
Unit of energy. One joule is one watt for one second. It is the measure Of the 'kick' of a pulse. Joules are the most important measure of the power of the energiser.
Found on
J A measure of work, energy or cell capacity. For electrical energy, one Joule is one Amp at one Volt for one Second, or one WattSecond. 1 Wh = 3.6kJ. For mechanical energy one Joule is a force of
one Newton acting over one metre i.e. One newton metre.
Found on
The SI unit of energy. The release or transfer of one joule per second is one Watt, the SI derived unit of power.
Found on
http://www.theiet.org/factfiles/energy/nuclear-terms.cfm?type=pdf
Joule
[From the distinguished English physicist, James P. Joule.] A unit of work which is equal to 10^7 units of work in the C. G. S. system of units (ergs), and is practically equivalent to the energy
expended in one second by an electric current of one ampere in a resistance of one ohm.
Found on
<unit> SI unit of energy. ... 1 Joule = 1E7 ergs = 1 Watt of power occurring for one second. 1 Joule is roughly 0.001 BTU and 1 calorie is roughly 4 joules. There are 3.6 million joules in a kilowatt
hour. ... (14 Oct 1997) ...
Found on
(J) the SI unit of energy, being the work done by a force of 1 newton acting over a distance of 1 meter.
Found on
• (n.) A unit of work which is equal to 107 units of work in the C. G. S. system of units (ergs), and is practically equivalent to the energy expended in one second by an electric current of one
ampere in a resistance of one ohm. One joule is approximately equal to 0.738 foot pounds.
Found on
unit of work or energy in the International System of Units (SI); it is equal to the work done by a force of one newton acting through one metre. ... [6 related articles]
Found on
The energy required to push with a force of one Newton for one meter.
Found on
A unit of energy J such that the heat capacity of water at 15RC is 4.18 J/gRC.
Found on
No exact match found
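As a quick sanity check of the figures quoted in the definitions above (1 cal = 4.184 J, 1 J = 10^7 erg, 1 kWh = 3.6 million joules), here is a small sketch; the code and variable names are my own illustration:

```python
# Cross-checking the conversion factors quoted in the definitions above.
CAL_PER_J = 1 / 4.184      # calories per joule, ~0.2390 as several entries state
ERG_PER_J = 1e7            # 1 J = 10^7 erg in the C.G.S. system
J_PER_KWH = 3600 * 1000    # 1 kWh = 1000 W for 3600 s = 3.6 million joules

assert round(CAL_PER_J, 4) == 0.239
assert J_PER_KWH == 3_600_000
```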
|
{"url":"http://www.encyclo.co.uk/define/Joule","timestamp":"2014-04-20T18:35:01Z","content_type":null,"content_length":"36429","record_id":"<urn:uuid:35c24008-82f8-431e-a558-64021a5ea836>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can one prove complex multiplication without assuming CFT?
The Kronecker-Weber Theorem, stating that any abelian extension of $\mathbb Q$ is contained in a cyclotomic extension, is a fairly easy consequence of Artin reciprocity in class field theory (one
just identifies the ray class groups and shows that each corresponds to a cyclotomic extension). However, one can produce a more direct and elementary proof of this fact that avoids appealing to the
full generality of class field theory (see, for example, the exercises in the fourth chapter of Number Fields by Daniel Marcus). In other words, one can prove class field theory for $\mathbb Q$ using
much simpler methods than for the general case.
The theory of complex multiplication is similar to the theory of cyclotomic fields (and hence the Kronecker-Weber Theorem) in that it shows that any abelian extension of a quadratic imaginary field
is contained in an extension generated by the torsion points of an elliptic curve with complex multiplication by our field. To prove this, one normally assumes class field theory and then shows that
the field generated by the $m$-torsion (or, more specifically, the Weber function of the $m$-torsion) is the ray class field of conductor $m$.
My question is: Can one prove that any abelian extension of an imaginary quadratic field $K$ is contained in a field generated by the torsion of an elliptic curve with complex multiplication by $K$
without resorting to the general theory of class field theory? I.e. where one directly proves class field theory for $K$ by referring to the elliptic curve. Is there a proof in the style of the
exercises in Marcus's book?
Note: Obviously there is no formal formulation of what I'm asking. One way or another, you can prove complex multiplication. But the question is whether you can give a proof of complex multiplication
in a certain style.
nt.number-theory complex-multiplication elliptic-curves class-field-theory
1 Answer
Historically, Complex Multiplication precedes Class Field Theory and many of the main theorems of CM for elliptic curves were proved directly. See Algebren (3 volumes) by Weber or Cox's
book for an exposition.
Please also read Birch's article on the beginnings of Heegner points, where he points this out explicitly (page three, paragraph beginning "Complex multiplication ...").
But not all, so the answer is no (unlike what I had mistakenly presumed at first, until the comments below alerted me).
The actual history is quite complicated; see Schappacher.
1 Historically, as far as I know, the first complete proof of complex multiplication for imaginary quadratic fields was given by Takagi in 1920, as a corollary of his
class-field-theory. – user4245 Jun 22 '12 at 5:00
@SGP: Are you sure they proved that every abelian extension of a quadratic imaginary field is contained in such an extension? – David Corwin Jun 22 '12 at 19:17
@unknown(google) and Davidac897: Thanks! answer modified and reference of Schappacher added. – SGP Jun 23 '12 at 23:37
3 Takagi's thesis was on the abelian extensions of Q(i); this is also covered in Silverman-Tate, if I recall it correctly. I would assume that for general complex quadratic base fields,
the problem of class number > 1 (equivalent to the existence of the Hilbert class field) becomes a major obstacle. – Franz Lemmermeyer Jun 24 '12 at 9:48
1 @Franz Lemmermeyer: Silverman-Tate mentions the result, but doesn't prove it. I wonder if they were thinking of the whole of class field theory, or if they were thinking of a simpler
proof. – David Corwin Jun 29 '12 at 2:55
Not the answer you're looking for? Browse other questions tagged nt.number-theory complex-multiplication elliptic-curves class-field-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/100276/can-one-prove-complex-multiplication-without-assuming-cft","timestamp":"2014-04-20T13:43:07Z","content_type":null,"content_length":"59530","record_id":"<urn:uuid:3899824a-2499-4e60-aaaf-535ca0c340de>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Appendix B, Twelfth Graphic Equation: Results from the 2004 National Survey on Drug Use and Health: National Findings
Skip To Content
The incidence rate as a function of d, a, and t is equal to the ratio of two quantities. The numerator is the summation over i of w sub i times cap i sub i as a function of d, a, and t. The
denominator is the summation over i of w sub i times e sub i as a function of d, a, and t.
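Rendered as a formula (my transcription of the verbal description above, writing $I$ for the incidence rate):

```latex
I(d,a,t) \;=\; \frac{\sum_i w_i \, I_i(d,a,t)}{\sum_i w_i \, e_i(d,a,t)}
```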
Back to Appendix B, Twelfth Graphic Equation
This page was last updated on May 20, 2008.
|
{"url":"http://www.samhsa.gov/data/NSDUH/2k4NSDUH/2k4results/eqb-12alt.htm","timestamp":"2014-04-17T20:09:24Z","content_type":null,"content_length":"1859","record_id":"<urn:uuid:e4b99245-ccb9-4031-b4a5-0e78a8359586>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding sample standard deviation
March 8th 2010, 06:45 PM
Finding sample standard deviation
If a 90% confidence interval for $\sigma^2$ is reported to be (51.47, 261.90), what is the value of the sample standard deviation?
$\frac{(n-1)s^2}{\chi_{.95,n-1}^2}=51.47$ and $\frac{(n-1)s^2}{\chi_{.05,n-1}^2}=261.90$
So this looks like it should have an F-distribution with same numerator and denominator degrees of freedom, which should help me figure out what those degrees of freedom are.
I can't figure out how to do this. Or am I way off?
March 8th 2010, 09:39 PM
you need n, and then just look up either of those chi-square percentiles
then solve for s.
If, you really don't have n, just go down the tables and see which of those ratio of percentiles
gives you your number 5.0884
March 8th 2010, 09:56 PM
I really don't have n. The question typed is precisely as it is posed in the text.
Which F-Distribution table should I use? In other words, how do I infer the alpha level from my work so far? Is it just the same as for $\sigma^2$, which is .10?
March 8th 2010, 10:05 PM
no, use the chi-square table.
Let n=2,3,4,5... find the .05 and .95 percentiles and take that ratio
see when the ratio is 5.0884
(But I didn't check the math to see if 5.0884 was correct.)
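The table scan suggested above can be automated; a sketch using SciPy's chi-square quantile function (`scipy.stats.chi2.ppf`), assuming the usual equal-tailed 90% interval. Scan candidate degrees of freedom until the percentile ratio matches 261.90 / 51.47, then solve either endpoint equation for $s$:

```python
# Find n - 1 such that chi2_{.95, n-1} / chi2_{.05, n-1} = 261.90 / 51.47,
# then recover s from (n-1) s^2 / chi2_{.95, n-1} = 51.47.
from math import sqrt
from scipy.stats import chi2

target = 261.90 / 51.47     # ~5.0884, the ratio of the interval endpoints
best_df = min(range(2, 60),
              key=lambda k: abs(chi2.ppf(0.95, k) / chi2.ppf(0.05, k) - target))
s2 = 51.47 * chi2.ppf(0.95, best_df) / best_df
n, s = best_df + 1, sqrt(s2)
```

With these numbers the scan lands on df = 9 (so n = 10) and s is roughly 9.84; the other endpoint gives the same s^2, which confirms the ratio 5.0884 was the right thing to match.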
|
{"url":"http://mathhelpforum.com/advanced-statistics/132802-finding-sample-standard-deviation-print.html","timestamp":"2014-04-18T16:02:01Z","content_type":null,"content_length":"6544","record_id":"<urn:uuid:6953ec4d-ee4f-41a2-8158-bb7d25034899>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Menlo Park Geometry Tutor
Find a Menlo Park Geometry Tutor
...I have attended training sessions at Charles Armstrong School for special needs learning and adapt with all my students to find their best way of learning, be it visual, auditory, etc. I have
worked with all kinds of different learners. I was most often called as a substitute for special needs children.
19 Subjects: including geometry, reading, biology, ESL/ESOL
...I especially enjoy tutoring Chemistry and preparing students for AP Chemistry tests. I also have experience with most of the science and math AP tests. In high school, I passed both the AP
Physics tests (Mechanics and Electricity & Magnetism) with a 5 and a 4, respectfully.
29 Subjects: including geometry, chemistry, calculus, physics
...I have taught economics, operations research and finance related courses. I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong
background in statistics and econometrics.
49 Subjects: including geometry, calculus, physics, statistics
...I have taught business psychology in a European university. I tutor middle school and high school math students. I can also teach Chinese at all levels.
11 Subjects: including geometry, calculus, statistics, Chinese
...I took differential equations in college, and continued to take a more advanced calculus class above that. I am very comfortable with differential equations and feel that I could tutor it, as I
have tutored peers in my program in the past. I was a business major in college, and switched that to economics, but I've taken what would be considered a business minor.
35 Subjects: including geometry, reading, calculus, statistics
|
{"url":"http://www.purplemath.com/menlo_park_ca_geometry_tutors.php","timestamp":"2014-04-16T04:56:00Z","content_type":null,"content_length":"23937","record_id":"<urn:uuid:6f45d629-4126-4ce9-8b3d-789cc4e7737e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the semigroup of binary relations
May 31st 2011, 05:42 AM
the semigroup of binary relations
It is known that on the set of all functions $f:X\rightarrow X$ we can define the operation of composition and that the resulting structure is a semigroup called the full transformation semigroup
of $X$. It is also known that we can generalize this operation to the set of all binary relations by defining the composition of binary relations. It also results in a semigroup $\mathcal{B}(X)$,
which is however ill-behaved (as opposed to the full transformation semigroup, which is its subsemigroup). I will list some problems (some of which the full transformation semigroup shares).
1. The composition of relations is not commutative.
2. $\mathcal{B}(X)$ is not regular. The regular elements don't have a short characterization.
3. The idempotents of $\mathcal{B}(X)$ (we denote the set of all of them by $E(\mathcal{B}(X))$) seem difficult to characterize. I don't know their characterizations, but I know some exist.
Unfortunately, I don't have access to the papers. But I have made some naive attempts at characterizing the idempotents and this is what I've come up with.
a) Every idempotent must be a transitive relation.
b) Let $\mathcal{P}(X)$ be the set of all binary relations on $X$ that are pre-orders on some subset of $X$ (that is, if $x\in X$ is outside that subset, it is in the relation with nothing). Let $\mathcal{PT}(X)$ be the set of all partial transformations of $X$. Then if $x\in\mathcal{P}(X)\cup\mathcal{PT}(X)$ and $x$ is transitive, then $x$ is an idempotent. (In the case of partial transformations, being transitive is equivalent to being a projection.)
c) This is not enough to characterize $E(\mathcal{B}(X))$.
4. We can define a dual operation, which results in an isomorphic semigroup with negation being the isomorphism. However, (seemingly -- I haven't proven this) only in the case of $|X|=1$ are
those operations mutually distributive.
5. The composition doesn't work well enough on pre-orders and equivalences. The composition of two pre-orders is a pre-order iff the composition is commutative. The same is true for equivalences.
In particular, we know that every pre-order is a partial order on some equivalence classes. Unfortunately, the composition of the pre-order and the equivalence gives the partial order iff the
composition is commutative.
After this long introduction, my question is: is there a way to generalize the composition of functions to all binary relations in a better behaved way? (And why not? :-))
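A small sketch (my own illustration) of the objects discussed above, with relations represented as Python sets of pairs; here `compose(R, S)` means "first R, then S". It makes points 1 and 3b easy to probe by hand:

```python
# Binary relations on X as sets of pairs, with composition defined by
# (x, z) in compose(R, S) iff (x, y) in R and (y, z) in S for some y.
def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

# 1. Composition is not commutative:
R = {(0, 1)}
S = {(1, 2)}
assert compose(R, S) == {(0, 2)}
assert compose(S, R) == set()

# 3b. A transitive partial transformation (a projection) is an idempotent:
P = {(0, 0), (1, 0)}          # the partial map 0 -> 0, 1 -> 0
assert compose(P, P) == P
```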
June 1st 2011, 01:07 PM
I have just realized that what I wrote in 3b) makes little sense. The elements of $\mathcal{P}(X)$ are already transitive and indeed they are idempotents. I didn't lie, but I confused things a
bit. My apologies to everybody who read it.
|
{"url":"http://mathhelpforum.com/higher-math/182091-semigroup-binary-relations-print.html","timestamp":"2014-04-19T08:02:36Z","content_type":null,"content_length":"9118","record_id":"<urn:uuid:eeb8a683-4d6e-4dcb-9e76-72934f878468>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
|
math powerpoints on ratios
Author Message
BegLed01 Posted: Sunday 31st of Dec 07:27
Can anybody help me? I have a math test coming up next week and I am totally confused. I need some help specially with some problems in math powerpoints on ratios that are very tricky.
I don’t want to go to any tutorial and I would greatly appreciate any help in this area. Thanks!
From: The
upper midwest
kfir Posted: Monday 01st of Jan 20:45
You’re not alone. In fact, I had the same problem just before I discovered something very helpful . Have you heard about Algebrator? If you have not heard anything about this wonderful
piece of software yet, let me give you an idea about this software. This is a program that helps you to answer math problems and at the same time you could learn from it because it
displays a step-by-step procedure of solving the problem. It helped me in my woes in math and my grade significantly improved since I tried this. Be reminded that it does not only show
the answer but it helps you learn how to solve the problem that makes it very educational.
From: egypt
Jrobhic Posted: Tuesday 02nd of Jan 21:53
Hi , Algebrator is one superb thing! I started using it when I was in my high school. It’s been years since then, but I still use it occasionally. Mark my word for it, it will really
help you.
mopy8gen Posted: Wednesday 03rd of Jan 13:16
I would like to give it a try. Where can I find the software ?
From: Port
malhus_pitruh Posted: Wednesday 03rd of Jan 19:36
Here http://www.algebrahomework.org/quadratic-and-power-inequalities.html. Happy problem solving!
From: Girona,
|
{"url":"http://www.algebrahomework.org/algebrahomework/adding-functions/math-powerpoints-on-ratios.html","timestamp":"2014-04-19T07:28:23Z","content_type":null,"content_length":"47522","record_id":"<urn:uuid:754c5ae4-1bea-41e5-b12b-7e9a9fe39dab>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which year was Steve Walters born in?
You asked:
Which year was Steve Walters born in?
Assuming you meant
• Steve Walters (born 28 August 1965), the Australian former professional rugby league footballer of the 1980s and 90s, who at the peak of his career, considered the best hooker in the game
|
{"url":"http://www.evi.com/q/which_year_was_steve_walters_born_in","timestamp":"2014-04-20T01:05:45Z","content_type":null,"content_length":"52528","record_id":"<urn:uuid:a67ed444-e603-46e1-a28f-5584e9e07d01>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: November 2001 [00218]
[Date Index] [Thread Index] [Author Index]
Re: Re: Zero does not equal zero et al.
• To: mathgroup at smc.vnet.net
• Subject: [mg31619] Re: [mg31604] Re: Zero does not equal zero et al.
• From: David Withoff <withoff at wolfram.com>
• Date: Sun, 18 Nov 2001 06:29:01 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
> Hello,
> Well, there is definitely something wrong. Perhaps WRI can explain what is
> going on in the following notebook (v4.1, Windows).
> There are two blocks of statements; I have commented where I think the
> results are wrong -- they are certainly inconsistent. I prefer the output
> from the second block; except for the first True, it gives what I would
> expect and recommend.
Your examples are illustrations of how numerical error works,
and in particular that taking numerical approximations at the start
of a calculation cannot (except by accident) give the same result
as taking numerical approximations at the end of a calculation.
These are basic mathematical effects that have nothing to do
with Mathematica.
Consider, for example, the task of multiplying 1/3 by 2 in a system
based on arithmetic with four decimal digits. The best four-digit
approximation to 1/3 is 0.3333, and multiplication by 2 gives 0.6666.
If one instead multiplies 1/3 by 2 exactly to get 2/3, the best
four-digit approximation to that result is 0.6667. These results
differ in the last digit. In general, the results can differ by
an arbitrarily large amount.
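The four-digit arithmetic above can be mimicked with Python's decimal module; a small sketch of the same early-versus-late rounding effect (the precision setting stands in for the hypothetical four-digit system):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # work with four significant decimal digits

# Approximate early: round 1/3 to four digits first, then multiply by 2.
early = (Decimal(1) / Decimal(3)) * Decimal(2)   # 0.3333 * 2

# Approximate late: compute 2/3 exactly, rounding once at the end.
late = Decimal(2) / Decimal(3)

print(early, late)   # 0.6666 0.6667 -- they differ in the last digit
```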
Your examples are demonstrations of this effect. For example:
\[Epsilon] = 1.0 10^-16;  (* machine float *)
num = 145/247;
num2 = num + \[Epsilon];
num == num2
num^83 == num2^83
num^84 == num2^84
compares the numerical approximation of the exact result num^84
with num2^84, where the approximation is done at the beginning,
in defining num2, rather than at the end. These results will always
(except by accident) be different. Whether the results are considered
equivalent will depend on how the numerical errors work out and
on how the equivalence testing is done. It is not mathematically
possible to design these things so as to never show behaviors that
you are apparently concerned about.
The point is that these are mathematically unavoidable effects which
will always be present in some form in all systems. Furthermore, since
the underlying issues permeate essentially all of numerical computation,
anyone who uses inexact numbers really ought to take the time to
familiarize themselves, at least at some rudimentary level, with
these effects.
> Concerning === versus ==, it's very inconvenient to use === in tests
> involving symbols because it gives False too often; it is only occasionally
> appropriate.
For example?
> As things now stand, I'm forced to use it to get the results I
> need for approximate numbers. This compels me to make a tedious distinction
> in my code between purely symbolic and numerical quantities. Shouldn't
> Mathematica be able to integrate the two without putting this burden on the
> programmer?
Inexact numerical quantities have fundamentally different mathematical
characteristics than exact or symbolic quantities. Even seemingly elementary
identities, such as commutativity and associativity of addition, are not
satisfied by inexact numbers except within numerical error. If those
fundamental mathematical differences affect your calculations, then you do
indeed have to make appropriate allowances for that basic mathematics in your
programs, and there isn't anything that Mathematica or any other system can
do about it. Computers are not immune to the laws of mathematics.
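A minimal self-contained illustration of this point with ordinary IEEE-754 doubles (not Mathematica-specific): even associativity of addition holds only up to rounding error.

```python
# The two groupings round differently in binary floating point,
# so mathematically equal sums produce unequal doubles.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False
```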
> What I meant by pattern matching is that a rule such as
> x_ f[y_] :> y Sin[x] /; y == z may involve Equal. So problems with == *do*
> affect the operation of rules.
By that definition everything in Mathematica affects the pattern matcher,
since a Condition pattern can invoke anything.
> I think the results show that Mathematica's behavior has become
> schizophrenic due to pollution from the floating point side. Much valuable
> work has been done on both the numeric and symbolic aspects of Mathematica,
> but more thought needs to be given as to how the two pieces are going to fit
> together, both in terms of correctness and in terms of how programmers will
> ultimately use Mathematica.
Certainly calculation with inexact numbers is in some sense a messier
area of mathematics than calculation with exact or symbolic quantities.
That "pollution", however, is just the nature of the mathematics.
Mathematica is not going to stop doing correct mathematics just because
the mathematics doesn't work out as someone might have expected.
Dave Withoff
Wolfram Research
Abstract. We present two classical conjectures concerning the characterization of manifolds: the Bing Borsuk Conjecture asserts that every n-dimensional homogeneous ANR is a topological n-manifold, whereas the Busemann Conjecture asserts that every n-dimensional G-space is a topological n-manifold. The key objects in both cases are so-called generalized manifolds, i.e. ENR homology manifolds. We look at the history, from the early beginnings to the present day. We also list several open problems and related conjectures.

(2008) We begin with a chronology tracing the rise of symmetry concepts in physics, starting with groups and their role in relativity, and leading up to more sophisticated concepts from n-category theory, which manifest themselves in Feynman diagrams and their higher-dimensional generalizations: strings, membranes and spin foams.

There are several approaches to summarizing a mathematician's research accomplishments, and each has its advantages and disadvantages. This article is based upon a talk given at Tulane that was aimed at a fairly general audience, including faculty members in other areas and graduate students who had taken the usual entry-level courses. As such, it is meant to be relatively nontechnical and to emphasize qualitative rather than quantitative issues; in keeping with this aim, references will be given for some standard topological notions that are not normally treated in entry-level graduate courses. Since this was an hour talk, it was also not feasible to describe every single piece of published mathematical work that Terry Lawson has ever written; in particular, some papers like [42] and [50] would require lengthy digressions that are not easily related to the central themes in his main lines of research. Instead, we shall focus on some ways in which Terry's work relates to an important thread in geometric topology; namely, the passage from studying problems in a given dimension to studying problems in the next dimensions. Qualitatively speaking, there are fairly well-developed theories for very low dimensions and for all sufficiently large dimensions, but between these ranges there are some dimensions in which the answers to many fundamental

Abstract. This text reviews some state of the art and open questions on (smooth) 4-manifolds from the point of view of symplectic geometry.

(2009) Abstract. We introduce the gradient flow of the Seiberg-Witten functional on a compact, orientable Riemannian 4-manifold and show the global existence of a unique smooth solution to the flow. The flow converges uniquely in C∞ up to gauge to a critical point of the Seiberg-Witten functional.

"... This article intends to provide an introduction to the ..."
"... on the topology of higher ..."
Inverse of a rational equation with exponential components

August 19th 2010, 05:24 PM, #1
I'm having trouble coming up with the inverse function of this:
$\frac{1 + e^{-x}}{1 - e^{-x}} = y = f(x)$
I might be overcomplicating how I'm going about doing this; essentially, I ended up using exponent and logarithm laws to arrive at this:
$\ln{(1 + e^{-y})} - \ln{(1 - e^{-y})} = \ln{x}$
Any tips? I'm thinking that I probably did overcomplicate something...
Last edited by VectorRun; August 19th 2010 at 05:57 PM.

Wow, I did overcomplicate things; well, it was more of an oversight, actually. I was too hesitant to multiply both sides by the denominator since it looked like that might get messy. But yeah, that's the right way to do it: multiply by the denominator, isolate the y terms from the x terms, factor out the exponential, isolate some more, and then use the exp/log rules.
The inverse is then: $f^{-1}(x) = \ln{(x + 1)} - \ln{(x - 1)}$

August 19th 2010, 05:55 PM, #2
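The inverse derived in the thread can be sanity-checked numerically with a quick round trip in Python (the helper names here are mine):

```python
import math

def f(x):
    # f(x) = (1 + e^(-x)) / (1 - e^(-x)), defined for x != 0
    return (1 + math.exp(-x)) / (1 - math.exp(-x))

def f_inv(x):
    # Inverse derived in the thread: ln(x + 1) - ln(x - 1), valid for x > 1
    return math.log(x + 1) - math.log(x - 1)

# Round-trip check at a few positive points:
for x in [0.5, 2.0, 7.3]:
    assert math.isclose(f_inv(f(x)), x, rel_tol=1e-9)
print("inverse verified")
```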
EVERT W. BETH
Almelo, 1908 - Amsterdam, 1964
Brief scientific biography
Evert Willem Beth was born in Almelo (eastern Netherlands, near the Dutch-German border) on 7 July 1908 (so in 2008 we can mark his centenary birth anniversary). His father, H.J.E. Beth, had studied
mathematics and physics at Amsterdam University and worked as a teacher of mathematics and physics in secondary schools (VAN ULSEN, 2000); Evert studied mathematics and physics at Utrecht University,
and also studied both philosophy and psychology. His 1935 Ph.D. was in philosophy (Faculty of Arts).
In 1946, Beth became professor of Logic and Foundations of Mathematics in Amsterdam (more precisely, in 1946 he was appointed to a part-time professorship; in 1948 this became a full professorship;
it is worth noting that he was the first professor of Logic and Foundations of Mathematics in the Netherlands). Apart from two brief interruptions (in 1952 he worked as a Research Associate at the
University of California in Berkeley, with A. Tarski, and in 1957-1958 he taught as a Visiting Professor at Johns Hopkins University in Baltimore), Beth worked in Amsterdam continuously until his
death (HEYTING, 1966).
Beth was a mathematician, logician and philosopher whose work principally concerned the foundations of mathematics. His name is often remembered with reference to semantic tableaux: the tableau
method was devised independently by Beth (1955), Hintikka (1955) and Schütte (1956) and later developed by Smullyan (1968). Essentially, this important method is dual to Gentzen's natural deduction
(1934) and it is considered by many to be intuitively simple, particularly for students not acquainted with the study of logic. In fact, Gentzen's method is a systematic search for proofs in tree
form, while the tableau method is a systematic search for refutations in upside-down tree form. Beth himself underlined:
"At least three different methods of deduction are known today and are more or less currently applied in research: Hilbert-type deduction, Gentzen's natural deduction, and Gentzen's calculus of
sequents. […] In point of fact the three methods must rather be considered as different presentations of one and the same method" (Formal Methods, 1962, XII).
With the semantic tableaux, Beth explored different areas: classical logic, modal logic and intuitionistic logic; in combination with this method, Beth made a proof-theoretic variant: the deductive
tableaux (HEYTING, 1966; VAN ULSEN, 2000). More generally, Beth's main contributions to logic were the definition theorem, semantic tableaux and the Beth models. The foundation of his work was
Gentzen's extended Hauptsatz, the subformula theorem and Tarskian model theory. During the last period of his life (1960-1964), Beth tried to make his logical research subservient to a wide range of
applications, e.g. the study of language, theorem proving, mathematical heuristics and translation methods in natural languages (VAN ULSEN, 2000).
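The refutation search behind semantic tableaux can be sketched as a toy propositional prover. This is only an illustration in a modern Smullyan-style rendering, not Beth's own signed-tableau formulation; quantifier and modal rules are omitted, and the function names are mine.

```python
# Formulas: 'p' (atom), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g)

def closed(branch):
    # A branch closes when it contains both f and ('not', f).
    return any(('not', f) in branch for f in branch)

def expand(branch):
    # Apply a tableau rule to the first non-literal formula on the branch,
    # returning the list of successor branches (None if fully expanded).
    for f in branch:
        if isinstance(f, tuple):
            op = f[0]
            rest = [g for g in branch if g is not f]
            if op == 'and':
                return [rest + [f[1], f[2]]]                      # non-branching
            if op == 'or':
                return [rest + [f[1]], rest + [f[2]]]             # branching
            if op == 'imp':
                return [rest + [('not', f[1])], rest + [f[2]]]
            if op == 'not':
                g = f[1]
                if isinstance(g, tuple):
                    if g[0] == 'not':
                        return [rest + [g[1]]]
                    if g[0] == 'and':
                        return [rest + [('not', g[1])], rest + [('not', g[2])]]
                    if g[0] == 'or':
                        return [rest + [('not', g[1]), ('not', g[2])]]
                    if g[0] == 'imp':
                        return [rest + [g[1], ('not', g[2])]]
    return None  # only literals remain

def tautology(f):
    # f is valid iff the tableau for its negation closes on every branch.
    open_branches = [[('not', f)]]
    while open_branches:
        branch = open_branches.pop()
        if closed(branch):
            continue
        children = expand(branch)
        if children is None:
            return False  # a fully expanded open branch is a countermodel
        open_branches.extend(children)
    return True

print(tautology(('imp', 'p', 'p')))          # True
print(tautology(('or', 'p', ('not', 'p'))))  # True
print(tautology(('imp', 'p', 'q')))          # False
```

Each rule replaces a formula by strictly smaller subformulas, so the search always terminates; a fully expanded open branch yields a countermodel, which is exactly the duality with Gentzen-style proof search noted above.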
Beth was the main founder of the Netherlands Society for Logic and Philosophy of Science, and he was also active in the organization of the International Association for Logic and Philosophy of
Science (HEYTING, 1966). From the educational viewpoint it is worth highlighting the book published in 1955 entitled
L'enseignement des mathématiques
, by Beth, Choquet, Dieudonné, Lichnerowicz, Gattegno and Piaget: these Authors were the founders in 1950 of CIEAEM (Commission Internationale pour l'Étude et l'Amélioration de l'Enseignement des
Mathématiques) and organized many international meetings (for instance: 1950: Relations entre le programme mathématique des écoles secondaires et le développement des capacités de l'adolescent; 1951:
L'enseignement de la géométrie dans les premières classes des écoles secondaires; 1951: Le programme fonctionnel: de l'école maternelle à l'université; 1952: structures mathématiques et structures
mentales; 1953: les relations entre l'enseignement des mathématiques et les besoins de la science et de l'industrie; 1953: les rapports entre la pensée des élèves et l'enseignement des mathématiques;
1954: les mathématique modernes à l'école; 1955: l'élève face aux mathématiques. Une pédagogie qui libère).
Beth was a member of the Central Committee of the International Commission on the Teaching of Mathematics (ICMI) from 1952 to 1954. He did the greater part of the editorial work for the series
"Studies in Logic and the Philosophy of Mathematics" founded at his initiative. Beth's merits were rewarded by his election in 1953 to membership of the Royal Dutch Academy of Science and by an honorary doctorate from the University of Gent, conferred on him in 1964, when he was already too ill to travel to Gent in order to receive it (HEYTING, 1966).
Evert W. Beth died on 12 April 1964.
Quotations relevant to mathematics education
Beth's approach to research in mathematics education was very interesting and profound; let us quote Jean Piaget:
"A logician friend of mine, the late Evert W. Beth […] for a very long time […] was a strong adversary of psychology in general and the introduction of psychological observations into the field
of epistemology, and by that token an adversary of my own work, since my work was based on psychology. Nonetheless, in the interests of an intellectual confrontation, Beth did us the honour of
coming to one of our symposia on genetic epistemology and looking more closely at the questions that were concerning us. At the end of the symposium he agreed to co-author with me, in spite of
his fear of psychologists, a work that we called Mathematical and Psychological Epistemology.
[…]. In his conclusion to this volume, Beth wrote as follows:
The problem of epistemology is to explain how real human thought is capable of producing scientific knowledge. In order to do that we must establish a certain coordination between logic and
psychology. This declaration does not suggest that psychology ought to interfere directly in logic - that is of course not true - but it does maintain that in epistemology both logic and
psychology should be taken into account, since it is important to deal with both the formal aspects and the empirical aspects of human knowledge" (PIAGET, 1970, 1),
It is worth noting that Beth and Piaget made an important contribution to research in cognitive development; in their book (BETH, PIAGET, 1961) they stated that the problems posed by formalisation can in some way correspond to actual mental mechanisms. So the logico-mathematical structures leading to formalisation can be considered the point of arrival of a long genetic process.
In the "Preface" to the book (1962)
Formal Methods: an Introduction to Symbolic Logic and to the Study of Effective Operations in Arithmetic and Logic
(Table of contents of this work: I. Purely implicational logic. II. Full sentential logic. III. Theory of quantification, equality, and functionality. IV. Completeness of elementary logic. V. The
formalization of arithmetic and its limitations. VI. The theory of definition. VII. On machines which prove theorems), written in Amsterdam (October, 1961), Beth stated:
"Many philosophers have considered logical reasoning as an inborn ability of mankind and as a distinctive feature in the human mind; but we all know that the distribution of this capacity, or at
any rate its development, is very unequal. Few people are able to set up a cogent argument; others are at least able to follow a logical argument and even to detect logical fallacies.
Nevertheless, even among educated persons there are many who do not even attain this relatively modest level of development. According to my personal observations, lack of logical ability may be
due to various circumstances. In the first place, I mention lack of general intelligence, insufficient power of concentration, and absence of formal education. Secondly, however, I have noticed
that many people are unable, or sometimes rather unwilling, to argue ex hypothesi; such persons cannot, or will not, start from premises which they know or believe to be false or even from
premises whose truth is not, in their opinion, sufficiently warranted. Or, if they agree to start from such premises, they sooner or later stray away from the argument into attempts first to
settle the truth or falsehood of the premises. Presumably this attitude results either from lack of imagination or from undue moral rectitude. On the other hand, proficiency in logical reasoning
is not in itself a guarantee for a clear theoretic insight into the principles and foundations of logic. Skill in logical argumentation is the result of congenital ability combined with practice;
theoretic insight, however, can only arise from reflection and analysis" (p. X).
A meaningful educational statement is the following:
"Lack of formal education can, of course, be remedied, but hardly by the study of logic alone" (p. X).
"[The student] should become acquainted both with the semantic and with the purely formal approach to the notions, the problems, and the results of logical theory. A dogmatic attitude with
respect to the different aspects of logic will easily result if the elements of logic are taught in a narrow spirit. […] Each one-sided approach leaves part of the material more or less in the
dark. […] It should not be forgotten that later on it is extremely difficult to overcome the bad effects of a narrow-minded initiation" (p. XII).
The importance of historical aspects was frequently underlined by Beth; for instance:
"Recent discussion on the foundations of mathematics and physical science cannot be fully understood without reference to their historical and philosophical background. These discussions for the
greater part originate not merely from the results of contemporary scientific research in themselves, but rather from the incompatibility of these results with certain preconceived philosophical
doctrines" (in: Critical Epochs in the Development of the Theory of Science, The British Journal for the Philosophy of Science, 1, 1, 27).
Primary bibliography
(A complete list of Beth's works was published by J.F. STAAL (1965))
E.W. BETH 1935, Rede en aanschouwing in de wiskunde (Reason and intuition in mathematics), Dissertation, November 5 1935, Noordhoff, Groningen, Rijks Universiteit van Utrecht
E.W. BETH 1937, L'evidence intuitive dans les mathematiques modernes, Travaux du IXe Congres international de Philosophie, Paris, August 1-6 1937, VI, 161-165
E.W. BETH 1940, Inleiding tot de wijsbegeerte der wiskunde (Introduction to the philosophy of mathematics), Amsterdam, Standaard Boekhandel
E.W. BETH 1944, Geschiedenis der logica (History of logic), Servire's Encyclopaedie, Afd. Logica 37, 's-Gravenhage
E.W. BETH 1946-1947, Logical and psychological aspects in the consideration of language, Synthese, 5, 542-544
E.W. BETH 1950, Les fondements logiques des mathematiques, Paris and Louvain, Gauthier-Villars and Nauwelaerts
E.W. BETH 1950, Critical Epochs in the Development of the Theory of Science, The British Journal for the Philosophy of Science, 1, 1, 27-42
E.W. BETH 1951, Fundamental Features of Contemporary Theory of Science, The British Journal for the Philosophy of Science, 1, 4, 291-302
E.W. BETH 1953, Inleiding tot de wijsbegeerte der exacte wetenschappen (Introduction to the philosophy of science), Antwerpen, Standaard Boekhandel
E.W. BETH 1955, Remarks on natural deduction, Indagationes Mathematicae, 17, 322-325
E.W. BETH 1955, Semantic entailment and formal derivability, Mededelingen Koninklijke Nederlandse Akademie van Wetenschappen, Nieuwe Reeks, 18, 13, 309-342
J. PIAGET, E.W. BETH, J. DIEUDONNE, A. LICHNEROWICZ, G. CHOQUET, C. GATTEGNO 1955, L'enseignement des mathématiques, Paris, Delachaux et Niestlé
E.W. BETH, A. TARSKI 1956, Equilaterality as the only Primitive Notion of Euclidean Geometry, Indagationes Mathematicae, 18, 462-467
E.W. BETH 1957, La crise de la raison et la logique, Paris and Louvain, Gauthier-Villars and Nauwelaerts
E.W. BETH, W. MAYS, J. PIAGET 1957, Epistémologie génétique et recherche psychologique, Paris, Presses Universitaires de France
E.W. BETH 1958, De weg der wetenschap (The course of science). Inleiding tot de methodologie der empirische wetenschappen (Introduction to the methodology of the empirical sciences), Haarlem, Bohn
E.W. BETH 1959, The foundations of mathematics. A study in the philosophy of sciences, Studies in Logic, Amsterdam, North Holland
E.W. BETH, J. PIAGET 1961, Epistemologie mathematique et psychologie, Paris, Presses Universitaires de France
E.W. BETH 1962, Les rapports entre langues formalisees et langue naturelle, in L. Beck (edited by), La philosophie analytique, Paris, Les Editions de Minuit, 248-261
E.W. BETH 1962, Formal methods. An introduction to symbolic logic and to the study of effective operations in arithmetic and logic, Dordrecht, Reidel
E.W. BETH, J.B. GRIZE, R. MARTIN, B. MATALON, A. NAESS, J. PIAGET 1962, Implication, formalisation et logique naturelle, Paris, Presses Universitaires de France
E.W. BETH 1963, The relationship between formalised languages and natural language, Synthese, 15, 1, 1-16
E.W. BETH 1964, Door wetenschap tot wijsheid (Through science to wisdom), Assen, Van Gorcum
E.W. BETH 1965, Mathematical thought: an introduction to the philosophy of mathematics, Dordrecht, Reidel
Secondary bibliography
J.L. DESTOUCHES (edited by) 1964, E.W. Beth Memorial Colloquium: Logic and Foundations of Science, Paris, Institut Henri Poincaré, May 19-21, Dordrecht, Reidel
V.H. DYSON, G. KREISEL 1961, Analysis of Beth's semantic construction of intuitionistic logic, Technical Report 3, Applied mathematics and statistical laboratories, Stanford, Stanford University (Contract No. DA-04-200-ORD-997)
M. FRANCHELLA 2002, Evert Willem Beth's Contributions to the Philosophy of Logic, Philosophical Writings, 21, 1-23
A. HEYTING 1966, Evert Willem Beth: in memoriam, Notre Dame Journal of Formal Logic, VII, 4, 289-295
D.H.J. DE JONGH, P. VAN ULSEN 1998, Beth's nonclassical valuations, Technical Report LP-1998-12, Institute for logic, language and computation, Universiteit van Amsterdam
J. PIAGET 1966, In memory of E.W. Beth (1908-1964), Mathematical epistemology and psychology, Dordrecht, Reidel, XI-XII
J. PIAGET 1970, Genetic epistemology, New York, Columbia University Press
J.F. STAAL 1965, E.W. Beth, 1908-1964, Dialectica, 19, 158-179
A.S. TROELSTRA, P. VAN ULSEN 1999, The Discovery of E.W. Beth's Semantics for Intuitionistic Logic, in J. Gerbrandy et al., JFAK, a collection of essays dedicated to Johan van Benthem on the occasion of his 50th birthday, http://www.illc.uva.nl/j50/contribs/troelstra/index.html
P. VAN ULSEN 2000, E.W. Beth als logicus, Proefschrift, Universiteit van Amsterdam
P.J.M. VELTHUYS-BECHTHOLD 1995, Inventory of the papers of Evert Willem Beth (1908-1964), philosopher, logician and mathematician, 1920-1964 (c. 1980), incorporating the finding-aid by J.C.A.P. Ribberink and P. van Ulsen, Haarlem, Inventarisreeks Rijksarchief in Noord-Holland
Giorgio T. Bagni
Department of Mathematics and Computer Science
University of Udine - Italy
Summary: Canad. J. Math. Vol. 54 (4), 2002, pp. 673-693
Local L-Functions for Split Spinor Groups
Mahdi Asgari
Abstract. We study the local L-functions for Levi subgroups in split spinor groups defined via the Langlands-Shahidi method and prove a conjecture on their holomorphy in a half plane. These results have been used in the work of Kim and Shahidi on the functorial product for GL2 × GL3.
1 Introduction
The purpose of this work is to prove a conjecture on the holomorphy of local Langlands L-functions defined via the Langlands-Shahidi method in split spinor groups. These local factors appear in the Euler products of global automorphic L-functions, and information about their holomorphy is frequently exploited in order to prove results about the analytic properties of global objects. In particular, in a recent important work, H. Kim and F. Shahidi have used some cases of our result here in order to handle some local problems in their long-awaited result on the existence of symmetric cube cusp forms on GL2 (cf. [11], [12]).
Apart from trace formula methods, two methods have been suggested to study these factors: the Rankin-Selberg method, which uses "zeta integrals", and the Langlands-Shahidi method, which uses "Eisenstein series". Our focus in this work is on the latter [13], [16], [18], [20].
Let M be a (quasi) split connected reductive linear algebraic group defined over
Relativity and Cosmology
1209 Submissions
[17] viXra:1209.0113 [pdf] submitted on 2012-09-30 15:49:59
The Analysis of Harold White Applied to the Natario Warp Drive Spacetime: From 10 Times the Mass of the Universe to the Mass of Mount Everest
Authors: Fernando Loup
Comments: 14 pages. This will be submitted for appreciation by NASA scientists
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive, discovered in 1994, and the Natario warp drive, discovered in 2001. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress-energy-momentum tensor is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge amount of negative energy needed to sustain the warp bubble. Ford and Pfenning computed the amount of negative energy needed to maintain an Alcubierre warp drive and arrived at the result of 10 times the mass of the entire Universe for a stable warp drive configuration, rendering the warp drive impossible. However, Harold White, manipulating the parameter @ in the original shape function that defines the Alcubierre spacetime, demonstrated that it is possible to lower these energy-density requirements. We repeat here the Harold White analysis for the Natario spacetime and arrive at similar conclusions. From 10 times the mass of the Universe, we also manipulated the parameter @ in the original shape function that defines the Natario spacetime and arrived at a result of 10 billion tons of negative mass to maintain a warp drive moving with a speed 200 times faster than light. Our result is still a huge number, about the mass of Mount Everest, but at least it is better than the original Ford-Pfenning result of 10 times the mass of the Universe. The main reason for this work is to demonstrate that Harold White's point of view is entirely correct.
Category: Relativity and Cosmology
[16] viXra:1209.0111 [pdf] submitted on 2012-09-30 06:54:46
Electromagnetic and Gravitational Pictures of the World
Authors: Sergey G. Fedosin
Comments: 14 Pages. Apeiron, 2007, Vol. 14, No. 4, P. 385 – 413.
A review of the theory of the electromagnetic field, together with the special and general theories of relativity, has been made. A similar theory of gravitation is presented which has the property of Lorentz invariance in its own representation, in which information is transferred at the speed of propagation of the gravitational field. The specified gravitation theory is generalized to noninertial reference systems with the help of the mathematical apparatus of general relativity. This makes it possible to avoid some drawbacks of the standard theory of general relativity and to expand its applicability. The possibility of complementary descriptions of physical phenomena through the simultaneous use of the theories of gravitational and electromagnetic fields is shown.
Category: Relativity and Cosmology
[15] viXra:1209.0110 [pdf] submitted on 2012-09-30 06:18:15
The Theory of Infinite Hierarchical Nesting of Matter as the Source of New Ideas
Authors: Sergey G Fedosin
Comments: 11 Pages. FQXi 2012 Essay Contest "Which of Our Basic Physical Assumptions Are Wrong?"
With the help of the theory of infinite hierarchical nesting of matter, the need for change in the theoretical foundations of the scientific world outlook is derived in several areas: in philosophy; in the logic of thinking; in systems theory; in cosmology; in the interrelation of matter levels; in describing the properties of matter carriers and the laws of their interaction; in the theory of gravitation; in the analysis of the origin of mass; in the theory of relativity; in the theory of elementary particles; in thermodynamics; and in other fields of knowledge. Possible ways of overcoming the difficulties and challenges existing in a number of modern physical theories are described.
Category: Relativity and Cosmology
[14] viXra:1209.0105 [pdf] submitted on 2012-09-28 10:52:47
Book Review: "Fundamental Questions of Practical Cosmology", by Baryshev and Teerikorpi
Authors: M. Lopez-Corredoira
Comments: 3 Pages. invited "book review" published in: Journal of Cosmology, 18, VI (2012)
One might well question the need for a new reference work on cosmology, given that plenty of them have already been published. In my opinion, there is a need to offer a complete overview of the
subject taking on board a wider range of opinions than is at present the case with typical reference works on cosmology. A student can easily get the impression that everything is now well known, and
that only a few minor conceptual details or finer measurements of parameters remain to be worked out. Only standard viewpoints are usually offered in some detail, all others being entirely neglected.
There are also books by heterodox authors who present their own alternative theories, but most of these are sectarian, aiming to press home a particular view, rather than representing all current
knowledge of the subject. There is a dearth of books on cosmology with well-balanced content that present the fundamentals and current knowledge of this area with rigor while at the same time
covering controversial observations and discussions that might cast doubt on the science which is being built. This book stands out as an attempt to cover both goals. It offers "Fundamental
Questions" rather than "definitive answers".
Category: Relativity and Cosmology
[13] viXra:1209.0104 [pdf] submitted on 2012-09-28 10:57:40
Peaks in the CMBR Power Spectrum. I. Mathematical Analysis of the Associated Real Space Features
Authors: M. Lopez-Corredoira, A. Gabrielli
Comments: 23 Pages. accepted to be published in Physica A
The purpose of our study is to understand the mathematical origin in real space of modulated and damped sinusoidal peaks observed in cosmic microwave background radiation anisotropies. We use the
theory of the Fourier transform to connect localized features of the two-point correlation function in real space to oscillations in the power spectrum. We also illustrate analytically and by means
of Monte Carlo simulations the angular correlation function for distributions of filled disks with fixed or variable radii capable of generating oscillations in the power spectrum. While the power
spectrum shows repeated information in the form of multiple peaks and oscillations, the angular correlation function offers a more compact presentation that condenses all the information of the
multiple peaks into a localized real space feature. We have seen that oscillations in the power spectrum arise when there is a discontinuity in a given derivative of the angular correlation function
at a given angular distance. These kinds of discontinuities do not need to be abrupt in an infinitesimal range of angular distances but may also be smooth, and can be generated by simply distributing
excesses of antenna temperature in filled disks of fixed or variable radii on the sky, provided that there is a non-null minimum radius and/or the maximum radius is constrained.
Category: Relativity and Cosmology
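The Fourier mechanism described in the abstract above, that a localized real-space feature produces oscillations in the power spectrum, can be illustrated in one dimension (a hedged sketch of the generic effect, not the paper's calculation):

```python
import numpy as np

# A localized top-hat "disk" profile of fixed radius, and its power
# spectrum: the sinc-like lobes are the oscillations the paper analyses.
x = np.linspace(-10.0, 10.0, 2048)
f = (np.abs(x) < 1.0).astype(float)    # localized feature of fixed radius
power = np.abs(np.fft.fft(f)) ** 2     # its power spectrum

# Count local maxima among the low-frequency bins: many lobes means an
# oscillating spectrum arising from a single compact real-space feature.
low = power[:200]
n_peaks = sum(
    1 for i in range(1, 199) if low[i] > low[i - 1] and low[i] > low[i + 1]
)
```

The compactness of the real-space feature is what spreads the information across many spectral peaks, which is the paper's point about the correlation function being the more condensed representation.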
[12] viXra:1209.0089 [pdf] submitted on 2012-09-25 22:58:29
[11] viXra:1209.0086 [pdf] submitted on 2012-09-24 12:10:08
Propagation of Light in the Gravitational Field in the Context of the Theory of Gravitational Relativity I
Authors: Akindele O. Adekugbe Joseph
Comments: 37 Pages. A paper from volume one of the Fundamental Theory... (monograph)
Propagation of light in the gravitational field is studied in the context of the theory of gravitational relativity (TGR). A local gravitational red shift relation is derived as a consequence of the
gravitational time dilation formula in the context of TGR. A light ray emitted at a given position in a gravitational field, or a light ray emitted elsewhere, which is momentarily passing through the
given position, suffers local gravitational red shift at that position. And when the light ray has propagated to another position, it suffers the local gravitational red shift at the new position.
The gravitational red shift as a light ray passes through two positions of different gravitational potentials is inferred from the local gravitational red shifts at the two positions. The invalidity
of the underlying assumptions of two theories of gravitational red shift encompassed by the general theory of relativity (GR) is demonstrated, which invalidates those theories, while the theory
of gravitational red shift in the context of TGR is upheld as the valid theory. The prediction of gravitational red shift for a terrestrial light ray in the context of TGR is in agreement with the result of
the Pound, Rebka and Snider experiment (PRS) to within 99.94% accuracy. Although the prediction of Einstein's theory of gravitational red shift of light (in GR) is in agreement with the result of
this experiment to within the same accuracy of 99.94%, it is shown that this does not imply the validity of that theory.
Category: Relativity and Cosmology
[10] viXra:1209.0073 [pdf] replaced on 2012-11-01 10:52:13
Ford-Pfenning Quantum Inequalities(QI) in the Natario Warp Drive Spacetime using the Planck Length Scale
Authors: Fernando Loup
Comments: 15 Pages.
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the
Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy
conditions, because the stress-energy-momentum tensor (the right side of the Einstein Field Equations) for the Einstein tensor $G_{00}$ is negative, implying a negative energy density. While from a
classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre
himself. The major drawback concerning negative energies for the warp drive is the so-called Quantum Inequalities (QI), which restrict the time we can observe the negative energy density. This time is
known as the sampling time. Ford and Pfenning computed the QI for the Alcubierre warp drive using a Planck-length-scale shape function and concluded that the negative energy in the Alcubierre warp
drive can only exist for a sampling time of approximately $10^{-33}$ seconds, rendering the warp drive impossible for an interstellar trip: for example, reaching a star $20$ light years away at a
speed of $200$ times faster than light would require months, not $10^{-33}$ seconds. We repeated the QI analysis of Ford and Pfenning for the Natario warp drive using the same
Planck length scale but with a shape function that, although different from the one chosen by Ford and Pfenning, obeys the Natario requirements; because the Natario warp drive has a very
different distribution of negative energy than its Alcubierre counterpart, this affects the QI analysis. We arrived at a sampling time that can last longer than $10^{-33}$ seconds, enough to
sustain a warp bubble for the interstellar trip mentioned above. We also computed the total negative energy requirements for the Natario warp drive and arrived at a comfortable result. This leads
us to conclude that the Natario warp drive is a valid solution of the Einstein Field Equations of General Relativity, physically accessible for interstellar spaceflight. We also discuss Horizons and
infinite Doppler blueshifts.
Category: Relativity and Cosmology
[9] viXra:1209.0053 [pdf] submitted on 2012-09-17 13:56:29
The Accurate Theoretical Calculation of the Percentages of Dark Energy, Dark Matter, and Baryonic Matter
Authors: Ding-Yu Chung
Comments: 24 Pages.
The theoretically calculated percentages of dark energy, dark matter, and baryonic matter are 72.8, 22.7, and 4.53, respectively, in agreement with the observed 72.8, 22.7, and 4.56, respectively.
According to the calculation, dark energy started 4.47 billion years ago, in agreement with the observed 4.71 +/- 0.98 billion years ago. The calculation is based on the unified theory of physics
derived from the zero-energy universe and the space-object structures. In this model, the maximum percentage of variable dark energy is 75%, and the ratio of dark matter to baryonic matter is 5 to 1.
For our universe, the zero-energy universe produced the symmetrical positive-energy and negative-energy universes, which then underwent a symmetry breaking through the Higgs mechanism to generate
eventually our universe of baryonic-dark matter with massless particles and the parallel universe of dark energy without massless particles, respectively. The further symmetry breaking through the
Higgs mechanism differentiated baryon matter with massless electromagnetism and dark matter without massless electromagnetism.
Category: Relativity and Cosmology
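The figures quoted in the abstract above can be checked for internal consistency (a quick arithmetic illustration, not part of the paper): the three percentages should total roughly 100% and the dark-matter-to-baryonic-matter ratio should be close to the stated 5 to 1.

```python
# Consistency check of the abstract's stated percentages.
dark_energy, dark_matter, baryonic = 72.8, 22.7, 4.53
total = dark_energy + dark_matter + baryonic   # should be ~100
ratio = dark_matter / baryonic                 # should be ~5
```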
[8] viXra:1209.0049 [pdf] replaced on 2012-10-15 13:09:30
The Ford-Pfenning Quantum Inequalities(qi) Analysis Applied to the Natario Warp Drive Spacetime
Authors: Fernando Loup
Comments: 16 pages
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the
Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy
conditions, because the stress-energy-momentum tensor (the right side of the Einstein Field Equations) for the Einstein tensor $G_{00}$ is negative, implying a negative energy density. While from a
classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre
himself. The major drawback concerning negative energies for the warp drive is the so-called Quantum Inequalities (QI), which restrict the time we can observe the negative energy density. This time is
known as the sampling time. Ford and Pfenning computed the QI for the Alcubierre warp drive and concluded that the negative energy in the Alcubierre warp drive can only exist for a sampling time of
approximately $10^{-10}$ seconds, rendering the warp drive impossible for an interstellar trip: for example, reaching a star $20$ light years away at a speed of $200$ times faster than light would
require months, not $10^{-10}$ seconds. We repeated the QI analysis of Ford and Pfenning for the Natario warp drive, and because the Natario warp drive has a very different
distribution of negative energy than its Alcubierre counterpart, this affects the QI analysis. We arrived at a sampling time that can last longer than $10^{-10}$ seconds, enough to sustain a
warp bubble for the interstellar trip mentioned above. We also computed the total negative energy requirements for the Natario warp drive and arrived at a comfortable result. This leads us to
conclude that the Natario warp drive is a valid solution of the Einstein Field Equations of General Relativity, physically accessible for interstellar spaceflight. We also discuss Horizons and
infinite Doppler blueshifts.
Category: Relativity and Cosmology
[7] viXra:1209.0033 [pdf] replaced on 2013-06-15 03:59:57
Consciousness, the Laws of Physics, the Big Bang, and the Structure of the Universe
Authors: Jeffrey S. Keen
Comments: 22 Pages + 18 Pages of Appendices, 25 Figures, 2 Tables
In general, science accepts that both the structure of the universe and the laws of physics were created simultaneously at the beginning of the Big Bang, and have since remained constant throughout
the observable universe. The experimental evidence detailed in this paper demonstrates that consciousness, in its widest sense, was also created at the time of the big bang, and together with the
storage, communication, and perception of information, is intimately connected to the structure of the universe and the laws of physics. The introduction of this concept of consciousness helps to
explain the “weird” effects of quantum physics and hitherto other inexplicable phenomena including entanglement, dark energy, inflation theory, the universe’s expansion against gravity, and the
possible connection between the essence of consciousness and the Higgs field. This paper augments existing knowledge with published findings from recent mind science experiments. Using noetics and
without the use of physical equipment, equations are found with high correlation coefficients which include universal constants such as the Golden Ratio, 1.61803... (φ), Feigenbaum’s Constant,
4.6692... (δ), sine 1/3, and polyhedra angles. It is thus demonstrated that both consciousness and the structure of the universe are closely linked to concepts including: universal constants,
geometry, chaos theory, numbers and mathematics, vortices, fractal geometry, interaction of mind and matter, multi-body interactions, entanglement and information. General acceptance of the above
facts should lead to a monumental paradigm shift in mankind’s understanding of the cosmos and its incorporation of consciousness. Key words: mind; consciousness; spirals; torus; vorticity; chaos;
Feigenbaum’s constant; golden ratio; gravity; electromagnetism; entanglement; cosmic and subtle energies; Planck level; Higgs field; structure of the universe
Category: Relativity and Cosmology
[6] viXra:1209.0027 [pdf] submitted on 2012-09-10 11:52:58
Disposing Classical Field Theory
Authors: Hans Detlef Hüttenbach
Comments: 21 pages
This article is about the concept of mass and electric charge: When the fundamental relativistic equation E^2 = m^2 c^4 + |p|^2 c^2 is solved in the complex, this inevitably leads to an irreducible
representation of the extended Lorentz group as U(4) operating on the complex Clifford algebra Cl(1,3) in which mass is a complex 4x4-spinor. Spinors are a direct consequence of taking the root of
the Minkowski square distance. Doing so with the Minkowski square of differentials then gives a spinor-valued differential form. With that, classical electrodynamics is shown to be extendable into a
relativistically invariant theory, in fact the simplest possible relativistically invariant one. Its symmetries reveal a unified concept of classical charge and mass. A dynamical system based on
this, splits into the direct sum of a dynamical system of pure electromagnetic charges and one of purely neutral particles. In it, charged particles must be fermionic in order to conserve their net
charge, and neutral non-magnetic ones are bosonic in order to be able to assign to them a positive mass. Also, it will be seen that within the Clifford algebra, the Hamiltonian of a self-interacting
mechanical dynamical system of particles can be given in a closed form. I end the paper with a section on superconductivity, where it is shown that superconducting material should electromagnetically
behave as opaque, dark matter.
Category: Relativity and Cosmology
[5] viXra:1209.0024 [pdf] submitted on 2012-09-08 19:23:31
On The Electromagnetic Basis For Gravity
Authors: A. Laidlaw
Comments: 12 Pages. Apeiron Vol 11, no. 3 July 2004
The relationships between two alternative theories of gravity, the "physicalist", Electromagnetics-based "Polarisable Vacuum" theory of Puthoff and Dicke, and Yilmaz's "phenomenological" variation
of the General Theory of Relativity, are explored by virtue of a simple physical model based in the application of Newtonian mechanics to propagative systems. A particular virtue of the physical
model is that, by introducing distributed source terms, it anticipates nonlocal relationships between observables within the framework of local realism.
Category: Relativity and Cosmology
[4] viXra:1209.0023 [pdf] submitted on 2012-09-08 19:28:15
Relativity and the Luminal Structure of Matter
Authors: A. Laidlaw
Comments: 16 Pages. submitted to Physics Essays
Special Relativity implies definite structural constraints on the massive particles. It is shown from the basic physics of luminal waves of any kind that multi-component wave systems conform to the
usual relativistic mechanics for massive particles, suggesting further consideration of luminal wave soliton models. The usual length contraction and time dilation phenomena are found in an important
subset of such models, leading to the conclusion that internal movements referred to the comoving frame will be luminal in any Lorentz Invariant particle model.
Category: Relativity and Cosmology
[3] viXra:1209.0013 [pdf] submitted on 2012-09-04 14:59:49
The Transformation of the Power and the Constant Acceleration in the 2-Dimension Inertial System
Authors: sangwha Yi
Comments: 8 Pages.
In the special theory of relativity, the acceleration of an accelerated body measured in one 2-dimensional inertial coordinate system and the acceleration of the same body measured in another
2-dimensional inertial coordinate system are the same. Using this fact, we derive the transformation of the power. Furthermore, if the acceleration is constant, then the acceleration in both
2-dimensional inertial coordinate systems is the constant acceleration.
Category: Relativity and Cosmology
[2] viXra:1209.0004 [pdf] submitted on 2012-09-02 06:39:06
Maximum Force Derived from Special Relativity, the Equivalence Principle and the Inverse Square Law
Authors: Richard J. Benish
Comments: 19 Pages.
Based on the work of Jacobson [1] and Gibbons, [2] Schiller [3] has shown not only that a maximum force follows from general relativity, but that general relativity can be derived from the principle
of maximum force. In the present paper an alternative derivation of maximum force is given. Inspired by the equivalence principle, the approach is based on a modification of the well known special
relativity equation for the velocity acquired from uniform proper acceleration. Though in Schiller's derivation the existence of gravitational horizons plays a key role, in the present derivation
this is not the case. In fact, though the kinematic equation that we start with does exhibit a horizon, it is not carried over to its gravitational counterpart. A few of the geometrical consequences
and physical implications of this result are discussed.
Category: Relativity and Cosmology
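The "well known special relativity equation for the velocity acquired from uniform proper acceleration" referenced in the abstract above is presumably the standard result v(t) = at / sqrt(1 + (at/c)^2), whose saturation at c is the kinematic horizon mentioned. A minimal sketch under that assumption:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def velocity(a, t, c=C):
    """Coordinate velocity after coordinate time t under uniform proper
    acceleration a: v = a*t / sqrt(1 + (a*t/c)^2), which never reaches c."""
    return a * t / math.sqrt(1.0 + (a * t / c) ** 2)

v = velocity(9.81, 3.154e7)  # roughly one year of coordinate time at 1 g
```

However long the acceleration continues, v stays strictly below c, which is the horizon-like behavior of the kinematic equation.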
[1] viXra:1209.0003 [pdf] submitted on 2012-09-01 12:53:02
The Complete Doppler Formula: Return to the Origin
Authors: Albert Zotkin
Comments: 2 Pages.
A complete Doppler formula is deduced from first principles; its predictions can be set against those of the famous relativistic formula.
Category: Relativity and Cosmology
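For reference, the "famous relativistic" formula against which the paper's result would be compared is presumably the standard longitudinal relativistic Doppler shift, f_obs = f_src * sqrt((1 - beta)/(1 + beta)) for a source receding at speed beta = v/c (a sketch under that assumption):

```python
import math

def relativistic_doppler(f_src, beta):
    """Observed frequency for a source receding at beta = v/c, using the
    standard longitudinal relativistic Doppler formula."""
    return f_src * math.sqrt((1.0 - beta) / (1.0 + beta))

shifted = relativistic_doppler(100.0, 0.5)  # a receding source is redshifted
```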
Material Results
This is a supplement to the author’s Introduction to Real Analysis. It has been judged to meet the evaluation criteria set...
Material Type: Open Textbook
Author: William Trench
Date Added: Jan 15, 2014
Date Modified: Mar 12, 2014
This is a text for a two-term course in introductory real analysis for junior or senior mathematics majors and science...
Material Type: Open Textbook
Author: William Trench
Date Added: Jan 25, 2011
Date Modified: Mar 12, 2014
This is a supplement to the author’s Introduction to Real Analysis. It has been judged to meet the evaluation criteria set by...
Material Type: Open Textbook
Author: William Trench
Date Added: Jan 15, 2014
Date Modified: Mar 12, 2014
I have taught the beginning graduate course in real variables and functional analysis three times in the last five years,...
Material Type: Open Textbook
Author: Shlomo Sternberg
Date Added: Feb 02, 2011
Date Modified: Feb 02, 2011
[Numpy-discussion] non-uniform discrete sampling with given probabilities (w/ and w/o replacement)
Christopher Jordan-Squire cjordan1@uw....
Wed Aug 31 14:17:04 CDT 2011
On Wed, Aug 31, 2011 at 2:07 PM, Olivier Delalleau <shish@keba.be> wrote:
> You can use:
> 1 + numpy.argmax(numpy.random.multinomial(1, [0.1, 0.2, 0.7]))
> For your "real" application you'll probably want to use a value >1 for the
> first parameter (equal to your sample size), instead of calling it multiple
> times.
> -=- Olivier
Thanks. Warren (Weckesser) mentioned this possibility to me yesterday
and I forgot to put it in my post. I assume you mean something like
x = np.arange(3)
y = np.random.multinomial(30, [0.1,0.2,0.7])
z = np.repeat(x, y)
That look right?
-Chris JS
> 2011/8/31 Christopher Jordan-Squire <cjordan1@uw.edu>
>> In numpy, is there a way of generating a random integer in a specified
>> range where the integers in that range have given probabilities? So,
>> for example, generating a random integer between 1 and 3 with
>> probabilities [0.1, 0.2, 0.7] for the three integers?
>> I'd like to know how to do this without replacement, as well. If the
>> probabilities are uniform, there are a number of ways, including just
>> shuffling the data and taking the first however-many elements of the
>> shuffle. But this doesn't apply with non-uniform probabilities.
>> Similarly, one could try arbitrary-sampling-method X (such as
>> inverse-cdf sampling) and then rejecting repeats. But that is clearly
>> sub-optimal if the number of samples desired is near the same order of
>> magnitude as the total population, or if the probabilities are very
>> skewed. (E.g. a weighted sample of size 2 without replacement from
>> [0,1,2] with probabilities [0.999,.00005, 0.00005] will take a long
>> time if you just sample repeatedly until you have two distinct
>> samples.)
>> I know parts of what I want can be done in scipy.statistics using a
>> discrete_rv or with the python standard library's random package. I
>> would much prefer to do it only using numpy because the eventual
>> application shouldn't have a scipy dependency and should use the same
>> random seed as numpy.random.
>> (For more background, what I want is to create a function like sample
>> in R, where I can give it an array-like of doo-hickeys and another
>> array-like of probabilities associated with each doo-hickey, and then
>> generate a random sample of doo-hickeys with those probabilities. One
>> step for that is generating ints, to use as indices, with the same
>> probabilities. I'd like a version of this to be in numpy/scipy, but it
>> doesn't really belong in scipy since it doesn't
>> -Chris JS
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
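The multinomial-then-repeat idiom discussed in the reply above can be written out as follows (a sketch of the thread's suggestion; note that later NumPy versions added numpy.random.choice, which accepts a p= array of probabilities and a replace= flag and covers both the with- and without-replacement cases directly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 30 weighted samples of the integers 1..3 with replacement:
# one multinomial draw gives per-category counts, np.repeat expands the
# counts into individual samples, and a permutation shuffles them so the
# draws are not grouped by value.
probs = [0.1, 0.2, 0.7]
counts = rng.multinomial(30, probs)
sample = rng.permutation(np.repeat(np.arange(1, 4), counts))
```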
1. Consider The GBN Protocol With A Sender Window ... | Chegg.com
1. Consider the GBN protocol with a sender window size of 3 and a sequence number range of 1024. Suppose that at time t, the next in-order packet that the receiver is expecting has a sequence
number of k. Assume that the medium does not reorder messages. Answer the following questions:
a) What are the possible sets of sequence numbers inside the sender's window at time t? Justify your answer.
b) What are all the possible values of the ACK field in all possible messages currently propagating back to the sender at time t? Justify your answer.
Computer Science
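One common line of reasoning for part (a), sketched numerically below under the textbook GBN assumptions (an illustration, not a verified answer key): since the receiver expects packet k, every packet below k has already been received and ACKed at the receiver, so the sender's base can trail k by at most the window size N, giving N + 1 candidate windows.

```python
# Enumerate the possible sender windows (mod the sequence-number space).
N, M = 3, 1024          # window size, sequence number range
k = 100                 # receiver's next expected sequence number (example)

# The sender base b satisfies k - N <= b <= k, so there are N + 1 windows:
windows = [[(b + j) % M for j in range(N)] for b in range(k - N, k + 1)]
# -> [[97, 98, 99], [98, 99, 100], [99, 100, 101], [100, 101, 102]]
```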
Summary: Calculation of properties of crystalline lithium hydride using correlated wave function theory
S. J. Nolan (1), M. J. Gillan (2,3), D. Alfè (2,3,4), N. L. Allan (1), and F. R. Manby (1)
(1) Centre for Computational Chemistry, School of Chemistry, University of Bristol, Bristol BS8 1TS, United Kingdom
(2) London Centre for Nanotechnology, UCL, London WC1H 0AH, United Kingdom
(3) Department of Physics and Astronomy, UCL, London WC1E 6BT, United Kingdom
(4) Department of Earth Sciences, UCL, London WC1E 6BT, United Kingdom
Received 2 July 2009; revised manuscript received 9 September 2009; published 5 October 2009
The lattice parameter, bulk modulus, and cohesive energy of lithium hydride are calculated to very high
accuracy through a combination of periodic and finite-cluster electronic structure calculations. The Hartree-
Fock contributions are taken from earlier work in which plane-wave calculations were corrected for pseudo-
potential errors. Molecular electronic structure calculations on finite clusters are then used to compute the
correlation contributions and finite-size effects are removed through the hierarchical scheme. The systematic
improvability of the molecular electronic structure methods makes it possible to converge the static cohesive
energy to within a few tenths of a millihartree. Zero-point energy contributions are determined from density
functional theory phonon frequencies. All calculated properties of lithium hydride and deuteride agree with
empirical observations to within experimental uncertainty.
DOI: 10.1103/PhysRevB.80.165109 PACS numbers: 71.15.Nc, 31.15.bw, 61.50.Lt
Computational studies of crystalline solids are dominated by density functional theory (DFT; see, for example, Ref. 1), ...
Orthogonal Sets and Polar Methods in Linear Algebra: Applications to Matrix Calculations, Systems of Equations, Inequalities, and Linear Programming
Author(s): Enrique Castillo, Angel Cobo, Francisco Jubete, Rosa Eva Pruneda
Published Online: 24 OCT 2011 09:09AM EST
Print ISBN: 9780471328896
Online ISBN: 9781118032893
DOI: 10.1002/9781118032893
A unique, applied approach to problem solving in linear algebra
Departing from the standard methods of analysis, this unique book presents methodologies and algorithms based on the concept of orthogonality and demonstrates their application to both standard and
novel problems in linear algebra. Covering basic theory of linear systems, linear inequalities, and linear programming, it focuses on elegant, computationally simple solutions to real-world physical,
economic, and engineering problems. The authors clearly explain the reasons behind the analysis of different structures and concepts and use numerous illustrative examples to correlate the
mathematical models to the reality they represent. Readers are given precise guidelines for:
* Checking the equivalence of two systems
* Solving a system in certain selected variables
* Modifying systems of equations
* Solving linear systems of inequalities
* Using the new exterior point method
* Modifying a linear programming problem
With few prerequisites, but with plenty of figures and tables, end-of-chapter exercises as well as Java and Mathematica programs available from the authors' Web site, this is an invaluable
text/reference for mathematicians, engineers, applied scientists, and graduate students in mathematics.
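One of the tasks listed above, checking the equivalence of two linear systems, can be sketched with a simple rank test (an illustration only; the book develops its own orthogonality-based methods, and the function and variable names here are ours). Two consistent systems A x = b and C x = d have the same solution set exactly when the row space of each augmented matrix contains the other, i.e. stacking them raises neither rank:

```python
import numpy as np

def equivalent(A, b, C, d):
    """True iff A x = b and C x = d have the same solution set,
    tested by comparing ranks of the stacked augmented matrices."""
    Ab = np.column_stack([A, b])
    Cd = np.column_stack([C, d])
    r = np.linalg.matrix_rank(np.vstack([Ab, Cd]))
    return r == np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(Cd)

A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([3.0, 1.0])
C = np.array([[2.0, 2.0], [1.0, 0.0]])   # rows are combinations of A's rows
d = np.array([6.0, 2.0])
same = equivalent(A, b, C, d)            # both systems solve to x=2, y=1
```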
Santa Clara, CA Calculus Tutor
Find a Santa Clara, CA Calculus Tutor
...I was a business major in college, and switched that to economics, but I've taken what would be considered a business minor. Also I ran my own marketing business for two summers. I tutored
business informally at the college level.
35 Subjects: including calculus, reading, statistics, geometry
...I am comfortable with ASP.NET, C/C++/C#, Java, PHP, JavaScript, HTML, CSS. I have university background in Computer Science. I can teach fundamentals, including data structures, algorithms,
object-oriented programming, classical inheritance, as well as specific languages and/or technologies.
22 Subjects: including calculus, chemistry, physics, geometry
...My teaching philosophy is to help the students understand where the formulas come from. By doing this, the students won't just have to memorize formulas, they will be able to derive them if
they need to. I am extremely patient with them, and I will explain it in as many times and different ways I need to until the idea is clear.
9 Subjects: including calculus, geometry, algebra 1, algebra 2
...My specialty is in Microeconomics, but I am very familiar with all the major aspects of free-market economic theory, including Macroeconomics, Econometrics, Money & Banking and International
Economics. I have strong Financial background/experience: I am a Chartered Financial Analyst (Level I), I...
22 Subjects: including calculus, geometry, accounting, statistics
...During my many years of work in the industry as an engineer and also computer software developer, I was actively involved in C programming, UNIX core/kernel system software and device drivers,
X server and X Windows, and other software and firmware design and development. I've also worked for ma...
9 Subjects: including calculus, physics, geometry, ASVAB
Algorithm selection as a bandit problem with unbounded losses
- In DCAI 2008 — International Symposium on Distributed Computing and Artificial Intelligence, Advances in Soft Computing, 2008
Cited by 2 (2 self)
Summary. In recent work we have developed an online algorithm selection technique, in which a model of algorithm performance is learned incrementally while being used. The resulting
exploration-exploitation trade-off is solved as a bandit problem. The candidate solvers are run in parallel on a single machine, as an algorithm portfolio, and computation time is shared among them
according to their expected performances. In this paper, we extend our technique to the more interesting and practical case of multiple CPUs. 1
- In , 2009
"... Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no
progress is observed. Alternatively, one can start multiple instances of a local search algorithm, and allocate c ..."
Cited by 1 (0 self)
Add to MetaCart
Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no progress is
observed. Alternatively, one can start multiple instances of a local search algorithm, and allocate computational resources (in particular, processing time) to the instances depending on their
behavior. Hence, a multi-start strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose
multi-start strategies motivated by works on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each
algorithm instance by supposing a convergence rate of the local search algorithm up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum
for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target
function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous
Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice, and, in all cases studied, need only
logarithmically more evaluations of the target function as opposed to the theoretically suggested quadratic increase. 1.
, 2011
"... Algorithm portfolio selection as a bandit problem ..."
, 2010
"... Abstract Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-Armed Bandit (MAB)
paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitn ..."
Add to MetaCart
Abstract Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-Armed Bandit (MAB)
paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitness improvement brought by the corresponding operator to the individual it is applied to.
However, the AOS problem is dynamic, whereas standard MAB algorithms are known to optimally solve the exploitation versus exploration trade-off in static settings. An original dynamic variant of the
standard MAB Upper Confidence Bound algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. In order to perform sound comparisons between AOS
algorithms, artificial scenarios have been proposed in the literature. They are extended here toward smoother transitions between different reward settings. The resulting original testbed also
includes a real evolutionary algorithm that is applied to the well-known Royal Road problem. It is used here to perform a thorough analysis of the behavior of AOS algorithms, to assess their
sensitivity with respect to their own hyper-parameters, and to propose a sound
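The sliding-window variant of UCB described in the abstract above can be sketched as follows. This is a minimal illustration of the idea (not the authors' code); the window length, exploration constant, and reward scale are arbitrary choices:

```python
import math
from collections import deque

class SlidingWindowUCB:
    """UCB1-style operator selection where both the empirical mean and the
    play counts are computed over the last `window` steps only, so the
    policy can track a changing (dynamic) reward distribution."""

    def __init__(self, n_arms, window=50, c=math.sqrt(2)):
        self.n_arms = n_arms
        self.c = c
        self.history = deque(maxlen=window)  # (arm, reward) pairs

    def select(self):
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, reward in self.history:
            counts[arm] += 1
            sums[arm] += reward
        total = max(1, len(self.history))
        best, best_score = 0, float("-inf")
        for a in range(self.n_arms):
            if counts[a] == 0:
                return a  # play each arm at least once within the window
            score = sums[a] / counts[a] + self.c * math.sqrt(math.log(total) / counts[a])
            if score > best_score:
                best, best_score = a, score
        return best

    def update(self, arm, reward):
        self.history.append((arm, reward))
```

In the Adaptive Operator Selection setting, each arm would be one variation operator and the reward its observed fitness improvement; because old observations fall out of the deque, the exploitation and exploration terms both "forget" stale evidence.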
"... i Preface This is a revised version of the master thesis Algorithm Selection for the Graph Coloring Problem. In the following paragraph, we list the corrections compared to the original version.
Insignificant typos and spelling errors are not marked explicitly. Notation: p. x, t. y means page x, lin ..."
Add to MetaCart
i Preface This is a revised version of the master thesis Algorithm Selection for the Graph Coloring Problem. In the following paragraph, we list the corrections compared to the original version.
Insignificant typos and spelling errors are not marked explicitly. Notation: p. x, t. y means page x, line y from top. Similarly p. x, b. y means page x, line y from bottom. • p. 23, b 8: Changes
citation source to [109]. Note that this changes the enumeration of the remaining references. • p. 39, first subsection: We are using maximal cliques and not maximum cliques as graph feature. iii
Acknowledgements First of all, let me note that I don’t believe that many people will ever read this thesis. From my experience, I know that especially the acknowledgments are one of the first
chapters that everybody skips because of time reasons or just a lack of interest. Nevertheless, I would like to
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=7847436","timestamp":"2014-04-16T22:25:27Z","content_type":null,"content_length":"24307","record_id":"<urn:uuid:b62a44f2-31be-4e7f-9152-3824b90a77e5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Boylston SAT Math Tutor
Find a West Boylston SAT Math Tutor
I am licensed in Moderate Special Needs 5-12, Social Studies 5-12, and Math 5-8. I just passed the MTEL for ELL and am in the process of obtaining licensure. I am currently employed by the
Worcester Public Schools as a special needs teacher.
36 Subjects: including SAT math, English, writing, algebra 1
...The outcome depended in part on the student's very basic math skills and effort. Basically, I am able to predict my teaching outcome through the first conversation with the student. In
February, 2011, I came to America to start my new life - having to attend an American college as a full-time student.
11 Subjects: including SAT math, geometry, accounting, Chinese
...Seasonally I work with students on SAT preparation, which I love and excel at. I have worked successfully with students of all abilities, from Honors to Summer School. I work in Acton and
Concord and surrounding towns, (Stow, Boxborough, Harvard, Sudbury, Maynard, Littleton) and along the Route 2 corridor, including Harvard, Lancaster, Ayer, Leominster, Fitchburg, Gardner.
15 Subjects: including SAT math, calculus, physics, statistics
...I believe anyone who understands concepts and who practices skills can achieve success. My roles as a tutor are to make sure my students understand their subjects, and to encourage the students
to work hard and maintain success-conducive habits such as doing homework. I have tutored students at the high school level in English, mathematics, and physics.
26 Subjects: including SAT math, Spanish, reading, calculus
...People are surprised at how quickly they can learn these subjects once they are given a clear explanation. I have over 20 years of experience tutoring accounting, finance, economics and
statistics. I have a master's degree in accounting, and I currently teach statistics, accounting, and finance at local colleges, where students have given me great evaluations.
14 Subjects: including SAT math, statistics, accounting, algebra 1
|
{"url":"http://www.purplemath.com/west_boylston_sat_math_tutors.php","timestamp":"2014-04-18T23:30:04Z","content_type":null,"content_length":"24279","record_id":"<urn:uuid:78ccdeaf-4b9a-42bc-9701-8843c2f65ab8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Power consumption of a car
I know the power harvested, as I have performed an experiment and done some analysis in MATLAB.
What I want is the rough power consumption of an average car (obviously the consumption is different at different speeds).
It is just for comparison purposes, to represent the % of potential energy savings such systems can lead to.
You would probably not save much if you want to maintain the suspension and stability properties of your car. The most energy you can extract from the suspension is by having no suspension at all.
Of course the ride would be rather unpleasant, but mpg would increase.
Well, the suspension is not the enemy, but the suspension's damping mechanism. You can replace these dampers with coils and magnets, charging a battery and feeding the electricity directly to an electric
How it affects mpg depends on how the wheels bounce up and down. The average power needed to maintain 50 mph is approximately 15 HP for a typical family car - say a Mazda 6, at 35 mpg. You have found
the energy you can harvest from the suspension from the weight of the car and the road conditions.
Your harvested figure can then be subtracted from the power required to maintain a velocity of 50 mph on a straight and level road. That would provide the answer you're looking for.
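The comparison suggested above can be sketched numerically. The 15 HP cruise figure comes from the thread; the harvested power below is a placeholder — substitute your own measured value from the MATLAB analysis:

```python
# Rough comparison of harvested suspension power to the power needed
# to cruise at 50 mph on a straight and level road.
HP_TO_W = 745.7  # 1 mechanical horsepower in watts

cruise_power_w = 15 * HP_TO_W          # ~11.2 kW to hold 50 mph
harvested_power_w = 100.0              # hypothetical: 100 W from the dampers

savings_pct = 100.0 * harvested_power_w / cruise_power_w
print(f"Cruise power: {cruise_power_w:.0f} W")
print(f"Potential saving: {savings_pct:.2f} %")
```

Even a fairly optimistic harvest figure ends up being a small fraction of the cruise power, which is the kind of % comparison the original question was after.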
|
{"url":"http://www.physicsforums.com/showthread.php?t=681750","timestamp":"2014-04-16T18:58:22Z","content_type":null,"content_length":"37710","record_id":"<urn:uuid:6aba6a73-8db2-4124-b806-2051da5daf32>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Oaklyn Math Tutor
Find an Oaklyn Math Tutor
Hello! I am in my 7th year as a local high school physics teacher. I graduated from the University of Maryland in 2007 with a degree in physics and I have been teaching ever since.
4 Subjects: including algebra 1, algebra 2, geometry, physics
...While tutoring French, I focus on drawing parallels between French and other languages, particularly English, to enhance the retention of meaning. I primarily focus increasing ability to
communicate and confidence in one's abilities to do so. I have taken Physics in both high school and college, primarily focused around Mechanics and Acoustics.
33 Subjects: including precalculus, philosophy, Adobe InDesign, art history
...So I try to break down each concept, mechanism and problem down to its bare parts and build an understanding so that each concept, mechanism and problem can be solved logically on their own
without memorization. I am a graduate student getting my Ph.D. in organic chemistry. I have taught both organic chemistry 1 and 2 recitation and lab.
6 Subjects: including algebra 2, algebra 1, prealgebra, chemistry
...I desire to tutor people who have the desire to do well on the SAT and do not feel like dealing with the snobbery that may occur with a professional tutor. I tutored many of my friends on the
SAT and saw improvements in their score. I believe with help from me your SAT score could improve dramatically.
16 Subjects: including algebra 2, chemistry, European history, geometry
...Please do not hesitate to contact me with any specific questions or inquiries. Best, -TomI have earned an undergraduate degree in Marketing along with an MBA with a focus in research
methodologies from Bloomsburg University, an AACSB accredited college. I have over 6 years' sales and marketing experience and over 4 years' experience in various business analytics roles.
20 Subjects: including algebra 2, elementary (k-6th), reading, study skills
|
{"url":"http://www.purplemath.com/oaklyn_nj_math_tutors.php","timestamp":"2014-04-20T15:52:53Z","content_type":null,"content_length":"23763","record_id":"<urn:uuid:c6b743fd-c704-45bc-a935-15886eb511e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
skoool.ie :: exam centre
Topic Overview
Question 3 on Paper 1 covers these two topics, which are apparently unconnected. Not that it concerns us here, but the topics become interwoven in university-level maths. Anyway, this question is one
of the more popular ones on the first paper, and has often been the easiest question on the paper. There have been exceptions; in 1998, Question 3(c) was particularly difficult.
Complex numbers deal with the so-called 'imaginary unit', i, which stands for the square root of -1. It might at first appear as if this quantity has nothing to do with the world we live in, not
existing in the way the number 3 'exists'. However, complex numbers have many applications in engineering, physics and science in general. On our course we discuss the properties of complex numbers,
see how they help us to solve equations and investigate how to use complex numbers written in polar form.
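As a quick illustration of the polar form (using Python's built-in complex type; the numbers are just an example, not from the syllabus):

```python
import cmath

z = 3 + 4j                     # Cartesian form a + bi
r, theta = cmath.polar(z)      # modulus r and argument theta
print(r, theta)                # r = 5.0

# Back to Cartesian form: r(cos theta + i sin theta)
w = cmath.rect(r, theta)
print(w)                       # ~ (3+4j)
```

The modulus-argument pair is exactly the polar form r(cos θ + i sin θ) discussed on the course.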
Matrices were originally introduced to simplify the maths involved in transformation geometry, although in Question 3 on Paper 1 we do not see them being used for this purpose. We concentrate on the
definitions associated with matrices and how we can perform the basic operations of addition, multiplication, etc. One interesting feature of matrices is the absence of division, and the use of the
inverse of a matrix to overcome this problem.
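To illustrate that last point (a sketch for interest, not part of the syllabus material): since matrices cannot be divided, an equation AX = B is solved by multiplying by the inverse, X = A⁻¹B. For a 2 × 2 matrix the inverse is the adjugate divided by the determinant; the helper names below are made up for the example:

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]], assuming det != 0."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(m, n):
    """Product of two 2x2 matrices."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

A = [[2, 1], [5, 3]]           # det = 2*3 - 1*5 = 1
A_inv = inverse_2x2(2, 1, 5, 3)
print(matmul_2x2(A_inv, A))    # identity matrix
```

Multiplying A⁻¹A and getting the identity is the matrix analogue of dividing a number by itself.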
Topic Structure: Complex Numbers
The study of Senior Cycle Complex Numbers can be divided into the following sections:
1. Definitions and Basic Operations
2. Complex Equations
3. Polar Form of a Complex Number
Topic Structure: Matrices
The study of Senior Cycle Matrices can be divided into the following sections:
1. Properties of Matrices
2. Inverse Matrices and Matrix Equations
This is quite a high-powered site, going very far into the theory of complex numbers. But the first section will be of interest to Leaving Cert students.
The S.O.S. site on matrices is again aimed at students starting university, and so many of the questions refer to matrices of higher dimension than the 2 x 2 that we are used to.
This section covers a wide area of problems about complex numbers, with many well-worked examples provided at three different levels, along with good practice material.
More from the PING site, covering a comprehensive introduction to Matrices.
The complex numbers page from the 'Ask Dr Math' site contains previously asked questions and their answers, many of which are relevant to our course.
|
{"url":"http://www.skoool.ie/skoool/examcentre_sc.asp?id=711","timestamp":"2014-04-21T02:41:03Z","content_type":null,"content_length":"21290","record_id":"<urn:uuid:a122b753-a31f-4e78-b0bd-dd4d3334c9d3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3D simulation of morphological effect on reflectance of Si3N4 sub-wavelength structures for silicon solar cells
In this study, we investigate the reflectance property of cylinder-, right circular cone-, and square pyramid-shaped silicon nitride (Si[3]N[4]) subwavelength structures (SWS) with respect to
different design parameters. In terms of three critical factors, the reflectance for the physical characteristics of wavelength dependence, the reflected power density for the real power reflection
applied on solar cell, and the normalized reflectance (reflected power density/incident power density) for real reflectance applied on solar cell, a full three-dimensional finite element simulation
is performed and discussed for the aforementioned three morphologies. The result of this study shows that the pyramid shape of SWS possesses the best reflectance property in the optical region from
400 to 1000 nm which is useful for silicon solar cell applications.
1. Introduction
Silicon solar cell is one of the promising renewable energy technologies in order to relieve the impact of the climate change. In semiconductor-based solar cells, electron-hole pairs are generated
through absorption of impinging photons. Due to high refraction index of semiconductor materials, especially silicon, the incident sunlight power is largely reflected back, resulting in the reduction
of light absorption and poor energy conversion efficiency. Antireflection coating (ARC) is mounted over absorption layers, resulting in three effects: (a) reduction in surface reflection, (b)
increase in light absorption due to an increase in optical path length by diffraction, and (c) enhancement of internal reflection that reduces the amount of escaping light. Based on the theory of
impedance matching, single-layer (SLAR) and multilayer ARCs have been proposed for a reduced reflectance; however, the resulting reflectance spectra meet the demand only within a narrow spectral
domain. Subwavelength structures (SWS) have dimensions much smaller than the wavelengths of light; using them as an ARC on the surface of silicon solar cells can therefore substantially reduce the
reflectivity and improve the capability of light trapping, and enhanced efficiency is thus achieved according to both our recent numerical and experimental studies [1-3]. Compared with a silicon
solar cell with a SLAR, the efficiency of a silicon solar cell with Si[3]N[4] SWS is promising among various ARC layers in our recent work [4]. A rigorous coupled-wave analysis (RCWA) [1,5-7] has been reported to
estimate the reflectance of Si[3]N[4 ]SWS by approximating structural shapes with partitioned uniform homogeneous layers. RCWA is an exact solution of Maxwell's equations for the electromagnetic
diffraction by grating structures, which is generally applicable to a 2D plane with 1D periodicity; however, RCWA may suffer numerical difficulties in the presence of evanescent orders, and it
requires a large amount of calculation to retain several diffraction orders. These factors limit the flexible application of RCWA, in particular for 3D problems with non-azimuthally symmetric
structural shapes. Numerical simulation of the 3D morphological effect on the reflectance property has not been studied yet. Therefore, a full 3D finite-element (FE) analysis of Si[3]N[4] SWS is an
interesting examination for a quantitative understanding of the reflectance property.
In this study, 3D FE simulation for the reflectance of Si[3]N[4 ]SWS with three types of structural shapes, the cylinder, the right circular cone, and the square pyramid shapes, is conducted with
respect to different geometry parameters and illumination angles, for a quantitative understanding of the reflectance property. First, a proper selection of the boundary conditions alleviates the
computational load of simulating the whole ARC. The reflectance of Si[3]N[4] SWS on the silicon substrate is thus simulated using the 3D finite element method (FEM); consequently, in terms of three critical
factors, the reflectance for physical characteristics of wavelength dependence, the reflected power density for real power reflection applied on solar cell, and the normalized reflectance (reflected
power density/incident power density) for real reflectance applied on solar cell are calculated and discussed for the aforementioned three morphologies. The analysis of reflectance spectrum with
wide-angle incidences of electromagnetic wave and the average reflectance with various heights are presented. Besides, according to our recent study, which presented the optimal design parameters of
Si[3]N[4 ]SWS based on RCWA [4], numerical verification and comparison is accomplished following the discussion. The engineering findings of this study show that the pyramid shape of SWS possesses
the best reflectance property in the optical region from 400 to 1000 nm which is useful for silicon solar cell applications.
The rest of the article is organized as follows. In Section 2, we show the computational structure and model. In Section 3, we report the results and discussion. Finally, we draw conclusions and
suggest future work.
2. The SWS and optical model
Based upon our experimental characterization, Figure 1a illustrates a periodical structure of Si[3]N[4 ]SWS which is used in our 3D FE simulation without loss of generality. We study Si[3]N[4 ]SWS
with the cylinder, the right circular cone, and the square pyramid shapes, as shown in Figure 1b-d, respectively. With a constant volume, the diameter of cylinder- and right circular cone-shaped Si
[3]N[4 ]SWS and the edge length of square pyramid are 130 nm, the heights (h) of the etched part of Si[3]N[4 ]SWS are 200, 600, and 471.3 nm, the height (s) of the non-etched part is 70 nm, and the
base (W) of a unit cell is 200 nm [4]. The thickness of the Si substrate is given as 600 nm; all structural parameters are adopted from our experimental studies [2-4,8]. Throughout the
article, we consider time-harmonic fields assuming a time dependence e^-jωt. The diffraction problem is governed by the well-known Maxwell equations
Figure 1. (a) Plot of the periodic structure of Si[3]N[4 ]SWS with 1 × 1 and 2 × 2 arrays as unit cell. 3D schematic plots of the examined (b) cylinder-, (c) right circular cone-, and (d) square
pyramid-shaped structure, respectively.
$$\nabla\times\mathbf{E} = j\omega\mathbf{B}, \qquad \nabla\times\mathbf{H} = \mathbf{J} - j\omega\mathbf{D}, \qquad \nabla\cdot\mathbf{D} = \rho, \qquad \nabla\cdot\mathbf{B} = 0,$$

where E and D are the electric field intensity and flux density, H and B are the magnetic field intensity and flux density, ω is the angular frequency corresponding to the wavelength λ, J and ρ are
the current density and charge density, ε is the electric permittivity, and μ is the magnetic permeability. A repeated pattern allows the use of periodic boundary conditions; thus the Floquet theorem is adopted to simulate the
boundary condition of periodic structure. Floquet theorem asserts that the analysis region can be reduced significantly in one periodicity cell to characterize the propagation property. The electric
fields in a periodic structure are related as follows:

$$\mathbf{E}(\mathbf{r}+\mathbf{L}) = \mathbf{E}(\mathbf{r})\,e^{j\theta},$$

where r is the position vector, L is the distance between the periodic boundaries, and θ is a phase factor determined by the wave vector k and L:

$$\theta = \mathbf{k}\cdot\mathbf{L}.$$
The polarization of transverse electric (TE) mode, in which the electric field is normal to the direction of wave propagation, is excited as the normal incident light source with wavelengths sweeping
from 400 to 1000 nm. The bottom region of the Si substrate is assigned as a perfectly matched layer to avoid reflected waves. The refraction index of Si[3]N[4] is 2.05, and the refraction index of Si
is frequency dependent with the relation [1]:

$$n_{\mathrm{Si}}^2(\lambda) = \varepsilon + \frac{A}{\lambda^2} + \frac{B\,\lambda_1^2}{\lambda^2 - \lambda_1^2},$$

where λ is the incident wavelength, A = 0.939816, B = 8.10461 × 10^-3, λ_1 = 1.1071 μm, and ε = 11.6858. The calculation settings of the reflectance were reported and can be found in our recent studies
3. Results and discussion
In order to examine the effect of Floquet boundary condition in 3D FE analysis, as shown in Figure 2, we compare the difference between the simulated unit cells of 1 × 1 and 2 × 2 array of Si[3]N[4 ]
SWS. We find that at wavelengths above 600 nm the reflectance of the 1 × 1 array of Si[3]N[4] SWS as a unit cell is almost identical to that of the 2 × 2 array, while only an insignificant
discrepancy occurs at wavelengths shorter than 600 nm. This enables us to simulate more efficiently, using a 1 × 1 array of Si[3]N[4] SWS as the unit cell, with engineering-acceptable accuracy.
According to our recent RCWA work [4], the reflectance spectra are first plotted in Figure 3 using the optimal design parameters [1,4]. Also, the spectra
calculated by a full 3D FE analysis with the same design parameters are indicated by dashed lines. For the cylinder-shaped Si[3]N[4 ]SWS, the reflectance spectra for RCWA and FE analysis are similar,
but they do not agree for the cone-shaped Si[3]N[4] SWS, due to evanescent orders existing along the top of the structures. This comparison confirms the importance of 3D FEM simulation, which goes
beyond the RCWA approach [4].
Figure 2. Plot of the difference of reflectance spectrum of Si[3]N[4 ]SWS with the cylinder-, the right circular cone-, and the square pyramid-shaped structures as well as two different periodical
configurations: 1 × 1 (solid line) and 2 × 2 arrays (dashed line) in the 3D FEM simulation.
Figure 3. Comparison of the reflectance spectra for the cone- and cylinder-shaped Si[3]N[4 ]SWS calculated by RCWA and 3D FE analysis with the same design parameters.
Figure 4a-c shows the reflectance spectra with incident angles of 0°, 15°, 30°, 45°, and 60° for the cylinder-, right circular cone-, and square pyramid-shaped Si[3]N[4 ]SWS, respectively. For the
normal incidence case, the lowest average reflectance among three structural shapes is 3.47% of square pyramid-shaped structure. The others are 6.86 and 4.42% for the cylinder- and the right circular
cone-shaped Si[3]N[4 ]SWS, respectively. Meanwhile, as shown in Figure 4a-c, one can observe that the reflectance increases significantly with larger incident angles, resulting in average reflectance
beyond 50%. Table 1 summarizes the average reflectance for various incident angles. Height effect on average reflectance of Si[3]N[4 ]SWS at normal incident angle with d = 130 nm and s = 70 nm is
also calculated, as shown in Figure 5. The resulting average reflectance of pyramid-shaped Si[3]N[4 ]SWS nearly keeps lowest in comparison with the cylinder- and the right circular cone-shaped Si[3]N
[4 ]SWS as the structural height is ranging from 50 to 500 nm. Figure 6 shows the reflectance dependence on the structural height and wavelength. The pyramid-shaped Si[3]N[4 ]SWS has lower
reflectance and less sensitivity on structure height in comparison with the cylinder-shaped Si[3]N[4 ]SWS. Hence, the impact of process variation of structure height on solar cell performance is
smaller for pyramid-shaped Si[3]N[4 ]SWS. Based on solar spectrum at the sea level revealed in American Society for Testing and Materials (ASTM) Standard Tables for Reference Solar Spectral
Irradiances: Direct Normal and Hemispherical 37 Tilted Surface [9], we further estimate the reflected power density (W/m^2/nm) defined by reflectance times incident power density, as shown in Figure
7. The higher reflected power density of the cylinder-shaped Si[3]N[4] SWS (red line) indicates lower efficiency in the solar cell application. Therefore, the normalized reflectance, defined as the ratio of reflected power density to incident power density,
Figure 4. Plots of the reflectance spectrum for the (a) cylinder- (b) circular-cone-, (c) and square-pyramid-shaped Si[3]N[4 ]SWS with incident angles of 0°, 15°, 30°, 45°, and 60°.
Table 1. Summary of the average reflectance of Si[3]N[4 ]SWS with various incident angles
Figure 5. Plot of the average reflectance among the studied three shapes of Si[3]N[4 ]SWS with heights varying from 50 to 500 nm.
Figure 6. 3D view for the height effect on the reflectance with respect to different wavelength. (a) The pyramid-shaped Si[3]N[4 ]SWS has lower reflectance and less sensitivity on structure height in
comparison with (b) the cylinder-shaped Si[3]N[4 ]SWS.
Figure 7. Plot of the reflected power density among three different shapes.
reveals the real power efficiency applied in the solar cell application. Figure 8 shows the normalized reflectance for the cylinder-, right circular cone-, and square pyramid-shaped Si[3]N[4 ]SWS,
respectively. The square pyramid-shaped Si[3]N[4] SWS again shows the lowest normalized reflectance, 3.13%, while the cylinder- and right circular cone-shaped SWSs have 6.66 and 4.12%, respectively.
Figure 8. Plot of reflectance with and without considering incident solar spectrum at sea level.
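The normalized-reflectance figure of merit used above can be reproduced numerically. The sketch below uses made-up sample spectra (not the paper's data or the ASTM tables) and weights a reflectance spectrum R(λ) by an incident power density I(λ) with the trapezoidal rule:

```python
def normalized_reflectance(wavelengths_nm, reflectance, incident_power):
    """Solar-weighted reflectance: integral(R * I) / integral(I),
    both integrals taken over wavelength by the trapezoidal rule."""
    def trapz(ys, xs):
        return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
                   for i in range(len(xs) - 1))
    reflected = [r * p for r, p in zip(reflectance, incident_power)]
    return trapz(reflected, wavelengths_nm) / trapz(incident_power, wavelengths_nm)

# Made-up example: 10% flat reflectance over a toy spectrum
wl = [400.0, 600.0, 800.0, 1000.0]           # nm
power = [1.2, 1.5, 1.1, 0.7]                 # W/m^2/nm (illustrative only)
R = [0.10, 0.10, 0.10, 0.10]
print(normalized_reflectance(wl, R, power))  # ~0.10 for a flat reflectance
```

With a real R(λ) from simulation and the ASTM G173 irradiance as I(λ), this is the quantity reported as 3.13%, 4.12%, and 6.66% for the three shapes.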
4. Conclusions
In this study, the reflective property of a unit cell with a validated Floquet boundary condition has been calculated using a full 3D FE simulation. Considering various incidence angles and the
height effect on three experimentally observed structural shapes of Si[3]N[4] SWS, we conclude that the pyramid-shaped Si[3]N[4] SWS has the best reflective property in this analysis of the
morphological effect. The reflective property calculated by the full 3D FEM deviates significantly from the RCWA results, indicating that a detailed and comprehensive methodology is indispensable
for the design of Si[3]N[4] SWS. The computed reflectance, reflected power density, and normalized reflectance show that the pyramid-shaped SWS has the best reflectance property in the optical
region from 400 to 1000 nm, which is useful for silicon solar cell applications. The optimized pyramid-shaped Si[3]N[4] SWS is planned for implementation with silicon solar cells.
3D: three-dimensional; ARC: antireflection coating; FEM: finite element method; RCWA: rigorous coupled-wave analysis; Si[3]N[4]: silicon nitride; SLAR: single-layer antireflection; SWS: subwavelength structure.
Authors' contributions
M-YL, H-WC, and Z-LL performed the numerical simulation and data analysis, YL conducted whole study including manuscript preparation. All the authors read and approved the final manuscript.
This study was supported in part by the Taiwan National Science Council under contract Nos. NSC-99-2221-E-009-175 and NSC-100-2221-E-009-018.
1. Sahoo KC, Li Y, Chang EY: Numerical calculation of reflectance of sub-wavelength structures on silicon nitride for solar cell application.
Comput Phys Commun 2009, 180:1721-1729.
2. Sahoo KC, Lin MK, Chang EY, Tinh TB, Li Y, Huang JH: Silicon nitride nanopillars and nanocones formed by nickel nanoclusters and inductively coupled plasma etching for solar cell application.
Jpn J Appl Phys 2009, 48:126508.
3. Sahoo KC, Chang EY, Lin MK, Li Y, Huang JH: Fabrication and configuration development of silicon nitride sub-wavelength structures for solar cell application.
J Nanosci Nanotechnol 2010, 10:5692-5699.
4. Sahoo KC, Li Y, Chang EY: Shape effect of silicon nitride sub-wavelength structure on reflectance for solar cell application.
5. Moharam MG, Gaylord TK: Rigorous coupled-wave analysis of planar-grating diffraction.
J Opt Soc Am 1981, 71:811-818.
6. Moharam MG, Gaylord TK: Rigorous coupled-wave analysis of metallic surface-relief gratings.
J Opt Soc Am A 1986, 3:1780-1787.
7. Moharam MG, Pommet DA: Formulation for stable and efficient implementation of the rigorous coupled-wave analysis of binary gratings.
J Opt Soc Am A 1995, 12:1077-1086.
8. Sahoo KC, Lin MK, Chang EY, Lu YY, Chen CC, Huang JH, Change CW: Fabrication of antireflective sub-wavelength structures on silicon nitride using nano cluster mask for solar cell application.
Nanoscale Res Lett 2009, 4:680-683.
9. ASTM Standard G173-03: Standard Tables for Reference Solar Spectral Irradiances: Direct Normal and Hemispherical on 37° Tilted Surface. [http://www.astm.org/Standards/G173.htm]
|
{"url":"http://www.nanoscalereslett.com/content/7/1/196?fmt_view=classic","timestamp":"2014-04-18T21:57:15Z","content_type":null,"content_length":"89108","record_id":"<urn:uuid:75e3a1ed-a446-42c4-919e-65e4ae69661b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euler Characteristic of General Linear Group
(Edited) How can I find the Euler-Poincaré index with compact support of the general linear group over $\mathbb{R}$? For example, let $A$ be a locally closed subset of a manifold $X$; then $\chi_c(A)=\chi(R\Gamma_c(A;\mathbb{Q}))$,
which, in the smooth case, is the same as the alternating sum of the Betti numbers of the de Rham cohomologies with compact support. Thank you.
What do you mean by the Euler characteristic with compact support? Do you mean the Euler-Poincaré characteristic of the compactly supported de Rham complex? – José Figueroa-O'Farrill Aug 5 '10 at
I'm guessing that's what is meant. In which case, use Poincaré duality to rewrite it as the ordinary Euler characteristic up to sign. This can be computed in the usual fashion... At this point it
might help to tell us what part of the story is familiar to you (I'm addressing Karl). – Donu Arapura Aug 5 '10 at 16:28
To expand on what Donu said: please provide some motivation (why do you want to know?) and background (what do you already know? what have you already tried?). – Andrew Stacey Aug 5 '10 at 17:16
2 Answers
I'm going to assume that "Euler characteristic with compact support" means
"(Euler characteristic of the one point compactification) - 1".
Let me assume that n>1.
The space in question, namely $GL(n,\mathbb R)_+$, has a circle action given by any $S^1$ subgroup of $GL(n,\mathbb R)$. This action is free on $GL(n,\mathbb R)$, and fixes the point at infinity. $S^1$-orbits contribute zero to the Euler characteristic, and the point at infinity contributes 1. So $\chi(GL(n,\mathbb R)_+) = 1$, and the Euler characteristic with compact support is zero.
To make the above argument precise, you need to pick a cell decomposition of $(GL(n,\mathbb R)/S^1)_+$, and use it to construct a cell decomposition of $GL(n,\mathbb R)$. Above every n-cell of the quotient space, you put a pair of cells of $GL(n,\mathbb R)_+$, one of dimension n and one of dimension n+1 (except for the 0-cell corresponding to the point at infinity). This might fail to be a CW-complex, but you can nevertheless compute the Euler characteristic as the alternating sum of the numbers of cells in given dimensions.
For a complete answer, you should mention that $GL(0,\mathbb R)$ consists of a single point, or is empty, depending on the convention, and that $\chi(GL(1,\mathbb R)) = -2$. Note that the
Euler characteristic you are using is the correct one --- it's additive on disjoint unions --- but is not a homotopy invariant. – Theo Johnson-Freyd Aug 6 '10 at 21:24
Thank you all. My idea was to use Poincaré duality for $n>1$. Then using a homotopy equivalence of $GL(n)$ and $SL(n)$. Now, since the Euler characteristic of a compact Lie group $SL(n)$ for $n>1$ is zero, we will have $\chi_c(GL(n))=0$, which coincides with the above answers. – Karl Aug 7 '10 at 10:33
Karl: I realize now that you had thought it through and just wanted confirmation. Sorry if my comment seemed a little blunt. I also got zero using the same process. I guess you meant to
write $SO(n)$ rather than $SL(n)$. – Donu Arapura Aug 7 '10 at 12:53
add comment
The group $GL(n,\mathbb{R})$ is homotopy equivalent to $O(n)$, so these two spaces have the same Euler characteristic. For $n\geq 2$, $O(n)$ is a compact smooth manifold of positive dimension with trivial tangent bundle. Hence its Euler class is trivial, and so is its Euler characteristic.
Is it obvious that the defining conditions to obtain a particular singularity are well defined on the quotient space?
Let $f:\mathbb{C}^2 \rightarrow \mathbb{C}$ be a holomorphic function vanishing at the origin, with the following properties: $$ f_{00}, ~f_{10}, ~f_{01}, ~f_{20}, ~f_{11} = 0, \qquad f_{02} \neq 0, \qquad \text{where} \qquad f_{ij} := \frac{\partial^{i+j} f(x,y)}{\partial x^i \, \partial y^j}\Big|_{(0,0)}.$$ What this means is that at the origin the derivative of $f$ vanishes, the tangent vector $\partial_x$ is in the kernel of the Hessian of $f$, and the Hessian is not identically zero.
Hence, we can write the Taylor expansion of $f$ as $$ f = A_0(x) + A_1(x) y + A_2(x) y^2 + \ldots, \qquad \text{where} \qquad A_2(0) \neq 0.$$ Now it is easy to see (using the fact that $A_2(0) \neq 0$ and the implicit function theorem) that one can make a change of coordinates $y = \tilde{y} + B(x)$, so that $f$ becomes $$f = \hat{A}_0(x) + \hat{A}_2(x) \tilde{y}^2 + \ldots, \qquad \text{i.e.} \qquad \hat{A}_1(x) = 0.$$ This is an explicit procedure, i.e. $B(x)$ and $\hat{A}_0(x)$ can be computed as power series in $x$, and the coefficients of $x^n$ will just be some functions of the original $f_{ij}$. Suppose $$ \hat{A}_0(x) = C_k x^k + C_{k+1} x^{k+1} + \ldots $$
My question: is the quantity $C_k$ invariant under $y \rightarrow y+x$?
Note that under the transformation $y \rightarrow y+x$, the $f_{ij}$ are all going to change. But the precise combination that makes up $C_k$, is going to be unchanged. That is the claim.
One can explicitly check this claim for each $k$. I believe there should be some obvious reason why this is true from the way those $C_k$ were obtained. But I don't see it immediately.
I intuitively expect this to be true because of the following reason: The direction $\partial_x$ is a special direction, ie the hessian vanishes along $\partial_x$. But, there is nothing really
special about the direction $\partial_y$. It belongs to the quotient space $ T\mathbb{C}^2/ < \partial_x>$ (and not the orthogonal complement with respect to some metric).
$\textbf{Not sure if relevant, but might help with the understanding:}$
The significance of these $C_k$ is that they give us a necessary and sufficient criterion for the curve $f=0$ to have a singularity of type $A_{k-1}$ at the origin (i.e. it can be written as $v^2 + u^{k} = 0$ after a change of coordinates). If $C_i = 0$ for all $i \leq k-1$ and $C_{k} \neq 0$, then the curve has an $A_{k-1}$ singularity at the origin, as can easily be seen by a change of coordinates.
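As a numeric sanity check (not a proof, and restricted to real arguments), note that killing $\hat{A}_1$ by $y \to \tilde{y} + B(x)$ amounts to evaluating $f$ at its fiberwise critical point in $y$, i.e. $\hat{A}_0(x) = f(x, B(x))$ with $f_y(x, B(x)) = 0$. The sketch below, in Python with the hypothetical test germ $f = y^2 + x^3$ (an $A_2$ singularity, so $k = 3$ and $C_3 = 1$), compares $C_3$ before and after $y \to y + x$:

```python
# Numeric sanity check (real arguments only) for the hypothetical germ
# f = y^2 + x^3, an A_2 singularity, so k = 3 and C_3 = 1.

def fiber_critical_value(F, x, h=1e-5):
    # Killing A-hat_1 by y -> y~ + B(x) means evaluating F at its fiberwise
    # critical point in y; find that point by Newton iteration with
    # numeric first and second derivatives in y.
    y = 0.0
    for _ in range(60):
        d1 = (F(x, y + h) - F(x, y - h)) / (2 * h)
        d2 = (F(x, y + h) - 2 * F(x, y) + F(x, y - h)) / h ** 2
        y -= d1 / d2
    return F(x, y)

def f(x, y):
    return y ** 2 + x ** 3

def g(x, y):
    # f after the substitution y -> y + x
    return f(x, y + x)

x0 = 0.05
C3_before = fiber_critical_value(f, x0) / x0 ** 3
C3_after = fiber_critical_value(g, x0) / x0 ** 3
print(C3_before, C3_after)   # both close to 1, as the invariance claim predicts
```

This only tests one germ and one value of $k$, but the same scan works for any concrete example.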
integration problem
March 3rd 2007, 05:49 PM
Hi, I just did out this problem and I was wondering if someone could tell me if I did it correctly.
I had to take the integral of the cube root of 5x.
This is what I did:
The integral of the cube root of 5x is the same as (5x)^(1/3).
Therefore, I set u = 5x
This made dx = 1/5 du
Then I got the integral of u^(1/3) * 1/5
Since 1/5 is a constant, this led to:
1/5 * the integral of u^(1/3).
The integral of u^(1/3) is 3/4 u^(4/3).
Therefore, when you multiply this by 1/5, you get 3u^(4/3) divided by 20.
Replacing u with 5x, I got 3(5x)^(4/3) divided by 20 + C for a final answer.
Does this look right?
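One quick way to check is to differentiate the proposed antiderivative numerically and compare with the integrand; a short Python sketch (not part of the original thread):

```python
import math

def f(x):
    # Integrand: cube root of 5x
    return (5 * x) ** (1 / 3)

def F(x):
    # Proposed antiderivative: 3(5x)^(4/3) / 20  (the constant C is omitted)
    return 3 * (5 * x) ** (4 / 3) / 20

# F'(x) should match f(x); check with a central difference at a few points
h = 1e-6
for x in (0.5, 1.0, 2.0, 7.3):
    approx = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(approx - f(x)) < 1e-6
print("antiderivative checks out")
```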
March 3rd 2007, 05:56 PM
That's correct, but why didn't you check it by differentiating your answer?
[Numpy-discussion] C extension compiling question
Robert Kern robert.kern@gmail....
Fri Oct 29 09:28:45 CDT 2010
On Fri, Oct 29, 2010 at 08:45, Henry Gomersall <whg21@cam.ac.uk> wrote:
> On Fri, 2010-10-29 at 15:33 +0200, Jon Wright wrote:
>> You need to call import_array() in initspam. See:
>> http://docs.scipy.org/doc/numpy-1.5.x/user/c-info.how-to-extend.html
> Thanks, that solves it.
> It would be really useful to have a complete example somewhere. As in, a
> set of files for a minimum example that builds and runs.
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
regular logic
computational trinitarianism = propositions as types + programs as proofs + relation type theory/category theory
| logic | category theory | type theory |
| --- | --- | --- |
| true | terminal object/(-2)-truncated object | h-level 0-type/unit type |
| false | initial object | empty type |
| proposition | (-1)-truncated object | h-proposition, mere proposition |
| proof | generalized element | program |
| cut rule | composition | substitution |
| cut elimination for implication | counit for hom-tensor adjunction | beta reduction |
| introduction rule for implication | unit for hom-tensor adjunction | eta conversion |
| conjunction | product | product type |
| disjunction | coproduct ((-1)-truncation of) | sum type (bracket type of) |
| implication | internal hom | function type |
| negation | internal hom into initial object | function type into empty type |
| universal quantification | dependent product | dependent product type |
| existential quantification | dependent sum ((-1)-truncation of) | dependent sum type (bracket type of) |
| equivalence | path space object | identity type |
| equivalence class | quotient | quotient type |
| induction | colimit | inductive type, W-type, M-type |
| higher induction | higher colimit | higher inductive type |
| completely presented set | discrete object/0-truncated object | h-level 2-type/preset/h-set |
| set | internal 0-groupoid | Bishop set/setoid |
| universe | object classifier | type of types |
| modality | closure operator, monad | modal type theory, monad (in computer science) |
| linear logic | (symmetric, closed) monoidal category | linear type theory/quantum computation |
| proof net | string diagram | quantum circuit |
| (absence of) contraction rule | (absence of) diagonal | no-cloning theorem |
Regular logic is the internal logic of regular categories. Its logical operations consist only of truth, conjunction, and existential quantification, which makes it a superset of finite-limit logic
and a subset of coherent logic and geometric logic.
Math Forum Discussions
Topic: Curvature on a curve and circle
Replies: 6 Last Post: Mar 11, 2013 3:54 PM
Curvature on a curve and circle
Posted: Mar 3, 2013 10:16 PM
The "well behaved, smooth" function f(x) has endpoints f(0) = f(h) = 0. The
curve of the function has length s1.
An arc of a circle passing through (0, 0) and (h, 0) has fixed curvature k, and its arc length is also s1.
It is required to show that a point must exist on f(x) where curvature is
also k.
I have set up a CAS program to simulate the situation and the proposition
held up in every case.
I have been working with the idea that, at the required point, the normal to
f(x) is normal to the circle.
I am not making much headway. Any ideas appreciated.
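Here is a sketch of the kind of CAS simulation described, in Python, with the hypothetical test function f(x) = sin(pi x) on [0, 1] (so f(0) = f(1) = 0): compute the curve's length s1, find the circle through (0, 0) and (h, 0) whose arc length is also s1, and scan f for a point where the curvature magnitudes agree.

```python
import math

h = 1.0                                   # endpoints (0, 0) and (h, 0)
f   = lambda x: math.sin(math.pi * x)     # hypothetical test function
fp  = lambda x: math.pi * math.cos(math.pi * x)
fpp = lambda x: -math.pi ** 2 * math.sin(math.pi * x)

def curvature(x):
    return abs(fpp(x)) / (1 + fp(x) ** 2) ** 1.5

# Length s1 of the curve, by the midpoint rule
n = 20000
s1 = sum(math.sqrt(1 + fp((i + 0.5) * h / n) ** 2) for i in range(n)) * h / n

# Circle through (0,0) and (h,0) with arc length s1:
# chord h = 2R sin(u), arc s1 = 2Ru  =>  sin(u)/u = h/s1, solve by bisection
lo, hi = 1e-9, math.pi - 1e-9             # sin(u)/u is decreasing on (0, pi)
for _ in range(200):
    u = 0.5 * (lo + hi)
    if math.sin(u) / u > h / s1:
        lo = u
    else:
        hi = u
R = h / (2 * math.sin(lo))                # radius from the chord relation
k = 1 / R                                 # the circle's fixed curvature

# Scan for a sign change of curvature(x) - k along the curve
xs = [i * h / n for i in range(n + 1)]
found = any((curvature(a) - k) * (curvature(b) - k) < 0 for a, b in zip(xs, xs[1:]))
print(f"k = {k:.4f}; point on f with the same curvature found: {found}")
```

This is a numerical experiment supporting the proposition, of course, not a proof.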
Date Subject Author
3/3/13 Curvature on a curve and circle Brad Cooper
3/4/13 Re: Curvature on a curve and circle James Waldby
3/5/13 Re: Curvature on a curve and circle Brad Cooper
3/4/13 Re: Curvature on a curve and circle William Elliot
3/4/13 Re: Curvature on a curve and circle Brian Q. Hutchings
3/8/13 Re: Curvature on a curve and circle Brian Q. Hutchings
3/11/13 Re: Curvature on a curve and circle Narasimham
Alternate Formula for Matrix Exponential
November 9th 2009, 12:56 PM #1
Nov 2009
Trying to solve a proof for Linear Algebra:
for a in the set of complex numbers, e^a = lim k-> ∞ (1 + a/k)^k
Establish the matrix analog for any A in the set of real nxn square matrices
e^A = lim k->∞ (I + A/k)^k
I started off by trying the following:
A = P D P^-1
A^k = P D^k P^-1
e^(At) = P e^(Dt) P^-1
not sure where to go from here or if this is the right thought process, I would appreciate any help I can get!
Here is a link which better describes what I am trying to prove:
The Matrix Exponential as a Limit of Powers
if anyone could help me get further I would really appreciate it!
You appear to be assuming here that A is "diagonalizable" which you are not given.
Shoot, forgot to write that down. Sorry, the problem does hint to solve by diagonalizing, and the matrix can be assumed to be diagonalizable.
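As a numerical sanity check of the limit itself (separate from the diagonalization argument), here is a pure-Python sketch using the 2x2 rotation generator A = [[0, 1], [-1, 0]], for which e^A is rotation by 1 radian; with k = 2^20, (I + A/k)^k is just 20 repeated squarings:

```python
import math

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

k = 2 ** 20
A = [[0.0, 1.0], [-1.0, 0.0]]             # rotation generator
B = [[(1.0 if i == j else 0.0) + A[i][j] / k
      for j in range(2)] for i in range(2)]   # B = I + A/k
for _ in range(20):                       # square 20 times: B becomes (I + A/k)^k
    B = matmul(B, B)

E = [[math.cos(1.0), math.sin(1.0)],      # exact e^A: rotation by 1 radian
     [-math.sin(1.0), math.cos(1.0)]]
err = max(abs(B[i][j] - E[i][j]) for i in range(2) for j in range(2))
assert err < 1e-4                          # error shrinks like 1/k
print("max entrywise error:", err)
```

The error shrinks like 1/k, consistent with (I + A/k)^k = e^A (I + O(1/k)).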
Lecture 18: Displacement Current and Synchronous Motors
Today, I'm going to take a critical look at Ampere's Law.
I'm going to run a current through a wire, as we did before, but now I'm going to also put a capacitor in that line and so we are charging a capacitor.
Here is that capacitor.
And here is the wire.
We are running a current I.
And as we are running this current, clearly, we get a changing electric field inside the capacitor.
The electric field inside the capacitor is sigma free divided by kappa epsilon 0, which is also Q free divided by the area.
This is a circular plate capacitor.
Capital R is the radius of this capacitor, so we get Q free divided by pi R squared kappa epsilon 0.
But since I run a current, the Q free is building up all the time, and the current per definition is dQ/dt, and so I now have a changing electric field inside, dE/dt, which is the current I divided by pi R squared kappa epsilon 0, because I simply take the derivative of this equation: I get dQ/dt, and dQ/dt is I.
And only if the current is 0 is there no changing electric field inside.
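To get a feel for the size of this rate, here is a quick numerical sketch in Python; the 1 ampere current and 5 centimeter plate radius are illustrative values, not from the lecture:

```python
import math

eps0 = 8.854e-12                 # permittivity of free space, F/m
I, R, kappa = 1.0, 0.05, 1.0     # illustrative: 1 A, 5 cm plates, vacuum

# dE/dt = I / (pi R^2 kappa eps0), as derived above
dEdt = I / (math.pi * R ** 2 * kappa * eps0)
print(f"dE/dt = {dEdt:.2e} V/(m s)")   # an enormous rate, about 1.4e13
```

Even a modest charging current produces a huge rate of change of E, because epsilon 0 is so small.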
So how does this affect the magnetic field?
Well, if I take here a point P1 at a distance little r from the wire, if you're far away from this capacitor it's hard to believe that Ampere's Law would not give the right answer.
And we will apply that very shortly, Ampere's Law.
It's on the blackboard there.
Suppose you are at the same distance from this line here at point P2.
Well, yeah, you've got to admit there's an interruption of current now.
There is no current going through this space and so you expect that the magnetic field here would be a little lower perhaps than it is here.
But not very much.
So the question is, how can we now calculate the magnetic field here and there, now that we have this opening in the wire.
Well, Biot-Savart could handle it but I wouldn't know how to do it because if there's a current flowing like this there's also a current going up on these plates, and one like so, and I wouldn't know
how to apply Biot-Savart.
In principle, yeah, but in practice, no.
How about Ampere's Law?
Well, let's give Ampere's Law a shot.
This is a cylindrical symmetric problem, so I choose a closed loop, which of course itself is a circle with radius R, and I apply -- I attach to this closed loop an open surface.
That's mandatory.
And I give myself an easy time, I make it a flat surface.
So now I apply Ampere's Law.
You see it there on the blackboard.
Anywhere on that closed loop, the magnetic field will have the same strength, for reasons of symmetry, and so we get B times 2 pi r equals mu 0 times I pen, and pen means the current that penetrates
my open surface.
Well, that's I.
I goes right through that surface.
And so the magnetic field at that point, P1, mu 0 times I divided by 2 pi r.
We've seen this several times before.
Now I wonder about P2.
Can I apply Ampere's Law for point P2?
Well, yeah, you can try.
So now I attach a closed loop to this point.
Circle again, radius little r, and I use this flat surface and I apply Ampere's Law.
Well, I'm in for a shock, because B times 2 pi r is not changing but there is no current that penetrates that surface.
And so I is 0, and so I have to conclude that the magnetic field at point P2 is 0 which is absurd.
Couldn't be.
I can make the situation even worse.
I'm going to revisit point P1, and here is my capacitor, and here is my point P1.
My current is flowing like so.
Here's my closed loop.
According to Ampere's Law, the closed loop integral B dot dL.
Why should I choose a flat surface?
I'm entitled to any surface! I like surfaces like this.
They are attached to a closed loop, so I will choose that kind of a surface.
The surface now goes like so.
[whistle] Right through the capacitor plates, and I apply Ampere's Law, it's open here.
B times 2 pi r, the radius is little r.
Mu 0 times I, but there is no I going through that surface.
Nowhere through this surface is a current poking, because there is no current going between the capacitor plates, so now I have to conclude that the magnetic field at P1, which we first concluded was
this, is now also 0.
So something stinks.
So Ampere's Law is inadequate.
And so of course, Faraday and Ampere were both perfectly aware of this.
But yet it was Maxwell who zeroed in on this and he argued that any open surface that you attach to a closed loop should give you exactly the same result, same answer.
And so he suggested that we amend Ampere's Law, and so he asked himself the question, what is so special about in-between the capacitor plates?
Well, what is special there is in-between the capacitor plates there is a changing electric field.
And Maxwell reasoned, gee, Faraday's Law tells me that a changing magnetic flux gives rise to an electric field, so he says maybe a changing electric flux gives rise to a magnetic field.
And I want to remind you what an electric flux is.
Phi of E is the integral.
In this case it would be over an open surface, of E dot dA.
That is an electric flux.
With Gauss's Law that you see on the blackboard there, we had a closed surface.
I'm talking now about an open surface.
That is an open surface.
This is an open surface, and this is an open surface.
And so Maxwell suggested that we have to add a term which contains the derivative of the electric flux.
And that's what I'm going to do there now, walking over to Ampere's Law.
I'm going to amend it in a way that Maxwell suggested.
He adds a term here: epsilon 0 kappa, d/dt, of the integral over an open surface attached to that closed loop of E dot dA.
This current, which is the one that penetrates, remember, through the surface is really a real current.
This term here, Maxwell called the "displacement current."
I want to make sure that I have no slip of the pen, because I hate slips of the pen.
That is correct.
I have everything in place.
You may think now that we can start a party because all four Maxwell's equations are now in place.
Not quite.
We're going to make one small adjustment after spring break, and that adjustment is going to be made in this one, and then we'll have our party.
So now, I would like to use the new law and see whether we can clean up that mess.
So I'm going to revisit my point P1 and I'm going to apply the new law by first having a flat surface, that surface that we have here, and then trying this surface.
And I want to get the same answer.
If I use that surface, do we agree that there is a current going through that surface but there is no electric flux going through that surface?
So that second term, that displacement current term, is 0 for that flat surface.
So this answer is completely valid.
But now, I want to pursue this case.
And so I'll make a new drawing.
We have here this point, P1.
This is my radius little r.
This is my surface going right through here.
Here's the current I, and here is my changing electric field.
And so I get B times 2 pi r, mu 0.
I pen is 0.
There is no current penetrating through this bag.
This is open here.
So the first part is 0.
So I only deal with the second part, which is epsilon 0 kappa, the displacement current term.
And now I have to put in there d phi E/dt.
Phi E is very easy to calculate because E and dA right here- think of this part being flat.
Wherever you were inside the capacitor, if we assume that there are no fringe fields, then there is an electric field only where you're inside the capacitor, and so the electric flux is simply E
times the surface area, E and dA are in the same direction, so it is the electric field times this pi capital R squared.
And therefore, if I want to know what the derivative is, then I get this pi R squared which is that surface area and now I need there dE/dt.
And the dE/dt we have: that is I divided by pi R squared kappa epsilon 0.
So this is the area A, and this is dE/dt, and this is the area of the part inside the capacitor that has a flux going through it, cause outside here there is no flux going through there, so there's
no contribution.
There's no contribution here, either.
There's no contribution here, either.
The electric field is only existent there.
That's my assumption.
And so the whole thing here is now d phi E/dt.
Well, let's look at our results.
I lose a pi, I lose my R squared, I lose my kappa and epsilon 0, and look what I get.
I get mu 0 times I.
That is truly amazing, so I find now that B equals mu 0 times I divided by 2 pi little r, which is exactly what we had before.
Hooray for Mr. Maxwell, because now it doesn't matter anymore whether you take the flat surface or whether you take the bag surface.
You now get the same answer.
In one case, there is no contribution from the displacement current term, and in the other case there is no contribution from the first term, the real current.
Let me make sure whether I'm happy with my results.
Yup, I think that's fine.
We can now also go one step further, and we can calculate anywhere in between the capacitor what the magnetic field is.
I'll make another drawing of the capacitor.
It's right here.
I have this E field.
I will not repeat that every time.
And here I have my point P2 now, which is inside the capacitor at a radius little r from the center.
And this is capital R, circular plates.
I have here my closed loop.
It's a circle, radius little r.
I apply now the new law, B times 2 pi r.
And there we go.
We have a mu 0, we have an epsilon 0, we have a kappa.
There's no current going through here, so it's non-negotiable.
I pen is 0, right?
That's not even an issue.
So we get mu 0, epsilon 0.
I get kappa.
But now, the surface area right here is not pi capital R squared, where there isn't changing electric field, but it is only pi little r squared.
I take a flat surface now.
And so now we multiply this by pi little r squared, and then of course dE/dt is the same, so we get our current I divided by pi capital R squared, divided by kappa epsilon 0.
And now you're again losing a lot.
You lose your epsilon 0, you lose your kappa.
I lose a pi.
And look, I have a little r here and I have a little r squared there.
So now we're going to get a result which is something that you may actually have anticipated, namely that you get a- a field inside the capacitor that is growing with r, because if I make up my
balance I get upstairs mu 0 times I but I get one little r upstairs.
You see, you have r squared here and you have r here.
And downstairs I get 2 pi and then I get a capital R squared, and I believe that's correct.
Let me check my notes.
And yes, I'm happy with that.
And this is proportional with little r, whereas here falling off 1/R.
And so I can now make a plot of the magnetic field as a function of little r when I'm inside the capacitor plates.
Little r, these are the magnetic fields, and this is the radius of the capacitor plate.
It's going to be a straight line up to that point, and then it will fall off as 1 / little r and you can do your own work on that, that it's very trivial to calculate, to demonstrate that when you go
beyond the edge of the capacitor that then it falls off as 1/r, just in the same way that point P1 is doing.
So now we have a tool to calculate the magnetic field even inside capacitors while we were charging, which we didn't have before.
The strength here, the maximum magnetic field here, you'll find by substituting in there for little r capital R.
And when you do that, if this becomes capital R, this becomes 1/capital R.
If you had substituted in here for little r capital R, you would've found the same result.
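The linear rise inside the plates and the 1/r falloff outside join continuously at r = R, which a short sketch can confirm; the current and plate radius here are illustrative values, not from the lecture:

```python
import math

mu0 = 4 * math.pi * 1e-7         # permeability of free space, T m/A
I, R = 1.0, 0.05                 # illustrative: 1 A charging plates of radius 5 cm

def B(r):
    # Inside the plates only the fraction r^2/R^2 of the displacement
    # current is enclosed by the loop; outside, all of it is.
    if r < R:
        return mu0 * I * r / (2 * math.pi * R ** 2)
    return mu0 * I / (2 * math.pi * r)

eps = 1e-9
peak = mu0 * I / (2 * math.pi * R)
assert abs(B(R - eps) - peak) < 1e-10 and abs(B(R + eps) - peak) < 1e-10
print(f"B joins continuously at r = R; peak value {peak:.3e} T")
```

As in the lecture, this ignores fringe fields, so the behavior very near the edge should not be taken literally.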
This part is not kosher.
It cannot be correct, and I cannot make it right for you.
And the reason why that part can not be kosher is because we have made the assumption, which is wrong, that there is no fringe field.
And so we have assumed in our calculations that the electric field is only here and there, but it's 0 here, so that there is nothing: no dE/dt here, no changing electric flux.
And that's not true.
So clearly, when you get close to the edge, this is not correct.
And there's no way I can correct for that, because the fringe fields will be different from capacitor to capacitor and those calculations of course are not even very easy to make.
But Maxwell had introduced his displacement current term.
He was a very smart man.
He predicted that as a consequence of that term that radio waves should exist.
There was a time that we didn't know that radio waves existed.
He predicted their existence, but not only did he predict their existence, he even was able to calculate what their speed was going to be.
We call that the speed of light.
And we will do that in a few weeks ourselves in 8.02.
In 1879, which was the year that Maxwell died, the German physicist Helmholtz asked one of his students, Hertz (he was 22 years old at the time, a junior), to try to demonstrate that radio waves indeed exist.
Hertz declined, because he argued that the equipment that was available at the time was not good enough.
But 7 years later when new equipment had been developed, he accepted the challenge and it took him 2 years, but then he indeed was able to demonstrate that radio waves do exist.
Imagine what a victory that was! Someone like Maxwell, who predicts out of nothing that radio waves should exist, and here comes someone who actually shows that they do exist.
Hertz died 5 years after his great experiments.
He was 37 years old.
He was very young.
Had he lived 10 more years, there is no doubt in my mind that he would've been awarded with the Nobel Prize for physics, but the first Nobel prize was only given in 1901, so he died just a little bit
too early.
Maxwell also died very young, age 48.
Why did Maxwell call that strange term displacement current?
In the presence of a dielectric, if you put a dielectric in there, the changing electric field will indeed cause a current in between the plates, because the polarization will change all the time.
You get a re-eras- re-arrangement of these induced charges, so there is indeed a current, but in vacuum there shouldn't be any current.
Any electric field changing or not changing will not cause a current in vacuum.
But Maxwell believed that vacuum in a way behaves like any other dielectric, just a special dielectric, happens to be a dielectric with kappa equals 1.
And so he really believed that there was an actual current going between the plates, even though we now know of course that that is not the case.
So the name displacement current was perhaps not a very lucky one, but the term is a must, and it completes the theory of electricity and magnetism.
The name is obviously of no consequence.
After all, Shakespeare said it himself, in Romeo and Juliet, what's in a name?
Remember, what's in a name?
That which we call a rose by any other name would smell as sweet.
Those were the words by Shakespeare.
I will abandon for now the displacement current, but we will revisit it later when we will deal with radio waves and with the propagation of electromagnetic radiation, and I will return now to good
old Faraday, and I will return to electric generators that run our economy.
We've discussed this at length, and I want to revisit that to you- with you.
Remember that if you rotate conducting loops in magnetic fields.
that you create induced EMFs, currents, and that keeps our economy going.
Here is again one of those loops.
Conducting wire, and I don't care about the direction of the magnetic field.
If you want it this way, that's fine.
What matters is that we're going to rotate it about this axis, and as we rotate it about this axis we're going to get an induced EMF.
And that induced EMF, which we derived I think it was last lecture, as a function of time, will be a sinusoidal or a cosinusoidal curve, and therefore, will look something like this.
I call this loop number one, and so this is the EMF produced by loop number one.
But now I'm going to add two more loops which are not electrically connected, physically separate.
If you look from this direction you will see the following.
This would be your loop number one, because you're looking in this direction and you would only see the conducting wire like so.
I have now a second one which is rotated 120 degrees, and so in this picture you will see it like so.
Loop number two, and this is 120 degrees.
Physically 120 degree rotated.
And then I have a third one which is again 120 degrees rotated.
It is like so.
And so this angle here is also 120 degrees and so this angle is 120 degrees.
And this is my loop number three.
And so each one of those will give an EMF that has this shape but they are offset now in phase by 120 degrees.
And so they're all rotating in exactly the same way, like so, and so the second one will give me an EMF if I try to estimate that roughly, something like this, so this is loop number two.
It comes a little later in time and number three will again be offset, will look like this.
Number three.
And we call this a three-phase current.
And a three phase current can produce a rotating magnetic field.
We will make one for you but I'll first explain to you how that works.
So if the frequency of number one is 60 Hertz, then the frequency of number two is also 60 Hertz, and number three is also 60 Hertz, but they're just offset in terms of the phase angle.
Suppose you're looking down onto a horizontal table, so this is a horizontal table.
And I have here a solenoid.
This is one and the same solenoid.
When the current runs clockwise here, it will also run clockwise there.
But it's open here.
Going to put something in there.
We call this number one.
Then I have another one which is rotated, physically rotated 120 degrees.
It's here.
Also coils.
And I'm going to feed current number two through those coils later.
This is number two.
And I have a third one and I'm going to run current number three through those.
So here are coils, here are coils, this is number three.
And so one sees current number one, two sees current number two, and three sees current number three.
At the moment that the current through number one reaches a maximum, the currents in two and three are down by a factor of two.
You can check that [break in tape].
During my lecture I went a little bit too fast over the part that is coming up now, so I'm going to redo it a little slower to make it more clear.
When the current through loop one reaches a maximum, let's say then that the magnetic field due to loop one is in this direction.
The current through this one and the current through this loop are two times smaller.
You may want to check that.
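That check takes one line of arithmetic: with phases 120 degrees apart, at the instant current one peaks the other two are at cos(±120°) = -1/2, i.e. down by a factor of two (and reversed in sign). A quick numerical sketch, with amplitudes normalized to 1:

```python
import numpy as np

# Phase angles of the three currents, 120 degrees apart.
phases = np.deg2rad([0, 120, 240])

# Instant at which current number one reaches its maximum (phase angle 0).
currents = np.cos(0.0 - phases)
print(currents)   # -> [ 1.  -0.5 -0.5]
```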
But it just so happens that the vectorial sum of the magnetic field produced by loop number two and by loop number three also happens to be in this direction, so the net magnetic field is in this direction.
Let's now look one third of a period later in time.
Now, the current in loop number two reaches a maximum, so its magnetic field is now in this direction, and you guessed it of course, that it just so happens that the vectorial sum of the magnetic
field produced by the other two loops is now also in this direction.
And if we now look again one third of a period later, when the current through loop number three reaches a maximum, then the magnetic fields will be in this direction and the vectorial sum of the
magnetic field of the other two loops will also be in this direction.
And look now what has happened.
In one complete period, the magnetic field started out like this.
One third of a period later it was like this, and one third of a period later it was like this.
So what we have created is now a rotating magnetic field, and it rotates in one period all the way around, 360 degrees.
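The vector addition just described is easy to verify numerically. In this sketch the coordinates and the unit field amplitude per coil are my own choices: the coil axes sit 120 degrees apart in the plane, each driven by a current 120 degrees out of phase with its neighbors. The net field keeps a constant magnitude (3/2 of one coil's peak field) and its direction advances 120 degrees every third of a period:

```python
import numpy as np

f = 60.0                                  # line frequency, Hz
angles = np.deg2rad([0, 120, 240])        # coil axes, 120 degrees apart
axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def net_field(t):
    """Sum of the three coil fields, each along its own axis and
    proportional to its instantaneous (phase-shifted) current."""
    currents = np.cos(2.0 * np.pi * f * t - angles)
    return currents @ axes

B0 = net_field(0.0)
B1 = net_field(1.0 / (3.0 * f))           # one third of a period later
print(np.linalg.norm(B0), np.linalg.norm(B1))   # both ~1.5
turn = np.degrees(np.arctan2(B1[1], B1[0]) - np.arctan2(B0[1], B0[0]))
print(turn % 360)                         # ~120 degrees
```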
OK, let's now go back to my original lecture.
[break in tape] rotates once around in the period of your alternating current.
If that is a 60 Hertz current, then it will rotate around with 60 Hertz, 60 times per second.
And so if we put a magnet in here, then this magnet will want to go around -- wants to follow this rotating magnetic field.
And we call that a synchronous motor.
So the rotor of such a synchronous motor itself would be a magnet and it would rotate around with the frequency of your alternating current.
But you need a three-phase current for that so that the magnetic field rotates.
I can also place in here -- this is again a horizontal surface, you have it here, you're going to do it right here.
Here is that -- here are those crazy loops with the three-phase current.
We can also put in here a conducting sphere, or in my case I will use a conducting egg.
And when the magnetic field rotates around, there's a continuous magnetic flux change through the surface of that conducting sphere or egg, and so it's going to run eddy currents.
Now, if you have an eddy current going around, and you have a magnetic field, then the magnetic field acting on the eddy current will cause a torque on the current.
In a similar way, when we discussed earlier the idea of a motor that you were going to build, there was a magnetic field and there was a current and that caused a torque, in the same way you get a
torque on the eddy current and so it starts to torque up this conducting sphere.
And all the time there will be eddy currents because the magnetic field keeps going around, and so you're going to get a torque which will always be in the same direction, and this conducting object will now start to rotate, and we call that an induction motor.
Induction motors have no brushes.
How fast the induction motor will go depends on the conducting object.
If it is a sphere, it will probably come very close to 60 Hertz because a sphere has many possibilities for eddy currents to run around, whereas if you take a ring, and we will try that, if you try
to spin a ring in a rotating magnetic field then of course the various paths that are available for eddy currents are very limited and only go around in the ring.
Many of the stationary tools that you find in people's workshops and basements are induction motors.
Table saws and drill presses, and also electric grass mowers, are induction motors.
I want to demonstrate now to you what three-phase currents can do, and the first thing I'm going to do is show you that unit that we have here, which are the coils that I described,
through which we are going to run the three-phase current.
Must get my lights right.
There you see it.
Coils are wound in a very strange way.
After class, you can come a little closer, and so in here you would have a rotating magnetic field.
We use 60 Hertz, which rotates around 60 times per second.
And the first thing I'm going to do is something wild, a little bit in style, I suppose.
I will put on top there a cardboard cover with little magnets.
They're randomly oriented.
There's no way that they can ever rotate.
They are these little magnets, flat, a lot of friction.
If I expose them to a rotating magnetic field, then they will go nuts.
I told you, they were going nuts.
You're not supposed to show this to students, but OK.
Now here I have a conducting egg.
And this now has ample possibilities for eddy currents to flow, and so now if I give these coils the right current, a three-phase current, it spins up: the magnetic field acts on the eddy currents, and it starts to spin, and I have actually tried to measure the rotation rate.
It's only a little bit under 3600 RPM.
3600 RPM would be 60 Hertz.
It's very close to that.
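The arithmetic behind those numbers: at 60 Hz the field makes 60 × 60 = 3600 revolutions per minute, and the egg's lag behind that is what motor engineers call slip. A sketch; the measured RPM below is an illustrative placeholder, not a value from the lecture:

```python
f = 60.0                       # line frequency, Hz
sync_rpm = f * 60.0            # synchronous speed: 3600 RPM
measured_rpm = 3550.0          # placeholder for "a little bit under 3600"
slip = (sync_rpm - measured_rpm) / sync_rpm
print(sync_rpm, f"slip = {slip:.2%}")   # 3600.0 slip = 1.39%
```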
If I spin this object in the direction in which the magnetic field is not rotating, then it says sorry, no way.
It just reverses, because the magnetic field is going to slave it in the direction that it wants it to go.
So we're looking there at an induction motor.
I have here a ring.
Now a ring doesn't have as many possibilities for eddy currents to run.
Could only go around like this, or like this, right?
But I can still make it spin probably if I do the right thing.
First of all, I have to rotate it in the right direction, which would be this one, I think.
There it goes.
It doesn't go anywhere nearly as fast as the egg because of the restrictive paths of the eddy currents.
But it rotates.
It's trying to follow that magnetic field to the best it can, but its abilities are very limited.
And needless to say, if I try to spin it in the wrong direction, that of course it will stop and it has no way like the egg to reverse its direction because of its peculiar geometry.
I owe you an explanation to the secret top.
If you haven't found one yourself yet.
Let me first come to a simple conclusion, and all of you must have come to that conclusion.
That top, when I showed it to you during my exam review was spinning for more than an hour.
In fact, it was spinning the next day.
Energy has to come from somewhere, and so the only conclusion that you could've drawn that the energy came from inside the box, there must be something inside the box.
Clearly there must be a battery in that box, and there is.
But that doesn't tell you how it works yet.
And I can assure you, I can admit that it took me quite a while before I fully understood how it works.
And I want to explain that to you, and then I will demonstrate it to you again.
Remember what it looks like.
In the top itself is a magnet.
So here is the top.
Let's say this is north and this is south and this is that top.
We're rotating it.
We're spinning it.
Inside the box, right at the center of the box, is a solenoid, a switch and a 9-volt battery.
Inside this solenoid is also a little bit of iron.
We have not discussed that in our course.
It's not important for the explanation.
You'll later understand why there is also some iron in here.
It makes the magnetic field so it's a little stronger.
This is right at the center of the little platform on which I was running this.
It is a concave platform, and here is a little plastic knob so when the top hits it, it bounces off.
Imagine for now that it's rotating in such a way that the North Pole is approaching that solenoid, coming in from above.
What's going to happen now: in this solenoid, in this coil, you are changing the magnetic flux through the surface of this coil, and so you're inducing an EMF, an induced current.
And this current is sensed by a transistor which I have not put in here, and the transistor throws this switch and now sucks energy out of the battery and runs current, very high current through this
coil, such that the top becomes the south pole.
Remember if you have a coil, and you run current through here, that you get a magnetic field like so.
In this case, this would be north pole and this would be south pole.
If you reverse the current, then this is south and this is north.
That magnetic field that comes out there is fanning out in all directions, three-dimensionally.
It's going like this and it's fanning out like this.
So when it approaches this coil, there is a change in magnetic flux.
And this becomes a south pole.
The north pole is being attracted by the south pole.
I make you look at this from above now.
Here is the top seen from above, so it's spinning in this direction.
And let's say here is this coil.
This is the north pole and this is the south pole.
And I just discussed with you that the current that's going to flow from the battery will make this a south pole.
The current can only flow in that coil in one direction.
That's just the way it's designed.
So whenever the current flows, it always makes this a south pole.
The north pole is being attracted by the south pole.
So notice it's going to be torqued up.
So far, so good.
A little later in time, looking from above, the north pole will be here and the south pole will be there, it has rotated a little bit further, and the coil is here.
So now the north pole is leaving.
It's receding.
It's not approaching, it's receding.
If this were to remain a south pole, that would be disastrous because south poles and north poles would attract each other.
You don't want that.
Well, the transistor senses that the EMF in the coil reverses direction.
It has to reverse direction, because if the north pole comes in, the EMF is in one direction but when the north pole leaves the EMF of course goes in the other direction.
You've got a reversal.
And so what the transistor does, it opens the switch, and so there is no north pole here and there is no south pole here, and so the thing starts to go around further.
What happens now when the south pole approaches that solenoid?
OK, here is the situation that the south pole is now approaching.
It's rotating in this direction.
And here is the coil.
South pole is coming in.
From Faraday's point of view, there is no difference between the north pole receding or the south pole approaching.
You should be able to reason that for yourself.
That's exactly the same thing, and so the transistor knows that indeed the current is still in the wrong direction.
It keeps the switch open.
It does nothing, so as the south pole approaches no current through this coil.
Because remember, the current can only go in one direction, can only make this a south pole.
It could have been designed in such a way that the current could go in both directions.
It would have made it more expensive.
Just a matter of economy.
So nothing will happen here.
But now, there comes a time that the south pole is leaving, is receding, and so let us have here our coil.
A south pole receding is exactly the same for Mr. Faraday as a north pole approaching.
And so now the EMF is in the same direction in the coil as it was here.
And so now the transistor says yippee, that's fine.
I close the switch and I'm going to run a current and so this becomes the south pole.
South pole and south pole repel each other, so now the thing gets a kick again.
So when the north pole approaches, it pulls on the north pole, and when the south pole recedes, it pushes on the south pole.
And so it is an induction motor which is powered during only half of each full rotation, but a very, very clever design.
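The switching rule just walked through can be summarized as a tiny truth table. This is a sketch of the logic only (the function and labels are mine, not from the actual circuit): the transistor closes the switch, making the coil a south pole, exactly when the induced EMF has the sign produced by an approaching north pole or a receding south pole:

```python
def coil_energized(pole, motion):
    """True when the one-way transistor circuit drives the coil.

    pole   : which pole of the top's magnet faces the coil ('N' or 'S')
    motion : 'approaching' or 'receding'
    """
    return (pole == 'N' and motion == 'approaching') or \
           (pole == 'S' and motion == 'receding')

for pole in ('N', 'S'):
    for motion in ('approaching', 'receding'):
        state = 'kick (coil becomes a south pole)' if coil_energized(pole, motion) else 'switch open'
        print(pole, motion, '->', state)
```

Two of the four quarter-phases give a kick (pull on the approaching north pole, push on the receding south pole), which is why the top is driven during only half of each rotation.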
And I'm going to demonstrate it to you again.
I want you to see that as this top approaches the center, you can actually see it start spinning up.
It really gets its energy when it gets close to this coil.
It also works, doesn't matter in which direction you spin it.
That's also great.
You can spin it clockwise or counter-clockwise.
Makes no difference.
And so, you're going to see that top here, I think.
There it is.
And this is probably the best way for you to see it.
We have here this box in which there is this very simple circuit, very simple, and here is the top.
Has little bar magnet in it.
I will try to spin it just a little bit, to make you see that it actually spins up.
It may not be so easy for you to see, but, ah, you see it's really spinning up now.
And then it loses rotation rate to friction.
It always comes back to the center because the surface is concave, and then it gets to the center there and gets its kicks again, gets spun up.
And this can go on for as long as your battery lasts, which is about a few days.
I can also rotate it counterclockwise.
It's not so easy, it's funny.
Have you ever tried to rotate a top counter-clockwise?
It's very hard! I don't know why that is; maybe it's because I'm right-handed.
I'll try.
You see, I failed.
Ah, this is a nice one.
I gave it a teeny-wee little spin.
It doesn't like it.
It stays near the center.
Oh, doesn't like that.
It has to be a little bit away from the center.
Ah, I think, I think I've got it now.
Oh, no.
Oh, no.
Too optimistic.
Isn't it strange that it's hard for a person to rotate something counterclockwise?
OK, I did it.
Slowly coming in, to friction now.
And when it gets close to the center, there it gets its kicks.
Now it's spinning up.
Ah, you can really see it now.
It's being spun up.
You see that?
Really spun up.
I have another fantastic toy for you, which is also an induction motor.
And that one you're going to see there, first want to explain to you how that one works.
That's a real beauty.
It is an induction motor that runs on a two-phase current.
Here we have a solenoid.
It's this, this baby here, big one.
And we're going to run 60 Hertz AC current through there, and here we have one, also going to run 60 Hertz AC through there.
Here's the coil.
This one is easy to show.
The other one, very heavy.
These two currents are 90 degrees out of phase.
Not 120, but 90.
So it's a two-phase current.
So when the current is maximum through this one, it is 0 through this one.
When it is maximum through this one, it's 0 through this one.
Let's look at the moment that the current is maximum through this one.
And let's say the magnetic field is in this direction, which is created by this coil.
Then there is no current here.
But one quarter of a period later, this one has no current but this one does.
Let's assume that now the magnetic field from this one is in this direction.
And so what you're going to see now is that again, you have a rotating magnetic field which goes like so.
And if I put in here a can, a paint can, we use a coffee can, it's right here, and at the surface of that can, so I will draw the can here, but it's really there.
At the surface of that can, you're going to have eddy currents, because you have change of magnetic flux all the time and these eddy currents with the magnetic field will cause a torque on this can.
And the torque will always be in the same direction and the thing is going to rotate.
So it's another example of an induction motor.
It's a very cute one.
And I want to show it to you and for that you're going to see it there.
Needless to say that the can is a very special can, Maxwell House coffee.
Of course.
And you see that?
Maxwell House.
All right, so let's see whether we can get this to run.
There it goes.
Nice example of an induction motor, and now I'm going to test you.
I'm going to ask you what happens if I take this coil here, which I can do, and flip over from here to here.
Who thinks that nothing will happen?
And you're all afraid of me now, right?
Who thinks that it will come to a grinding halt?
Grinding halt.
Who thinks that the direction of the motor will reverse?
Good for you.
It's clear that when I flip this one over, you can easily go through that for yourself, that this magnetic field is then not rotating this way, but is rotating that way, and so clearly the motor will
reverse direction.
I'm going to do that now.
Watch it.
So it's torquing now in the opposite direction.
It's coming to a grinding halt.
You were right.
The other people were also right, because now it's reversing direction.
And you see there it goes.
Another striking example of an induction motor.
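The two-phase field demonstrated above, including the reversal when one coil is flipped over, can be checked with the same kind of vector sum. The coordinates and unit amplitudes here are my own sketch: coil one lies along x with current cos(ωt), coil two along y with a current 90 degrees behind it; flipping coil two just negates its contribution:

```python
import numpy as np

f = 60.0   # line frequency, Hz

def net_field(t, flipped=False):
    """Net field of two coils 90 degrees apart in space and in phase.

    flipped=True models turning the second coil over, which negates
    its contribution and reverses the rotation sense.
    """
    w = 2.0 * np.pi * f * t
    sign = -1.0 if flipped else 1.0
    return np.array([np.cos(w), sign * np.sin(w)])

B0 = net_field(0.0)                              # ~[1, 0]
B1 = net_field(1.0 / (4.0 * f))                  # ~[0, 1]: advanced 90 degrees
B1r = net_field(1.0 / (4.0 * f), flipped=True)   # ~[0, -1]: rotation reversed
print(B0, B1, B1r)
```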
Next lecture, Friday, I'm going to elevate a woman.
Asymptotic Performance of
- IEEE TRANS. INFORM. THEORY , 1998
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and
analog-to-digital conversion was first recognized during the early development of pulsecode modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett
published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which
would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its
origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
- IEEE Trans. on Information Theory , 1982
and their duals a very fast algorithm is given for finding the closest lattice point to an arbitrary point. If these lattices are used for vector quantizing of uniformly distributed data, the
algorithm finds the minimum distortion lattice point. If the lattices are used as codes for a Gaussian channel, the algorithm performs maximum likelihood decoding.
, 1993
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly
optimizes distortion errors and the codebook complexity, thereby, determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference
vectors, their positions and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic
quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray
level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained
vector quantizati...
, 1997
New results are proved on the convergence of the Shannon lower bound (SLB) to the rate distortion function as the distortion decreases to zero. The key convergence result is proved using a
fundamental property of informational divergence. As a corollary, it is shown that the SLB is asymptotically tight for norm-based distortions, when the source vector has a finite differential entropy
and a finite αth moment for some α > 0, with respect to the given norm. Moreover, we derive a theorem of Linkov on the asymptotic tightness of the SLB for general difference distortion measures
with more relaxed conditions on the source density. We also show that the SLB relative to a stationary source and single letter difference distortion is asymptotically tight under very weak
assumptions on the source distribution. Key words: rate distortion theory, Shannon lower bound, difference distortion measures, stationary sources T. Linder is with the Coordinated Science
Laboratory, University of Illinoi...
- IEEE Trans. Inform. Theory , 2006
Abstract—This paper investigates quantization methods for feeding back the channel information through a low-rate feedback channel in the context of multiple-input single-output (MISO) systems. We
propose a new quantizer design criterion for capacity maximization and develop the corresponding iterative vector quantization (VQ) design algorithm. The criterion is based on maximizing the
mean-squared weighted inner product (MSwIP) between the optimum and the quantized beamforming vector. The performance of systems with quantized beamforming is analyzed for the independent fading
case. This requires finding the density of the squared inner product between the optimum and the quantized beamforming vector, which is obtained by considering a simple approximation of the
quantization cell. The approximate density function is used to lower-bound the capacity loss due to quantization, the outage probability, and the bit error probability. The resulting expressions
provide insight into the dependence of the performance of transmit beamforming MISO systems on the number of transmit antennas and feedback rate. Computer simulations support the analytical results
and indicate that the lower bounds are quite tight. Index Terms—Bit error probability, channel capacity, channel state information, multiple antennas, transmit beamforming, outage probability, vector
quantization (VQ). I.
- IEEE Trans. Inform. Theory , 1995
This paper extends Bennett's integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of
points, point density function, inertial profile and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location.
The extension is formulated in terms of a sequence of quantizers whose point density and inertial profile approach known functions as the number of points increases. Precise conditions are given for
the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to
quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is
shown how the loss...
- in Proc. IEEE Int. Symp. on Information Theory, p. 55 , 1997
Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki and Gray generalized the results to
distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference
between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures, a class of distortion measure often
used for perceptually meaningful distortion. The generalization involves a more rigorous derivation of a fixed rate result of Gardner and Rao and a new result for variable rate codes. We also
consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of
distortion increase in dB is shown...
- IEEE Trans. Inform. Theory , 1999
Entropy-coded vector quantization is studied using high-resolution multidimensional companding over a class of nondifference distortion measures. For distortion measures which are "locally quadratic"
a rigorous derivation of the asymptotic distortion and entropy-coded rate of multidimensional companders is given along with conditions for the optimal choice of the compressor function. This optimum
compressor, when it exists, depends on the distortion measure but not on the source distribution. The rate-distortion performance of the companding scheme is studied using a recently obtained
asymptotic expression for the rate-distortion function which parallels the Shannon lower bound for difference distortion measures. It is proved that the high-resolution performance of the scheme is
arbitrarily close to the rate-distortion limit for large quantizer dimensions if the compressor function and the lattice quantizer used in the companding scheme are optimal, extending an analogous
statement for...
High-rate (or asymptotic) quantization theory has found formulas for the average squared length (more generally, the q-th moment of the length) of the error produced by various scalar and vector
quantizers with many quantization points. In contrast, this paper finds an asymptotic formula for the probability density of the length of the error and, in certain special cases, for the probability
density of the multidimensional error vector, itself. The latter can be used to analyze the distortion of two-stage vector quantization. The former permits one to learn about the point density and
cell shapes of a quantizer from a histogram of quantization error lengths. Histograms of the error lengths in simulations agree well with the derived formulas. Also presented are a number of
properties of the error density, including the relationship between the error density, the point density and the cell shapes, the fact that its qth moment equals Bennett's integral (a formula for the
average distortio...
, 2004
explore dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion
measure. Such distortion side information is not only useful at the encoder but under certain conditions knowing it at the encoder is optimal and knowing it at the decoder is useless. Thus distortion
side information is a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In
addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on
graphs is also of independent interest since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side
information correlated with the source, we consider fixed lag side information at the decoder. We focus on the special case of perfect side information with unit lag corresponding to source coding
with feedforward (the dual
n dimensional extension of Mandelbrot set, based on the SVD
So yesterday I tried out something I wondered for quite a while...
I took the Singular Value Decomposition of an arbitrary 2D vector M={{x},{y}}, squared the results in a nearly trivial manner and multiplied it all together.
What I discovered was exactly what I hoped for. Writing the SVD as M = U*Sigma*V^T and squaring each factor gives
M' = U^2 * Sigma^2 * (V^T)^2
where U and V in general are rotation-style matrices and Sigma is the matrix of singular values; it behaves a lot like a radius.
In this special case, V = 1 and Sigma = {{sqrt(x^2+y^2)},{0}}.
Squaring a square matrix, which both U and V are, is always defined. Sigma will always be as close to a diagonal matrix as possible, so squaring it is also defined: simply square all the entries.
Well, doing so for said 2D vector above actually gives {{x²-y²},{2xy}}, which corresponds perfectly to squaring a complex number.
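This is easy to check numerically. Here's a minimal sketch (assuming NumPy; `svd_square` is just an illustrative name): take the SVD of the column vector, force U to be a proper rotation (det = +1) by flipping one of the spare basis columns, square each factor, and multiply them back together.

```python
import numpy as np

def svd_square(v):
    """Square a column vector via its SVD: M = U S Vt -> U^2 S^2 Vt^2,
    with S squared entry-wise and U forced to be a proper rotation."""
    M = np.asarray(v, dtype=float).reshape(-1, 1)
    U, s, Vt = np.linalg.svd(M, full_matrices=True)
    if np.linalg.det(U) < 0:
        U[:, -1] *= -1  # flip a column Sigma doesn't touch; U @ S @ Vt is unchanged
    S2 = np.zeros_like(M)
    S2[:len(s), 0] = s ** 2  # "square all the entries" of Sigma
    return ((U @ U) @ S2 @ (Vt @ Vt)).ravel()

print(svd_square([3.0, 4.0]))
```

For (x, y) = (3, 4) this returns (-7, 24), i.e. (x² − y², 2xy), the square of the complex number 3 + 4i — and the det(U) fix makes the result independent of whatever sign conventions the SVD routine happens to pick.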
Now, what would stop anybody from doing the very same thing for a 3D vector?
Nothing, luckily.
So I have a more or less natural extension of the Mset to 3D, or to nD in general.
What I got for 3D after SVD, squaring and simplifying is a closed-form expression in x, y and z (posted as an image).
Now just add the constants a,b,c and you should, in theory, get an Mset as close as possible to a "true" 3D version.
At least as far as vector operations go.
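Even without the closed form, the 3D map can be iterated numerically: square the vector via its SVD, then add the constant vector c = (a, b, c). Below is a hedged sketch of an escape-time test (NumPy; `svd_square` and `sv3d_escape_time` are illustrative names). One caveat: for a 3-vector the two basis directions orthogonal to z are not uniquely determined, so different LAPACK builds may realize slightly different rotations — though the radius |z|² is the same either way.

```python
import numpy as np

def svd_square(v):
    """M = U S Vt -> U^2 S^2 Vt^2, with S squared entry-wise and U a proper rotation."""
    M = np.asarray(v, dtype=float).reshape(-1, 1)
    U, s, Vt = np.linalg.svd(M, full_matrices=True)
    if np.linalg.det(U) < 0:
        U[:, -1] *= -1  # keeps U @ S @ Vt unchanged, since Sigma's last row is zero
    S2 = np.zeros_like(M)
    S2[:len(s), 0] = s ** 2
    return ((U @ U) @ S2 @ (Vt @ Vt)).ravel()

def sv3d_escape_time(c, max_iter=50, bailout=2.0):
    """Count iterations of z -> svd_square(z) + c before |z| exceeds the bailout."""
    z = np.zeros(3)
    for n in range(max_iter):
        z = svd_square(z) + np.asarray(c, dtype=float)
        if np.linalg.norm(z) > bailout:
            return n
    return max_iter  # never escaped: treated as inside the set
```

A point like c = (0.1, 0, 0) never escapes, while c = (3, 0, 0) escapes on the first iteration; rendering an xy-slice then just means evaluating this over a grid of (a, b, 0) points.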
The Singular Value Decomposition is, so to speak, the "ideal" decomposition of any matrix, where all the information can be found: rotation, scaling, back-rotation.
Using it should make it possible for a wide range of problems to be "properly" extended, as far as such an extension even exists.
I already experimented a bit with this SV3D MSet and did an xy-slice buddhabrot.
Evident from that is that this is NOT an extension in the sense of simply adding a dimension to the Mset: the slice has the same basic body-vs.-head structure, but besides that doesn't look anywhere close to the basic Mset usually seen.
It's rather a port to 3D, based on directly performing a "double angle" operation in a 3D-vector sense.
I'm not sure exactly how this angular conversion works in this case.
I'll soon post the xy plane and after that start to render the xz plane. What I can see so far is that it is symmetric about the y-axis...
I'm also curious what happens when looking at the full 3D object... How much whipped cream vs how much "actual detail"...
By now I'm kinda certain that the "true" nD Mset with n>2 will always include whipped cream. I guess it's partly due to the whole thing being a kind of rotation, so I wouldn't be surprised if the above transform results in that. Still, I'd love to see the results and how it works together with other variants - the various hybrids look awesome, and adding another "template" to play with and cross-section against other sets could give new interesting patterns...
xy plane (edited and updated)
xz plane