content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Homework Help
Posted by Cathy on Sunday, June 23, 2013 at 6:19am.
You are given vectors A = 5.0i - 6.5j & B = -3.5i + 7.0j. A third vector C lies on the xy-plane. Vector C is perpendicular to vector A, & the scalar product of C with B is 15.0. From this
information, find the components of vector C.
• Physics - MathMate, Sunday, June 23, 2013 at 6:35am
If A is perpendicular to C, then A.C=0, or
C=[6.5k, 5k]
(verify that A.C=0)
If in addition, C.B=15, we have
But C.B=15, therefore 12.25k=15, =>
=[390/49, 300/49]
Check my arithmetic and that the given conditions are satisfied.
• Physics - Cathy, Sunday, June 23, 2013 at 7:41am
• Physics :) - MathMate, Sunday, June 23, 2013 at 8:14am
you're welcome!
Related Questions
physics - a 2.0 kg particle is given a displacement (delta)r= 2.0i+3.0j (m). ...
Science - We have a problem in AP Physics that we just can't get. It has to do ...
vectors - Given the vectors v=-2i+5j and w=3i+4i, determine (do I just plug what...
Physics/Math - Two vectors are given by a = 3.3i + 6.5j, and b = 2.5i + 4.9j. ...
math - The vectors u, v are given by u = 3i + 5j, v = i - 2j. Find scalars a, b ...
math - The vectors u, v are given by u = 3i + 5j, v = i - 2j. Find scalars a, b ...
science - if u have a negative vector component do u leave it as it is or do u ...
vectors - The position vectors, r km, and the velocity vectors, v km/h, of two ...
physics multiplying vectors - n the product F = qv · B , take q = 4, = 2.0i + 4....
Math - The position vectors, r km, and velocity vectors, vkm/h, of two boats at ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1371982769","timestamp":"2014-04-19T22:48:04Z","content_type":null,"content_length":"9074","record_id":"<urn:uuid:bee057d8-f422-4d2d-aa43-ab124794352f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
|
12.2 Lyapunov Equations
The function LyapunovSolve attempts to find the solution Lyapunov equation
a special class of linear matrix equations that occurs in many branches of control theory, such as stability analysis, optimal control, and response of a linear system to white noise (see, e.g.,
Brogan (1991)). Typically,
involves only matrices
For discrete-time systems, the discrete Lyapunov equation,
arises. The solution can be found with DiscreteLyapunovSolve.
Functions for solving Lyapunov equations.
Here is some matrix a.
Here is matrix c.
This solves the discrete Lyapunov equation and simplifies the result.
We may verify that the solution indeed satisfies the discrete Lyapunov equation.
Consider now an example of design of a feedback controller that uses Lyapunov's method to approximate the minimum-time response system for a given system (see Brogan (1991), Section 10.8). The method
is applicable to systems stable at least in Lyapunov's sense. To find the Lyapunov function
These are matrices
This solves the Lyapunov equation. The matrix
This computes the control law.
Despite the fact that our system is linear, the minimum time response control is not.
This plot shows the first control signal as a function of the state variables.
This is the second component of the control signal.
LyapunovSolve and DiscreteLyapunovSolve use the direct method (via the built-in function Solve) or the eigenvalue decomposition method. The method is set using the option SolveMethod, which
correspondingly accepts the values DirectSolve and Eigendecomposition. If this option's value is Automatic, these methods are tried in turn until one succeeds or all are tried.
Option specific to Lyapunov equation solvers.
|
{"url":"http://reference.wolfram.com/legacy/applications/control/FunctionIndex/DirectSolve.html","timestamp":"2014-04-18T20:56:38Z","content_type":null,"content_length":"38073","record_id":"<urn:uuid:2177df39-872c-400f-80f9-a93696c4477b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
An SQP algorithm for extended linear-quadratic problems in stochastic programming.
(English) Zbl 0835.90058
Summary: Extended Linear-Quadratic Programming (ELQP) problems were introduced by Rockafellar and Wets for various models in stochastic programming and multistage optimization. Several numerical
methods with linear convergence rates have been developed for solving fully quadratic ELQP problems, where the primal and dual coefficient matrices are positive definite. We present a two-stage
sequential quadratic programming (SQP) method for solving ELQP problems arising in stochastic programming. The first stage algorithm realizes global convergence and the second stage algorithm
realizes superlinear local convergence under a condition called $B$-regularity. $B$-regularity is milder than the fully quadratic condition; the primal coefficient matrix need not be positive
definite. Numerical tests are given to demonstrate the efficiency of the algorithm. Solution properties of the ELQP problem under $B$-regularity are also discussed.
90C15 Stochastic programming
90C20 Quadratic programming
[1] J.R. Birge and R.J-B. Wets, Designing approximation schemes for stochastic optimization problems, in particular, for stochastic programs with recourse, Math. Prog. Study 27 (1986) 54–102.
[2] X. Chen, L. Qi and R.S. Womersley, Newton’s method for quadratic stochastic programs with recourse, J. Comp. Appl. Math. 60, to appear.
[3] X. Chen and R.S. Womersley, A parallel inexact Newton method for stochastic programs with recourse, Ann. Oper. Res., to appear.
[4] F.H. Clarke,Optimization and Nonsmooth Analysis (Wiley, New York, 1983).
[5] R. Fletcher,Practical Methods of Optimization, 2nd ed. (Wiley, 1987).
[6] A. Genz and Z. Lin, Fast Givens goes slow in MATLAB, ACM Signum Newsletter 26/2 (April 1991) 11–16.
[7] P. Kall, A. Ruszczyński and K. Frauendorfer, Approximation techniques in stochastic programming, in:Numerical Techniques for Stochastic Programming, eds. Y. Ermoliev and R.J-B. Wets (Springer,
Berlin, 1988) pp. 33–64.
[8] A. King, An implementation of the Lagrangian finite generation method, in:Numerical Techniques for Stochastic Programming, eds. Y. Ermoliev and R.J-B. Wets (Springer, Berlin, 1988) pp. 295–312.
[9] J.M. Mulvey and A. Ruszczyński, A new scenario decomposition for large-scale stochastic optimization, Technical Report SOR-91-19, Princeton University, Princeton, NJ, USA (revised 1992).
[10] J.S. Pang, S.P. Han and N. Rangaraj, Minimization of locally Lipschitzian functions, SIAM J. Optim. 1 (1991) 57–82. · Zbl 0752.90070 · doi:10.1137/0801006
[11] J.S. Pang and L. Qi, Nonsmooth equations: Motivations and algorithms, SIAM J. Optim. 3 (1993) 443–465. · Zbl 0784.90082 · doi:10.1137/0803021
[12] J.S. Pang and L. Qi, A globally convergent Newton method for convexSC 1 minimization problems, J. Optim. Theory Appl., to appear.
[13] L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations, Math. Oper. Res. 18 (1993) 227–244. · Zbl 0776.65037 · doi:10.1287/moor.18.1.227
[14] L. Qi, Superlinearly convergent approximate Newton methods forLC 1 optimization problems, Math. Progr. 64 (1994) 277–294. · Zbl 0820.90102 · doi:10.1007/BF01582577
[15] R.T. Rockafellar, Linear-quadratic programming and optimal control, SIAM J. Contr. Optim. 25 (1987) 781–814. · Zbl 0617.49010 · doi:10.1137/0325045
[16] R.T. Rockafellar, Computational schemes for solving large-scale problems in extended linear-quadratic programming, Math. Progr. 48 (1990) 447–474. · Zbl 0735.90050 · doi:10.1007/BF01582268
[17] R.T. Rockafellar, Large-scale extended linear-quadratic programming and multistage optimization, in:Proc. 5th Mexico-US Workshop on Numerical Analysis, eds. S. Gomez, J.P. Hennart and R. Tapia
(SIAM, Philadelphia, 1990).
[18] R.T. Rockafellar and R.J-B. Wets, A dual solution procedure for quadratic stochastic programs with simple recourse, in:Numerical Methods, Lecture Notes in Mathematics 1005, ed. A. Reinoza
(Springer, Berlin, 1983) pp. 252–265.
[19] R.T. Rockafellar and R.J-B. Wets, A Lagrangian finite-generation technique for solving linear-quadratic problems in stochastic programming, Math. Prog. Study 28 (1986) 63–93.
[20] R.T. Rockafellar and R.J-B. Wets, Linear-quadratic problems with stochastic penalties: the finite generation algorithm, in:Stochastic Optimization, Lecture Notes in Control and Information
Sciences 81, eds. V.I. Arkin, A. Shiraev and R.J-B. Wets (Springer, Berlin, 1987) pp. 545–560.
[21] R.T. Rockafellar and R.J-B. Wets, Generalized linear-quadratic problems of deterministic and stochastic optimal control in discrete time, SIAM J. Contr. Optim. 28 (1990) 810–822. · Zbl
0714.49036 · doi:10.1137/0328046
[22] A. Ruszczyński, A regularized decomposition method for minimizing a sum of polyhedral functions, Math. Progr. 35 (1986) 309–333. · Zbl 0599.90103 · doi:10.1007/BF01580883
[23] R.J-B. Wets, Stochastic programming: Solution techniques and approximation schemes, in:Mathematical Programming: The State of the Art – Bonn 1982, eds. A. Bachem, M. Grötschel and B. Kort
(Springer, Berlin, 1983) pp. 566–603.
[24] R.J-B. Wets, Stochastic programming, in:Handbook in Operations Research and Management Science, Vol. 1:Optimization, eds. G.L. Nemhauser, A.H.G. Rinnooy Kan and M.J. Todd (North-Holland,
Amsterdam, 1989) pp. 573–630.
[25] C. Zhu and R.T. Rockafellar, Primal-dual projected gradient algorithms for extended linear-quadratic programming, SIAM J. Optim. 3 (1993) 751–783. · Zbl 0788.65069 · doi:10.1137/0803039
|
{"url":"http://zbmath.org/?q=an:0835.90058","timestamp":"2014-04-20T01:03:22Z","content_type":null,"content_length":"28159","record_id":"<urn:uuid:663450c0-d53e-48f7-9181-5bf8908de916>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [Bug-gnubg] (OT) Position ID documentation
[Top][All Lists]
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: [Bug-gnubg] (OT) Position ID documentation
From: Massimiliano Maini
Subject: Re: [Bug-gnubg] (OT) Position ID documentation
Date: Mon, 31 May 2010 17:46:40 +0200
I've some python that computes the number of positions where both
players occupy no more than X
regular points (board points, 1..24).
The computation is a sophistication of W.Trice's one: you assume that
black occupies m regular points
(m being of course between 0 and you limit X) and you compute number
of positions for black checkers
in the same way as Walter. Then for white it's more complicate: you
can't use just D(26-m,15) as you
want to limit the number of occupied points, so you just do the same
reasoning you've just done for
black with the additional note that if black occupies m points, white
can't occupy more than the min
between X (your imposed limit), 15 (max # of checkers) and 24-m
(vacant points).
It turns out (if my computations are OK) that limiting black and white
to be holding no more than 11
regular points each, then the number of points is already less than 2^64.
Python output here below, code further below. I'm ashamed about the
naive implementation of Cnm,
but hey, it works and takes no time, screw efficiency :)
C:\Documents and Settings\mmaini\Desktop\BG
posID>d:\python26\python.exe bgposid.py
TOT1: 18528584051601162496,1.852858e+19
TOT2: 18528584051601162496,1.852858e+19
False False
import math;
def Cnm(n,m):
def Dnm(n,m):
return Cnm(long(n+m-1),long(m))
def BlackOnM(m):
# Black occupies m regular points (board points, 0..24).
T1 = Cnm(24,m)
# The remaining 15-m black checkers are distributed over
# the occupied m points + the bar and the off (m+2 points).
T2 = Dnm(m+2,15-m)
# The 15 whie checkers are distributed over the 24-m
# available regular points, plus the bar and the off (26-m).
T3 = Dnm(26-m,15)
return long(T1*T2*T3)
def BlackOnM_WithLim(m,lim):
# Black occupies m regular points (board points, 0..24).
T1 = Cnm(24,m)
# The remaining 15-m black checkers are distributed over
# the occupied m points + the bar and the off (m+2 points).
T2 = Dnm(m+2,15-m)
# White can occupy a number of points that is the minimum
# between 15 (number of checkers), 24-m (points left vacant
# by black) and lim (self-imposed limit).
T3 = 0
for p in range(0,min(lim,15,24-m)+1):
# White occupies p regular points out of the available 24-m.
T31 = Cnm(24-m,p)
# The remaining 15-p white checkers are distributed over
# the occupied p points + the bar and the off (m+2 points).
T32 = Dnm(p+2,15-p)
T3 += T31*T32
return long(T1*T2*T3)
def TotPos():
Tot = 0
for M in range(0,16):
Bm = BlackOnM(M)
Tot += Bm
return Tot
def TotPos_WithLim(lim):
Tot = 0
for M in range(0,lim+1):
BmL = BlackOnM_WithLim(M,lim)
Tot += BmL
return Tot
# Sanity check :)
TOT1 = TotPos()
TOT2 = TotPos_WithLim(15)
print 'TOT1: %d,%e' % (TOT1,TOT1)
print 'TOT2: %d,%e' % (TOT2,TOT2)
print '%d,%e' % (TOT1-TOT2,TOT1-TOT2)
print TOT1<(pow(2,64)), TOT2<(pow(2,64))
# Iterate on Lim.
print 'Lim,#Pos,in64'
for lim in range(0,16):
NP = TotPos_WithLim(lim)
print '%d,%d,%d' % (lim,NP,NP<pow(2,64))
[Prev in Thread] Current Thread [Next in Thread]
|
{"url":"http://lists.gnu.org/archive/html/bug-gnubg/2010-05/msg00033.html","timestamp":"2014-04-20T17:00:34Z","content_type":null,"content_length":"10009","record_id":"<urn:uuid:9453b765-90a1-41da-8cb8-23fb1ce1edea>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Problem
October 12th 2008, 03:16 PM
Homework Problem
1.How would you differentiate (6^x)log_4 (x)? Note that the underscore here indicates subscript here.
2. Solve for f'(x) given that :
Please help! I have homework due!
Thanks a lot.
October 12th 2008, 03:36 PM
Chris L T521
You will need to use the product rule here.
But recall that:
$\frac{d}{\,dx}\left[a^x\right]=a^x\ln a$
$\frac{d}{\,dx}\left[\log_a x\right]=\frac{1}{x\ln a}$
Can you take this one from here?
October 12th 2008, 04:15 PM
Thanks,So this means..
The answer is
((6^x)/(xln4))+ log_4(x)((6^x)(ln6))
I assume this is right, since I have used the product rule as well as the formulae. But I keep getting an "Incorrect" on my webwork..
|
{"url":"http://mathhelpforum.com/calculus/53311-homework-problem-print.html","timestamp":"2014-04-19T04:40:15Z","content_type":null,"content_length":"5194","record_id":"<urn:uuid:bc36cb3f-00e8-4d54-a8b5-375a79885048>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Growing a hyperdodecahedron
skip to main content
Growing a hyperdodecahedron
August 2, 2011 10:44 AM Subscribe
This short computer graphics animation presents the regular 120-cell: a four dimensional polytope composed of 120 dodecahedra and also known as the hyperdodecahedron or hecatonicosachoron.
Gian Marco Tedesco's animation was part of the
MathFilm Festival 2008
, which also included
- if you like it, you'll probably like
Hitoshi Akayama's other animations
- and the chilling documentary
Attack of the Note Sheep
posted by Wolfdog (29 comments total) 19 users marked this as a favorite
You remember that time you fell down the stairs for no reason at all? That's because one of our outer-dimension overlords rolled a one on that thing.
(Also: Holy shit that was done entirely in POV-Ray.)
posted by griphus at 10:51 AM on August 2, 2011 [7 favorites]
(Also: Holy shit that was done entirely in POV-Ray.)
Why are you surprised? It's complex geometry, but it's not very fancy rendering.
posted by anigbrowl at 11:03 AM on August 2, 2011
That was awesome. Now is someone can just explain tesseracts in a way that makes sense in my brain...
posted by Navelgazer at 11:05 AM on August 2, 2011
The last time I toyed with it was almost fifteen years ago; I'm just surprised it's still around.
posted by griphus at 11:05 AM on August 2, 2011
Wish there was room in my brain for a fourth dimension. I want to get it. Neat animation but I don't. Everything after 2 dimensions was simply 3-dimensional.
posted by rahnefan at 11:10 AM on August 2, 2011
I am no math whiz. I don't get how this is "four dimensional."
Isn't it just a three-dimensional object made of three-dimensional objects? Is calling it four-dimensional just a way to handwave the obvious fact that it's not actually "regular"?
posted by Sys Rq at 11:13 AM on August 2, 2011
The last time I toyed with it was almost fifteen years ago; I'm just surprised it's still around.
Oh, I see! It's very well established as a backend, still. If you use something like Blender or one of the other 3d modeling platforms, I think you can just point it at a Pov-Ray installation and
have it run pretty much seamlessly. Last time I did any 3d was a couple of years ago, but the workflow is a snap nowadays.
posted by anigbrowl at 11:17 AM on August 2, 2011
Sys Rq
: "
Isn't it just a three-dimensional object made of three-dimensional objects? Is calling it four-dimensional just a way to handwave the obvious fact that it's not actually "regular"?
The problem is that you can't really show us a four-dimensional object. You can only show us its three-dimensional slices (and those are represented as two spatial dimensions and time, i.e.
animation). In the same way, a two-dimensional being wouldn't really be able to comprehend a sphere. All it could see is a series of circles.
/Flatlandposted by Plutor at 11:22 AM on August 2, 2011 [1 favorite]
> Is calling it four-dimensional just a way to handwave the obvious fact that it's not actually "regular"?
I think the 3D object we see is something like the regular 4D object's "shadow", and it's deformed in the same way that the
shadow of the single dodecahedron
posted by lucidium at 11:33 AM on August 2, 2011
The distortion is sort of analogous to drawing a "cube" in two dimensions like the picture below:
|\ /|
| \ / |
| \---/ |
| | | |
| | | |
| | | |
| /---\ |
| / \ |
|/ \|
(There are similar drawings of a dodecahedron in two dimensions, which of course come out distorted, but we've reached the limits of my ASCII art skills.)
posted by madcaptenor at 11:52 AM on August 2, 2011 [3 favorites]
lucidium has it. it could have been explained better, but the reason he first shows the 2D shadow of the 3D dodecahedron is that, while we are able to see 3D, if we couldn't, we could still see a 2D
shadow of a 3D object. We can't see 4D, as it happens, but since we can see 3D we can "glimpse" 4D to an extant by seeing a 3D shadow of a 4D object. Which is basically what he shows.
posted by gilrain at 11:55 AM on August 2, 2011
Note that someone who had only ever seen 2D would similarly say of the shadow of the dodecahedron, "how is that different from a lot of 2D shapes morphing in weird ways. Understanding what causes the
strangeness of the lower-dimensional shadow is what allows us to partially conceptualize what higher-dimensional object must be causing it.
posted by gilrain at 11:57 AM on August 2, 2011
I wish they could make MIDI in 4-D. (Still, this was fun to watch.)
posted by not_on_display at 12:05 PM on August 2, 2011
That was awesome. Now is someone can just explain tesseracts in a way that makes sense in my brain...
A tesseract is the three dimensional shape that is created when you unfold a four-dimensional hypercube; in the same way that
is the two-dimensional shape created when you unfold a three-dimensional cube.
posted by alby at 12:06 PM on August 2, 2011
That was awesome. Now is someone can just explain tesseracts in a way that makes sense in my brain...
Madeleine L'Engle has got you
posted by vverse23 at 12:36 PM on August 2, 2011
That was neat. I love higher-dimensional space.
Dimensions math
, a set of (multilingual!) videos that eases you into 4-dimensional spaces, and from there to fractals and other pretty pictures. If you liked the main video you'll like this.
Visualizing 4 dimensions using color
; maybe more intuitive (with a nice picture of a Klein bottle at the end, showing how it really isn't self-intersecting!)
A patent on a 4-dimensional UI
4-dimensional Rubik's cube
5-dimensional Rubik's cube
Explore the 120-cell
, a 4-dimensional space shooter. Surprisingly playable.
posted by BungaDunga at 12:39 PM on August 2, 2011 [1 favorite]
More amazing to me is the fact that there is no such thing as a "five-dimensional dodecahedron". This one only goes up to four.
posted by erniepan at 12:42 PM on August 2, 2011
objects that exist for longer than a nano-instant "four-dimensional", really ?
posted by genghis at 12:44 PM on August 2, 2011
objects that exist for longer than a nano-instant "four-dimensional", really ?
We are talking here about four spatial dimensions, not three dimensions plus time.
posted by gilrain at 12:48 PM on August 2, 2011
Huh. I guess maybe I just don't see the point of >3-dimesional "objects." Is there one? It all just sounds like mathematical drug-talk to me.
posted by Sys Rq at 2:14 PM on August 2, 2011
For those saying that this just looks like a 3D object, I think you may be overlooking the parts where the object sort of turns inside out and "engulfs" the viewer's point-of-view. At one point we're
outside of it, and at another, we seem to be on the inside. You can roughly think of that as the effect of rotating/twisting the object through the 4th spatial dimension.
If you're having trouble visualizing what the whole 4D object "looks like," that's because, strictly speaking, you can't, given that we're (practically speaking) 3-dimension beings inhabiting a
posted by treepour at 2:19 PM on August 2, 2011
Huh. I guess maybe I just don't see the point of >3-dimesional "objects." Is there one? It all just sounds like mathematical drug-talk to me.
They're every bit as real as anything else mathematicians study. Have you ever
the set of positive integers? No, you haven't, and that's about as concrete and well-established as mathematics gets. But if you watched that video, you
seen a 3-dimensional projection of the 120-cell.
posted by baf at 2:39 PM on August 2, 2011 [1 favorite]
Huh. I guess maybe I just don't see the point of >3-dimesional "objects." Is there one?
Nonmathematicians sometimes think that because the
is three-dimensional, our
should only be three-dimensional. But this only makes sense if you insist on using only the most direct possible correspondence between model and thing modelled — a "point" in the model has to
represent a physical location in the world, a "space" in the model has to be a representation of the space we live in, and so on. Other relationships are possible and useful.
Consider, for example, a scatterplot of, I don't know, height versus weight. A point on this plot represents a certain combination of height and weight, which is an entirely abstract thing, not a
physical point at all. A line on this plot represents a certain relation between height and weight, again an abstract thing, not corresponding to a physical line in any way. Of course it is useful to
think of height/weight data arranged in this kind of abstract 2-dimensional "space".
The only difference between this height/weight scatterplot and a 25-dimensional scatterplot is the number of parameters in the data. From this point of view, it is pointless and absurd to impose the
requirement that only 3-dimensional spaces be used.
(I do not claim any such practical application for the object considered in the post.)
(A 3-dimensional projection of the 120-cell is
hanging from the ceiling of the Fields Institute
posted by stebulus at 2:43 PM on August 2, 2011 [5 favorites]
This may be the set of convex figures
the books
were talking about in their math song?
posted by Buckt at 5:05 PM on August 2, 2011
(we don't really see three dimensional either.)
posted by wobh at 7:59 PM on August 3, 2011
(we don't really see three dimensional either.)
It's close enough for government work.
posted by gilrain at 8:56 PM on August 3, 2011
« Older It's the late 1970s, you're in France, and you're ... | Jason van Gumster has been tel... Newer »
This thread has been archived and is closed to new comments
|
{"url":"http://www.metafilter.com/106119/Growing-a-hyperdodecahedron","timestamp":"2014-04-16T11:21:42Z","content_type":null,"content_length":"37938","record_id":"<urn:uuid:d47e2426-17ee-4d1b-8045-3691af34a6a8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
When he goes out to eat, Austin always leaves a tip for his server that is equal to 15% of his bill. Let f = the amount of the bill. Which algebraic expression could be used to calculate Austin's
total cost of eating out at any restaurant?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
if \(f\) is the amount of the bill and he leaves a 15% tip then the total cost would be \[ \large C=f+f\cdot15\%=f\cdot(1+0.15)=1.15f \]
Best Response
You've already chosen the best response.
So f.15 would be the algebraic expression?
Best Response
You've already chosen the best response.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50326567e4b0aa95e34ceba0","timestamp":"2014-04-17T06:55:04Z","content_type":null,"content_length":"32587","record_id":"<urn:uuid:f5b49844-5205-4862-9fff-9b49fad9bde8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of partial differential equation
partial differential equations
) are a type of
differential equation
, i.e., a
involving an unknown
(or functions) of several
independent variables
and its (resp. their)
partial derivatives
with respect to those variables. Partial differential equations are used to formulate, and thus aid the solution of, problems involving functions of several variables; such as the propagation of
fluid flow
. Interestingly, seemingly distinct physical phenomena may have identical mathematical formulations, and thus be governed by the same underlying dynamic.
A relatively simple partial differential equation is
$frac\left\{partial\right\}\left\{partial x\right\}u\left(x,y\right)=0, .$
This relation implies that the values u(x,y) are independent of x. Hence the general solution of this equation is
$u\left(x,y\right) = f\left(y\right),,$
where f is an arbitrary function of y. The analogous ordinary differential equation is
which has the solution
$u\left(x\right) = c,,$
where c is any constant value (independent of x). These two examples illustrate that general solutions of ordinary differential equations involve arbitrary constants, but solutions of partial
differential equations involve arbitrary functions. A solution of a partial differential equation is generally not unique; additional conditions must generally be specified on the boundary of the
region where the solution is defined. For instance, in the simple example above, the function $f\left(y\right)$ can be determined if $u$ is specified on the line $x=0$.
Existence and uniqueness
Although the issue of the existence and uniqueness of solutions of ordinary differential equations has a very satisfactory answer with the
Picard-Lindelöf theorem
, that is far from the case for partial differential equations. There is a general theorem (the
Cauchy-Kovalevskaya theorem
) that states that the
Cauchy problem
for any partial differential equation that is
in the unknown function and its derivatives have a unique analytic solution. Although this result might appear to settle the existence and uniqueness of solutions, there are examples of linear
partial differential equations whose coefficients have derivatives of all orders (which are nevertheless not analytic) but which have no solutions at all: see Lewy (1957). Even if the solution of a
partial differential equation exists and is unique, it may nevertheless have undesirable properties. The mathematical study of these questions is usually in the more powerful context of
weak solutions
An example of pathological behavior is the sequence of Cauchy problems (depending upon n) for the Laplace equation
$frac\left\{part^2 u\right\}\left\{partial x^2\right\} + frac\left\{part^2 u\right\}\left\{partial y^2\right\}=0,,$
with initial conditions
$u\left(x,0\right) = 0, ,$
$frac\left\{partial u\right\}\left\{partial y\right\}\left(x,0\right) = frac\left\{sin n x\right\}\left\{n\right\},,$
where n is an integer. The derivative of u with respect to y approaches 0 uniformly in x as n increases, but the solution is
$u\left(x,y\right) = frac\left\{\left(sinh ny\right)\left(sin nx\right)\right\}\left\{n^2\right\}.,$
This solution approaches infinity if nx is not an integer multiple of π for any non-zero value of y. The Cauchy problem for the Laplace equation is called ill-posed or not well posed, since the
solution does not depend continuously upon the data of the problem. Such ill-posed problems are not usually satisfactory for physical applications.
In PDEs, it is common to denote partial derivatives using subscripts. That is:
$u_x = \left\{partial u over partial x\right\}$
$u_\left\{xy\right\} = \left\{part^2 u over partial y, partial x\right\} = \left\{partial over partial y \right\} left\left(\left\{partial u over partial x\right\}right\right).$
Especially in (mathematical) physics, one often prefers use of del (which in cartesian coordinates is written $nabla=\left(part_x,part_y,part_z\right),$) for spatial derivatives and a dot $dot u,$
for time derivatives, e.g. to write the wave equation (see below) as
$ddot u=c^2triangle u.,$(math notation)
$ddot u=c^2nabla^2u.,$(physics notation)
Heat equation in one space dimension
The equation for conduction of heat in one dimension for a homogeneous body has the form
$u_t = alpha u_\left\{xx\right\} ,$
where u(t,x) is temperature, and α is a positive constant that describes the rate of diffusion. The Cauchy problem for this equation consists in specifying $u\left(0,x\right)= f\left(x\right)$, where
f(x) is an arbitrary function.
General solutions of the heat equation can be found by the method of separation of variables. Some examples appear in the heat equation article. They are examples of Fourier series for periodic f and
Fourier transforms for non-periodic f. Using the Fourier transform, a general solution of the heat equation has the form
$u\left(t,x\right) = frac\left\{1\right\}\left\{sqrt\left\{2pi\right\}\right\} int_\left\{-infty\right\}^\left\{infty\right\} F\left(xi\right) e^\left\{-alpha xi^2 t\right\} e^\left\{i xi x\right
\} dxi, ,$
where F is an arbitrary function. In order to satisfy the initial condition, F is given by the Fourier transform of f, that is
$F\left(xi\right) = frac\left\{1\right\}\left\{sqrt\left\{2pi\right\}\right\} int_\left\{-infty\right\}^\left\{infty\right\} f\left(x\right) e^\left\{-i xi x\right\}, dx. ,$
If f represents a very small but intense source of heat, then the preceding integral can be approximated by the delta distribution, multiplied by the strength of the source. For a source whose
strength is normalized to 1, the result is
$F\left(xi\right) = frac\left\{1\right\}\left\{sqrt\left\{2pi\right\}\right\}, ,$
and the resulting solution of the heat equation is
$u\left(t,x\right) = frac\left\{1\right\}\left\{2pi\right\} int_\left\{-infty\right\}^\left\{infty\right\}e^\left\{-alpha xi^2 t\right\} e^\left\{i xi x\right\} dxi. ,$
This is a Gaussian integral. It may be evaluated to obtain
$u\left(t,x\right) = frac\left\{1\right\}\left\{2sqrt\left\{pi alpha t\right\}\right\} expleft\left(-frac\left\{x^2\right\}\left\{4 alpha t\right\} right\right). ,$
This result corresponds to a normal probability density for x with mean 0 and variance 2αt. The heat equation and similar diffusion equations are useful tools to study random phenomena.
Wave equation in one spatial dimension
wave equation
is an equation for an unknown function
) of the form
$u_\left\{tt\right\} = c^2 u_\left\{xx\right\}. ,$
Here u might describe the displacement of a stretched string from equilibrium, or the difference in air pressure in a tube, or the magnitude of an electromagnetic field in a tube, and c is a number
that corresponds to the velocity of the wave. The Cauchy problem for this equation consists in prescribing the initial displacement and velocity of a string or other medium:
$u\left(0,x\right) = f\left(x\right), ,$
$u_t\left(0,x\right) = g\left(x\right), ,$
where f and g are arbitrary given functions. The solution of this problem is given by d'Alembert's formula:
$u\left(t,x\right) = frac\left\{1\right\}\left\{2\right\} left\left[f\left(x-ct\right) + f\left(x+ct\right)right\right] + frac\left\{1\right\}\left\{2c\right\}int_\left\{x-ct\right\}^\left\{x+ct\
right\} g\left(y\right), dy. ,$
This formula implies that the solution at (t,x) depends only upon the data on the segment of the initial line that is cut out by the characteristic curves
$x - ct = hbox\left\{constant,\right\} quad x + ct = hbox\left\{constant\right\}, ,$
that are drawn backwards from that point. These curves correspond to signals that propagate with velocity c forward and backward. Conversely, the influence of the data at any given point on the
initial line propagates with the finite velocity c: there is no effect outside a triangle through that point whose sides are characteristic curves. This behavior is very different from the solution
for the heat equation, where the effect of a point source appears (with small amplitude) instantaneously at every point in space. The solution given above is also valid if t is negative, and the
explicit formula shows that the solution depends smoothly upon the data: both the forward and backward Cauchy problems for the wave equation are well-posed.
Spherical waves
Spherical waves are waves whose amplitude depends only upon the radial distance
from a central
point source
. For such waves, the three-dimensional wave equation takes the form
$u_\left\{tt\right\} = c^2 left\left[u_\left\{rr\right\} + frac\left\{2\right\}\left\{r\right\} u_r right\right]. ,$
This is equivalent to
$\left(ru\right)_\left\{tt\right\} = c^2 left\left[\left(ru\right)_\left\{rr\right\} right\right],,$
and hence the quantity ru satisfies the one-dimensional wave equation. Therefore a general solution for spherical waves has the form
$u\left(t,r\right) = frac\left\{1\right\}\left\{r\right\} left\left[F\left(r-ct\right) + G\left(r+ct\right) right\right],,$
where F and G are completely arbitrary functions. Radiation from an antenna corresponds to the case where G is identically zero. Thus the wave form transmitted from an antenna has no distortion in
time: the only distorting factor is 1/r. This feature of undistorted propagation of waves is not present if there are two spatial dimensions.
Laplace equation in two dimensions
Laplace equation
for an unknown function of two variables φ has the form
$varphi_\left\{xx\right\} + varphi_\left\{yy\right\} = 0.$
Solutions of Laplace's equation are called harmonic functions.
Connection with holomorphic functions
Solutions of the Laplace equation in two dimensions are intimately connected with analytic functions of a complex variable (a.k.a. holomorphic functions): the real and imaginary parts of any analytic
function are
conjugate harmonic
functions: they both satisfy the Laplace equation, and their gradients are orthogonal. If
, then the
Cauchy-Riemann equations
state that
$u_x = v_y, quad v_x = -u_y,,$
and it follows that
$u_\left\{xx\right\} + u_\left\{yy\right\} = 0, quad v_\left\{xx\right\} + v_\left\{yy\right\}=0. ,$
Conversely, given any harmonic function in two dimensions, it is the real part of an analytic function, at least locally. Details are given in Laplace equation.
A typical boundary value problem
A typical problem for Laplace's equation is to find a solution that satisfies arbitrary values on the boundary of a domain. For example, we may seek a harmonic function that takes on the values
(θ) on a circle of radius one. The solution was given by
$varphi\left(r,theta\right) = frac\left\{1\right\}\left\{2pi\right\} int_0^\left\{2pi\right\} frac\left\{1-r^2\right\}\left\{1 +r^2 -2rcos \left(theta -theta"\right)\right\} u\left(theta\text{'}\
Petrovsky (1967, p. 248) shows how this formula can be obtained by summing a Fourier series for φ. If r<1, the derivatives of φ may be computed by differentiating under the integral sign, and one can
verify that φ is analytic, even if u is continuous but not necessarily differentiable. This behavior is typical for solutions of elliptic partial differential equations: the solutions may be much
more smooth than the boundary data. This is in contrast to solutions of the wave equation, and more general hyperbolic partial differential equations, which typically have no more derivatives than
the data.
Euler-Tricomi equation
Euler-Tricomi equation
is used in the investigation of
u_{xx} , =xu_{yy}.
Advection equation
advection equation
describes the transport of a conserved scalar
in a velocity field
$\left\{bold u\right\}=\left(u,v,w\right)$
. It is:
psi_t+(upsi)_x+(vpsi)_y+(wpsi)_z , =0.
If the velocity field is solenoidal (that is, $nablacdot\left\{bold u\right\}=0$), then the equation may be simplified to
psi_t+upsi_x+vpsi_y+wpsi_z , =0.
The one dimensional steady flow advection equation $psi_t+u.psi_x=0$ (where $u$ is constant) is commonly referred to as the pigpen problem. If $u$ is not constant and equal to $psi$ the equation is
referred to as Burgers' equation.
Ginzburg-Landau equation
Ginzburg-Landau equation
is used in modelling
. It is
iu_t+pu_{xx} +q|u|^2u , =igamma u where
are constants and
is the imaginary unit.
The Dym equation
Dym equation
is named for
Harry Dym
and occurs in the study of
. It is
u_t , = u^3u_{xxx}.
Initial-boundary value problems
Many problems of Mathematical Physics are formulated as initial-boundary value problems.
Vibrating string
If the string is stretched between two points where
=0 and
denotes the amplitude of the displacement of the string, then
satisfies the one-dimensional wave equation in the region where 0<
is unlimited. Since the string is tied down at the ends,
must also satisfy the boundary conditions
$u\left(t,0\right)=0, quad u\left(t,L\right)=0, ,$
as well as the initial conditions
$u\left(0,x\right)=f\left(x\right), quad u_t\left(0,x\right)=g\left(x\right). ,$
The method of separation of variables for the wave equation
$u_\left\{tt\right\} = c^2 u_\left\{xx\right\}, ,$
leads to solutions of the form
$u\left(t,x\right) = T\left(t\right) X\left(x\right),,$
$T+ k^2 c^2 T=0, quad X+ k^2 X=0,,$
where the constant k must be determined. The boundary conditions then imply that X is a multiple of sin kx, and k must have the form
$k= frac\left\{npi\right\}\left\{L\right\}, ,$
where n is an integer. Each term in the sum corresponds to a mode of vibration of the string. The mode with n=1 is called the fundamental mode, and the frequencies of the other modes are all
multiples of this frequency. They form the overtone series of the string, and they are the basis for musical acoustics. The initial conditions may then be satisfied by representing f and g as
infinite sums of these modes. Wind instruments typically correspond to vibrations of an air column with one end open and one end closed. The corresponding boundary conditions are
$X\left(0\right) =0, quad X"\left(L\right) = 0.,$
The method of separation of variables can also be applied in this case, and it leads to a series of odd overtones.
The general problem of this type is solved in Sturm-Liouville theory.
Vibrating membrane
If a membrane is stretched over a curve
that forms the boundary of a domain
in the plane, its vibrations are governed by the wave equation
$frac\left\{1\right\}\left\{c^2\right\} u_\left\{tt\right\} = u_\left\{xx\right\} + u_\left\{yy\right\}, ,$
if t>0 and (x,y) is in D. The boundary condition is $u\left(t,x,y\right) = 0$ if $\left(x,y\right)$ is on $C$. The method of separation of variables leads to the form
$u\left(t,x,y\right) = T\left(t\right) v\left(x,y\right),,$
which in turn must satisfy
$frac\left\{1\right\}\left\{c^2\right\}T +k^2 T=0, ,$
$v_\left\{xx\right\} + v_\left\{yy\right\} + k^2 v =0.,$
The latter equation is called the Helmholtz Equation. The constant k must be determined in order to allow a non-trivial v to satisfy the boundary condition on C. Such values of k^2 are called the
eigenvalues of the Laplacian in D, and the associated solutions are the eigenfunctions of the Laplacian in D. The Sturm-Liouville theory may be extended to this elliptic eigenvalue problem (Jost,
There are no generally applicable methods to solve non-linear PDEs. Still, existence and uniqueness results (such as the Cauchy-Kovalevskaya theorem) are often possible, as are proofs of important
qualitative and quantitative properties of solutions (getting these results is a major part of analysis). Computational solution to the nonlinear PDEs, the Split-step method, exist for specific
equations like nonlinear Schrödinger equation.
Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier-Janet theory is an effective
method for obtaining information about many analytic overdetermined systems.
The method of characteristics (Similarity Transformation method) can be used in some very special cases to solve partial differential equations.
In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis
techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers,
sometimes high performance supercomputers.
Other examples
Schrödinger equation
is a PDE at the heart of non-relativistic
quantum mechanics
. In the
WKB approximation
it is the
Hamilton-Jacobi equation
Except for the Dym equation and the Ginzburg-Landau equation, the above equations are linear in the sense that they can be written in the form Au = f for a given linear operator A and a given
function f. Other important non-linear equations include the Navier-Stokes equations describing the flow of fluids, and Einstein's field equations of general relativity.
Also see the list of non-linear partial differential equations.
Some linear, second-order partial differential equations can be classified as
. Others such as the
Euler-Tricomi equation
have different types in different regions. The classification provides a guide to appropriate initial and boundary conditions, and to smoothness of the solutions.
Equations of first order
Equations of second order
, the general second-order PDE in two independent variables has the form
$Au_\left\{xx\right\} + Bu_\left\{xy\right\} + Cu_\left\{yy\right\} + cdots = 0,$
where the coefficients A, B, C etc. may depend upon x and y. This form is analogous to the equation for a conic section:
$Ax^2 + Bxy + Cy^2 + cdots = 0.$
Just as one classifies conic sections into parabolic, hyperbolic, and elliptic based on the discriminant $B^2 - 4AC$, the same can be done for a second-order PDE at a given point.
1. $B^2 - 4AC , < 0$ : solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of
Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be
approximated with elliptic PDEs, and the Euler-Tricomi equation is elliptic where x<0.
2. $B^2 - 4AC = 0,$ : equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the
transformed time variable increases. The Euler-Tricomi equation has parabolic type on the line where x=0.
3. $B^2 - 4AC , > 0$ : hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds
can be approximated with hyperbolic PDEs, and the Euler-Tricomi equation is hyperbolic where x>0.
If there are n independent variables x[1], x[2 ], ..., x[n], a general linear partial differential equation of second order has the form
$L u =sum_\left\{i=1\right\}^nsum_\left\{j=1\right\}^n a_\left\{i,j\right\} frac\left\{part^2 u\right\}\left\{partial x_i partial x_j\right\} quad hbox\left\{ plus lower order terms\right\} =0.
The classification depends upon the signature of the eigenvalues of the coefficient matrix.
1. Elliptic: The eigenvalues are all positive or all negative.
2. Parabolic : The eigenvalues are all positive or all negative, save one which is zero.
3. Hyperbolic: There is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
4. Ultrahyperbolic: There is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. There is only limited theory for ultrahyperbolic equations
(Courant and Hilbert, 1962).
Systems of first-order equations and characteristic surfaces
The classification of partial differential equations can be extended to systems of first-order equations, where the unknown
is now a vector with
components, and the coefficient matrices
matrices for
$nu=1, dots,n$
. The partial differential equation takes the form
$Lu = sum_\left\{nu=1\right\}^\left\{n\right\} A_nu frac\left\{partial u\right\}\left\{partial x_nu\right\} + B=0, ,$
where the coefficient matrices A[ν] and the vector B may depend upon x and u. If a hypersurface S is given in the implicit form
$varphi\left(x_1, x_2, ldots, x_n\right)=0, ,$
where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes:
$Qleft\left(frac\left\{partvarphi\right\}\left\{partial x_1\right\}, ldots,frac\left\{partvarphi\right\}\left\{partial x_n\right\}right\right) =detleft\left[sum_\left\{nu=1\right\}^nA_nu frac\
left\{partial varphi\right\}\left\{partial x_nu\right\}right\right]=0.,$
The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S, then it may be possible to determine the normal derivative of u on S from the differential
equation. If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. If the data on S and the differential equation do not determine the
normal derivative of u on S, then the surface is characteristic, and the differential equation restricts the data on S: the differential equation is internal to S.
1. A first-order system Lu=0 is elliptic if no surface is characteristic for L: the values of u on S and the differential equation always determine the normal derivative of u on S.
2. A first-order system is hyperbolic at a point if there is a space-like surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar
multiplier λ, the equation
$Q\left(lambda xi + eta\right) =0, ,$
has m real roots λ[1], λ[2], ..., λ[m]. The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form Q
(ζ)=0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has m sheets, and the axis ζ = λ ξ runs inside these sheets: it does not intersect any of
them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.
Equations of mixed type
If a PDE has coefficients which are not constant, it is possible that it will not belong to any of these categories but rather be of
mixed type
. A simple but important example is the Euler-Tricomi equation
u_{xx} , = xu_{yy}
which is called elliptic-hyperbolic because it is elliptic in the region x < 0, hyperbolic in the region x > 0, and degenerate parabolic on the line x = 0.
Methods to solve PDEs
Separation of variables
The method of separation of variables will yield particular solutions of a linear PDE on very simple domains such as rectangles that may satisfy initial or boundary conditions.
Change of variable
Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example the Black–Scholes PDE
$frac\left\{partial V\right\}\left\{partial t\right\} + frac\left\{1\right\}\left\{2\right\}sigma^2 S^2frac\left\{partial^2 V\right\}\left\{partial S^2\right\} + rSfrac\left\{partial V\right\}\
left\{partial S\right\} - rV = 0$
is reducible to the Heat equation
$frac\left\{partial u\right\}\left\{partial tau\right\} = frac\left\{partial^2 u\right\}\left\{partial x^2\right\}$
by the change of variables (for complete details see Solution of the Black Scholes Equation):
$V\left(S,t\right) = K v\left(x,tau\right),$
$x = ln\left(S/K\right),$
$tau = frac\left\{1\right\}\left\{2\right\} sigma^2 \left(T - t\right)$
$v\left(x,tau\right)=exp\left(-alpha x-betatau\right) u\left(x,tau\right).,$
Method of characteristics
Superposition principle
Because any superposition of solutions of a linear PDE is again a solution, the particular solutions may then be combined to obtain more general solutions.
Fourier series
If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite
domains. The solution for a point source for the heat equation given above is an example for use of a Fourier integral.
• R. Courant and D. Hilbert, Methods of Mathematical Physics, vol II. Wiley-Interscience, New York, 1962.
• L.C. Evans, Partial Differential Equations, American Mathematical Society, Providence, 1998. ISBN 0-8218-0772-2
• F. John, Partial Differential Equations, Springer-Verlag, 1982.
• J. Jost, Partial Differential Equations, Springer-Verlag, New York, 2002.
• Hans Lewy (1957) An example of a smooth linear partial differential equation without solution. Annals of Mathematics, 2nd Series, 66(1),155-158.
• I.G. Petrovskii, Partial Differential Equations, W. B. Saunders Co., Philadelphia, 1967.
• A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9
• A. D. Polyanin and V. F. Zaitsev, Handbook of Nonlinear Partial Differential Equations, Chapman & Hall/CRC Press, Boca Raton, 2004. ISBN 1-58488-355-3
• A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
• D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
• Y. Pinchover and J. Rubinstein, An Introduction to Partial Differential Equations, Cambridge University Press, Cambridge, 2005. ISBN 978-0-521-84886-2
External links
|
{"url":"http://www.reference.com/browse/partial+differential+equation","timestamp":"2014-04-21T10:18:26Z","content_type":null,"content_length":"122724","record_id":"<urn:uuid:cd22704f-56f0-4ff3-84f2-9c1ddb937cf2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: VERY BRIEF INTRODUCTION TO
P.M.E.Altham, Statistical Laboratory, University of Cambridge.
February 28, 2005
FIRST CLASS IS MONDAY JAN 24, 2005. YOU NEED TO SIGN UP FOR
ONE OF THE SLOTS 10-11 am, 11-12 noon, 12 noon- 1pm.
Catam users: To start R:
Open a Command Prompt window from the start button
You should now have the special R window on your screen.
Note that R can also be started from the Windows Start Menu
Start->All Programs->PWF Programs->Teaching Packages->Catam->R
demo(graphics) # for a demonstration of R graphics
# this course is not actually about fancy graphics, but this makes
# a colourful way to start
data() # to find out the datasets available in R
x = rnorm(100) # to generate a random sample of 100
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/478/3800945.html","timestamp":"2014-04-20T22:04:46Z","content_type":null,"content_length":"7918","record_id":"<urn:uuid:8e133873-77f3-479e-af0c-5f1fecbc7a9e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the range value
March 27th 2013, 06:05 AM #1
Junior Member
Aug 2012
The Earth
Find the range value
Find the set of value of k, where k is a constant, when equation x^3 -12x^2 +45x -34 = k has (a)one root and (b)three roots.
How should I get start with the problem like this?
Re: Find the range value
Am I correct in assuming you want real roots?
Re: Find the range value
March 27th 2013, 06:15 AM #2
March 27th 2013, 06:31 AM #3
Junior Member
Aug 2012
The Earth
|
{"url":"http://mathhelpforum.com/algebra/215759-find-range-value.html","timestamp":"2014-04-18T03:55:04Z","content_type":null,"content_length":"33399","record_id":"<urn:uuid:f4559fab-e707-45f6-83af-da9055b072e9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] More than a Typo in Friedman's Concept Calculus Paper
Frode Bjørdal frode.bjordal at ifikk.uio.no
Fri Aug 16 15:38:48 EDT 2013
Friedman's typographical adjustment does not rectify the Concept Calculus.
1: The definition Friedman now suggests, y>exPhi iff Forallz(y>z iff
Existsx(Phi and (x=z or z>x))), is not materially adequate as the R.H. may
hold on account of Phi holding for x much smaller than y whereas we would
not want the L.H. to be true in such cases.
2: Moreover, Phi may hold for some x incomparable with y, and so my
objection to the proof of Lemma 8.2 retains its full force.
3: It seems clear that the amendment I suggested is a formalization which
better captures the informal account invoked by Friedman in justifying what
he took to be just a typographical error.
4: However, a more elegant recommendation is that one in defining y>exPhi
includes the additional conjunct y>Phi. It may well be that the Concept
Calculus is rectifiable with the latter amendment of what was taken as a
typographical error, but this should be checked.
Frode Bjørdal
2013/8/15 Harvey Friedman <hmflogic at gmail.com>
> I just saw a typo in the definition of y >ex phi. Change x > z to z > x.
> This is an obvious typo because of the informal definition of "exactly
> better" in the section Better Than, Much Better Than, which reads:
> "We say that x is exactly better than a given range of things if and only
> if x is better than every element of that range of things, and everything
> that something in that range of things is better than, and nothing else."
> Harvey Friedman
> On Mon, Aug 12, 2013 at 2:18 PM, Frode Bjørdal <frode.bjordal at ifikk.uio.no
> > wrote:
>> As I have attempted to come to more clarity on Friedman's Concept
>> Calculus it turns out that Lemma 8.2. of
>> http://www.math.osu.edu/~friedman.8/pdf/ConcCalcInf103109.pdf is false.
>> In defining y>exPhi on page 13 Friedman has Forallz(y>z iff Existsx(Phi
>> and (x=z or x>z))).
>> However, in proving Lemma 8.2 on page 28 it is presupposed that y>exPhi
>> iff Forallz(y>z iff Existsx(Phi and (x=z or y>x>z))).
>> But the proof of Lemma 8.2 cannot be rectified even if one changes the
>> definition og y>exPhi to the one presupposed in the text. For the step
>> indicated by «Suppose Phi. Then x < y.» is a non sequitur, as nothing
>> prevents that Phi holds for an x which is incomparable with y.
>> It may be that a rectification may be had by presupposing the more
>> elaborate definition y>exPhi iff Forallz(y>z iff Existsx(Phi and (x=z or
>> y>x>z)) and not Existsx(Phi and x incomparable with y)).
>> In my previous post Remarks on the Concept Calculus I make a suggestion
>> for domain for MBT which is not quite apt; nevertheless, it may be that a
>> rectified Concept Calculus may be modelled by elaborating appropriately
>> upon my suggestion.
>> Professor Dr. Frode Bjørdal
>> Universitetet i Oslo Universidade Federal do Rio Grande do Norte
>> quicumque vult hinc potest accedere ad paginam virtualem meam<http://www.hf.uio.no/ifikk/personer/vit/fbjordal/index.html>
>> _______________________________________________
>> FOM mailing list
>> FOM at cs.nyu.edu
>> http://www.cs.nyu.edu/mailman/listinfo/fom
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
-------------- next part --------------
An HTML attachment was scrubbed...
URL: </pipermail/fom/attachments/20130816/268613b1/attachment.html>
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-August/017486.html","timestamp":"2014-04-19T17:05:21Z","content_type":null,"content_length":"7520","record_id":"<urn:uuid:c34dcd54-dc9b-4c2b-be5c-8c5dce6880ac>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|
split idempotent
split idempotent
Category theory
Universal constructions
A split idempotent in a category $C$ is a morphism $e: A \to A$ which has a retract, meaning an object $B$ and morphisms $r: A \to B$ and $s: B \to A$ such that $s \circ r = e$ but $r \circ s = 1_B$.
• Any split idempotent is an idempotent, since
$e \circ e = (s \circ r) \circ (s \circ r) = s \circ (r \circ s) \circ r = s \circ 1_B \circ r = s \circ r = e .$
• The splitting of an idempotent $e$ is both the limit and the colimit of the diagram containing only two parallel endomorphisms of $A$, namely $e$ and the identity. Splittings of idempotents are
preserved by any functor, making them absolute (co)limits. In ordinary (i.e. unenriched) categories, every absolute (co)limit can be constructed from split idempotents. Thus, the Cauchy
completion of an ordinary (Set-enriched) category is just its completion under split idempotents.
• A category in which all idempotents split is called idempotent complete. The free completion of a category under split idempotents is also called its Karoubi envelope.
Revised on June 1, 2013 01:09:04 by
Urs Schreiber
|
{"url":"http://www.ncatlab.org/nlab/show/split+idempotent","timestamp":"2014-04-17T21:27:31Z","content_type":null,"content_length":"23159","record_id":"<urn:uuid:8bccb3ed-16f3-4421-ad2f-8624aff6050a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
If you had an army of your clones on each others heads, about how many would you need to reach the moon? Hint: make sure you use the same units when comparing your height to the distance to the moon.
My height is 5'6". The distance between Earth and the Moon is 385,000 kilometers. How would I go about solving this?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
You are 1.6764m tall. I don't know how many sig figs your teacher wants so use however many decimals you want. Convert the Km to m by way of the good ol' KHDMDCM method (Moving the decimal) and
then simply divide. Hope that helped!
Best Response
You've already chosen the best response.
5feet=60 in 60in +6in= 66in 2.54cm/in 66in*2.54cm/in=167.64cm 100cm/m=1.6764meters 385,000,000 m / 1.6764 m=229,658,792.7 clones?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/504fa8f5e4b03b79a332e3a2","timestamp":"2014-04-20T03:27:17Z","content_type":null,"content_length":"30455","record_id":"<urn:uuid:ae7bd10e-98e3-4332-b219-5ab69c88b592>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to "globalize" the inverse function theorem?
up vote 6 down vote favorite
Let $F: V \times W\rightarrow Z$, where $V,W,Z$ are finite-dimensional smooth (or analytic) manifolds and $F$ is smooth (or analytic). Assume that $\dim W=\dim Z$ and the usual inverse function
theorem applies when we fix the first argument of $F$, so locally we have an inverse (w.r.t. the second argument of $F$) mapping $f: V \times Z \rightarrow W$ such that for any $v,w,z$ from suitably
chosen small vicinities we have $F(v,f(v,z))\equiv z$ and $f$ is smooth (or analytic). Here comes the question:
Is there any sensible way to glue together these $f$'s so that we can speak of them globally, and under which conditions?
More precisely, is there a way to treat this situation "as if" $f$ were defined globally, without doing the "restrict ourselves to suitably chosen small vicinities" thing all the time, even if we
allow the Jacobian of $F$ w.r.t. $w$ to vanish at some points or even on some submanifolds of $V\times W$ of nonzero codimension?
I do not quite expect that (under reasonably mild conditions) one can construct a global $f$ defined on the whole $Q=Z\times V/Sing$ where $Sing=\lbrace(v,z)\in V\times Z|f \quad \mbox{can't be
reasonably defined}\rbrace$ but perhaps one can have a sheaf of such $f$'s on $Q$ or something of the sort? I just started with learning the sheaf theory, so I don't quite (yet) know how to make
things work on my own.
While in my particular application I would like $V$ to be a manifold, if this makes things easier, I can quite well make do with $W$ and $Z$ being just open domains in $\mathbb{R}^n$ or $\mathbb{C}^
n$ (or whole $\mathbb{R}^n$ or $\mathbb{C}^n$ for that matter); that's the main reason why I wrote the domain of $F$ as $V\times W$ rather than just some general manifold $M$.
In fact, I am quite convinced that things like that should have been treated in the literature but I wasn't able to google up anything reasonable so far (maybe I just don't know the right keywords),
so pointing out any suitable references is most welcome.
reference-request sheaf-theory ca.analysis-and-odes
2 Have you tried to work this out yourself? The inverse function theorem gives uniqueness as well as existence. What goes wrong when you try to extend the domain of the inverse map? – Deane Yang Nov
28 '10 at 22:21
@Deane: Thanks for your input; however, in the applications I have in mind I must allow the Jacobian of $F$ to vanish at some points (or rather on a submanifold of $V\times W$ of nonzero
codimension in $V\times W$), so I can't genuinely globalize $f$, that's the whole point. – anonymous Nov 29 '10 at 5:41
If the Jacobian vanishes on a submanifold, then that's where all the action is. You have to study the behavior of $F$ on and near the singular submanifold in order to say anything more. There is
1 not going to be a general theorem about this, since the behavior can be quite complicated. But perhaps your situation has some additional conditions (dimensional constraints, for example) that
might reduce the possibilities. – Deane Yang Nov 29 '10 at 16:23
add comment
1 Answer
active oldest votes
There exist several known global implicit function theorems. Those results tend to be tailored for specific applications. It seems to be rather difficult to state a universally useful
one-size-fits-all version.
One result that I find particularly helpful goes back to Hadamard (see, e.g., Chapt. 6 of "The Implicit Function Theorem" by Krantz and Parks):
Theorem. Let $M$ and $N$ be smooth, connected manifolds of dimension $d$ and let $f:M\to N$ be a $C^1$ mapping. If
□ $f$ is proper (i.e. $f^{-1}(K)\subset M$ is compact whenever $K\subset N$ is compact),
up vote 6 □ the Jacobian of $f$ vanishes nowhere on $M$, and
down vote □ $N$ is simply connected,
then $f$ is a homeomorphism.
You might be interested also in this paper by Rheinboldt. It contains some topological conditions on when the local solvability of the equation $$F(x,f(x,z))=z$$ leads to the global
Instead of assuming the target simply connected, you might assume the map has degree one in some sense, e.g. some fiber is one point, or if the manifolds are both compact, the map on top
homology is isomorphic. – roy smith Nov 29 '10 at 0:41
Thank you, Andrey and Roy, that's interesting but it's going in a direction which is different from what I originally had in mind. The problem is that, in the notation you adopted,
requiring the nonvanishing of Jacobian of $F$ everywhere on $M$ is too restrictive for the applications I need; that's why I used "globalize" (with the quotation marks!) in the title and
threw in the remark on sheaves. – anonymous Nov 29 '10 at 4:39
add comment
Not the answer you're looking for? Browse other questions tagged reference-request sheaf-theory ca.analysis-and-odes or ask your own question.
|
{"url":"http://mathoverflow.net/questions/47612/how-to-globalize-the-inverse-function-theorem","timestamp":"2014-04-16T19:44:22Z","content_type":null,"content_length":"59886","record_id":"<urn:uuid:697cd27a-f2ec-45ef-a4a1-5f0b048b37ac>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonlinear Mixed-Effects Modeling of Population Pharmacokinetics Data
Data sets involving nonlinear, sparse grouped data are common in the health sciences, especially in drug trials, where they are used to measure drug absorption, distribution, metabolism, and
elimination. In this approach, patients are grouped using characteristics such as age, sex, weight, and smoking history. Given the expense of drug trials, however, it is not always possible to obtain
sufficient patient data.
Nonlinear mixed-effects (NLME) modeling provides a good solution for modeling sparse datasets. These models account for both fixed effects (population parameters assumed to be constant each time data
is collected) and random effects (sample-dependent random variables). In modeling, random effects act like additional error terms, and their distributions and covariances must be specified.
Mixed-effects models provide a reasonable compromise between ignoring data groupings entirely, thereby losing valuable information, and fitting each group with a separate model, which requires
significantly larger sample sizes.
Using population pharmacokinetics (popPK) data as an example, this article demonstrates a workflow for implementing a nonlinear mixed-effects model using SimBiology™.
The Phenobarbital Case Study^1
This well-known case study involved 59 pre-term infants who were given phenobarbital to prevent seizures during the first 16 days after birth. We will use the data collected during this study to
estimate model parameters to best fit this data. This involves visualizing and preprocessing data, creating a PK model, fitting the model to data, and analyzing the results.
Visualizing and Preprocessing the Data
Data visualization and preprocessing reveals patterns in and distributions of the data. It also lets us deal with outliers and with bad or missing data points. For example, we may want to determine
what type of elimination route phenobarbital has from this infant population, or look at the ranges of concentrations for each time point.
Data can be imported from a number of sources, including text files, Excel files, MAT files, and the MATLAB^® workspace. If a data file has common headers, such as ID or Time, SimBiology
automatically recognizes and stores the headers as the group and independent variables.
To visualize the data, we select a plot type and an x and y variable in the external data panel in SimBiology (Figure 1). Plots are automatically updated as we select different variables or plots,
and can be saved for later reuse. We can call MATLAB functions to create additional plots.
The exclude tab lets us remove outliers or bad data points either by selecting rows in the data table or by specifying rules. For example, to exclude all data pertaining to a specific patient, we can
use a rule to remove all data points associated with that subject. Excluding a data point does not remove it from the dataset permanently, but rather flags the data point to be ignored during
analysis. We can create new columns of data based on existing columns, and perform statistical analyses on the data, such as calculating the mean, area under the curve (AUC), area under the first
moment curve (AUFMC), and mean residence time (MRT).
Creating a PK Model
We begin by defining the core of a PK model, the base model. This often consists of a number of compartments, a dosing type, and an elimination route. We may have a priori knowledge about the base
model, or observe a trend in our data that will suggest where to start with a component. If we have no trends or prior knowledge, we can experiment with different base models and see what works best.
Because the data in our example appears to have a linear elimination route, we’ll use a simple one-compartment model with bolus intravenous dosing and linear clearance elimination.
We create the base model in SimBiology by entering the model components in the Model Wizard (Figure 2). Alternatively, we could create a blank model, import a model from a file, or import a model
from the MATLAB workspace.
Fitting the Model to Data
In this step, we estimate model parameters based on the external data. In our example, we want to estimate the volume of the compartment (Central) and the parameter representing the clearance of the
drug from the compartment (Cl-Central). We will need to calculate random effects for both.
In the Fit Parameters task on the SimBiology desktop (Figure 3), we specify parameters that we want to estimate, parameters for which we want to calculate random effects, and the dataset to fit to.
We also perform dataset mapping to identify group and independent variables and specify the covariance pattern. We click Run to begin the parameter fit.
Analyzing the Results
After fitting our model to the data, we’ll want to determine how well our fitting performed. SimBiology generates two types of outputs: a data panel summarizing the results, and diagnostic plots
specified in the Fit Parameters task. The data panel includes log-transformed fixed estimates for the parameters, a list of the random effects for each parameter for each patient, a summary of
statistics on the fit, and the estimated covariance matrix. We could make this an iterative workflow by examining the goodness-of-fit statistics, such as Akaike and Bayesian information criteria (AIC
and BIC, respectively), from various models and selecting the one that best fits our data set.
The SimBiology desktop offers several prepackaged diagnostic plot types, including trellis plots, which plot both the observed and predicted time courses of drug concentration for each patient. The
plot in Figure 3 shows that the predicted results accurately replicate the observed data for four of the subjects. Other plot types include observed versus predicted concentration values, box plots
of the random effects calculated for each parameter, residual errors over time, and the distribution of residuals.
We can quickly capture our work by automatically creating an HTML or XML report in the SimBiology report generator. The report will be stored as a node in the SimBiology project. A SimBiology project
stores multiple models, datasets, reports, analysis tasks, and all other components used in the workflow in one file, making it easy to manage and organize associated data files, models, and results.
Extending This Approach
We looked at a simple popPK example, but there are many possibilities for further modeling and analysis. For example, we could incorporate parallel computing to increase performance, leverage the
SimBiology command line to include covariates in our model, and incorporate MATLAB code to customize or automate the workflow.
The author would like to thank Priya Moorthy, Sam Roberts, and Sowmini Sampath for their contribution to the example on which this article is based.
^1Grasela TH Jr, Donn SM. "Neonatal population pharmacokinetics of phenobarbital derived from routine clinical data." Dev Pharmacol Ther 1985:8(6). 374-83. PubMed Abstract
|
{"url":"http://www.mathworks.nl/company/newsletters/articles/nonlinear-mixed-effects-modeling-of-population-pharmacokinetics-data.html?nocookie=true","timestamp":"2014-04-23T09:41:04Z","content_type":null,"content_length":"31943","record_id":"<urn:uuid:07eddd79-65fe-4946-a79e-fdc00672d140>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
|
does there exist a family of objects over the tangent space to the base space of a family of objects?
up vote 2 down vote favorite
Suppose we have a family of objects $\Xi \to S$ over a base smooth projective scheme $S$. Take a closed point $p\in S$ and consider the tangent space to $S$ at $p$. Can one construct an "induced
family" over this tangent space (or probably rather over its projectivized space) starting from $ \Xi$ ? what interpretation could one give to this? a second-order deformation?
1 Would you settle for having a family over the tangent cone to $p\in S$? Which one could instead call the normal cone. Then the normal cone to $\Xi_p \subseteq \Xi$ will map to the normal cone to
$p \in S$. I don't have much interpretation for its fibers, though. – Allen Knutson Feb 8 '13 at 12:30
Thank you, Allen. I don't fully understand what you mean by $\Xi_p$, is it the full fiber over $p$, right? – IMeasy Feb 10 '13 at 17:59
Yes, that's what I meant. – Allen Knutson Feb 15 '13 at 4:15
add comment
1 Answer
active oldest votes
Imagine that $S$ is an open subset of some fine moduli space which does not contain any rational curves. This does happen although I don't know any examples. Then any family over an
affine space will have to be constant, so the answer would be no.
up vote 2 On the other hand, of course there is formal family over the completion of $S$ at the given point $p$ ($\hat{S}_p = \lim Spec (\mathcal{O}_{S, p}/\mathfrak{m}_{S, p}^n)$), which looks
down vote like the infinitesimal neighborhood of $0$ in the tangent space ($\hat{T}_0 = Spf(k[[t_1, \ldots, t_s]])$ where the $t_i$ form a basis of the dual of $T= T_{p} S$). I think morally this
accepted might play the role of what you want.
2 For an example, take the modular curve $X(n)$ (with $n$ not too small). – Dan Petersen Feb 6 '13 at 17:53
thank you two! @Dan: could you develop your example a little more? – IMeasy Feb 6 '13 at 18:19
What I think Dan meant is: if the families you are considering are families of elliptic curves with fixed basis of $n$-torsion, the moduli space will be the modular curve $X(n)$ (or
rather its open subset $Y(n)$). After removing finitely many points, it will be a fine moduli space. As computed here en.wikipedia.org/wiki/Modular_curve , the genus of $X(n)$ is
nonzero for most $n$. Since $X(n)$ is smooth, for such $n$ there are no non-constant maps $\mathbb{A}^1\to X(n)$, that is, every family of the considered type over $\mathbb{A}^1$ has
to be constant. – Piotr Achinger Feb 7 '13 at 23:05
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry moduli-spaces deformation-theory or ask your own question.
|
{"url":"https://mathoverflow.net/questions/120976/does-there-exist-a-family-of-objects-over-the-tangent-space-to-the-base-space-of","timestamp":"2014-04-17T04:16:36Z","content_type":null,"content_length":"58727","record_id":"<urn:uuid:ec21141c-5046-448b-b6c9-3f354a19b809>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Briggs on Logarithms
Date: 08/28/2001 at 06:29:52
From: Khalid Mahmood
Subject: Logarithms
Dr. Math!
I want to know how Briggs constructed logarithmic tables of common
logarithms. The Encyclopedia Britannica says:
"To construct this table Briggs, using about thirty places of
decimals, extracted the square root of 10 fifty-four times, and thus
found that the logarithm of 1.0000 0000 000012781 91493 20032 35 was
0.0000 0000 0000 05551 11512 31257 82702 and that for a number of this
form (i.e.for numbers beginning with 1 followed by fifteen ciphers,
and then by seventeen or less numbers of significant figures) the
logarithms were propotional to these significant figures. He then by
means of a simple proportion deduced that log (1.0000 0000 0000 1)
= 0.0000 0000 0000 04342 94481 90325 1804, so that, a quantity 1.0000
0000 0000x(where x consists not more than seventeen figures) having
been obtained by repeated extraction of the square root of a given
number, the logarithm of 1.0000 0000 0000x could then be found by
multiplying x by .0000 0000 0000 o4342..."
(1) extract the square root of 10 fifty-four times for me
(2) tell me why he took the square root of 10 fifty-four times.
(3) How did he know that the logarithm of 1.0000 0000 0000
12781 91493 20032 35 was 0.0000 0000 0000 05551 11512 82702 ?
Please explain the paragraph.
I shall be thankful to you for your help.
Date: 08/28/2001 at 12:52:55
From: Doctor Peterson
Subject: Re: Logarithms
Hi, Khalid.
First, let's look at what he was trying to do. Why take the square
root 54 times?
We know that log(10) = 1. We also know that
log(a^b) = b log(a)
In particular, therefore,
log(10^x) = x log(10) = x
When x is an integer, this is simple. But what if we used a very small
fraction? Then we could find the logarithm of a number very close to
1. We can raise ten to a small fractional power by taking the square
root repeatedly:
log(sqrt(10)) = log(10^0.5) = 0.5 = 2^-1
log(sqrt(sqrt(10))) = log(10^0.25) = 0.25 = 2^-2
log(sqrt(sqrt(sqrt(10)))) = log(10^0.125) = 0.125 = 2^-3
Do you see what I am doing? Each time I take the square root, I am
raising 10 to a smaller fractional power, and therefore finding a
number whose log is a smaller number.
If I take the square root 54 times, I am finding
10^(2^-54) = 1.0000000000000001278191493200323
and I know that its log is
2^-54 = 5.5511151231257827021181583404e-17
= 0.000000000000000055511151231257827021181583404
(Of course, I actually found the second number first on my calculator,
and then raised 10 to that power; Briggs would have actually done the
square root by hand 54 times, a much longer process. You can take the
square root repeatedly on a calculator if you want to get the same
result. I used the calculator on my computer, which gives more
precision than most.)
You didn't ask me to go on and see how he used this result, but let's
continue. To find log(1.0000000000000001), we can use the fact that
log(y) - log(1) log(x) - log(1)
--------------- =~ ---------------
y - 1 x - 1
for x and y close to 1, approximating the log curve by a straight
line. Then log(y) can be estimated as
log(y) =~ (y-1)/(x-1) * log(x)
Taking x as the number above, this gives
log(1.0000000000000001) =~
0.0000000000000001 / 0.0000000000000001278191493200323 *
= 0.7823553867474 * 0.00000000000000005551115123125782702
= 0.0000000000000000434294481903245
That can be used to find other logarithms and build the table.
One more question: why did he use 54? I assume he wanted a particular
level of precision, so he continued until he got a number close enough
to 1, and then continued until it looked like 1.000...01..., so that
he could make an accurate approximation to 1.000...01 by this method.
I checked to see whether we have covered this topic before, and found
this in our archives (by searching for "briggs log"):
Logarithms: History and Use
This illustrates the process using simpler numbers.
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/55600.html","timestamp":"2014-04-16T11:02:12Z","content_type":null,"content_length":"9267","record_id":"<urn:uuid:534fce75-2312-498b-b7b8-78bced2d5d92>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is THE ANSWER CALLED WHEN YOU SUBTRACT ONE NUMBER FROM ANOTHER?
Mr What? is the first search engine of definitions and meanings, All you have to do is type whatever you want to know in the search box and click WHAT IS!
All the definitions and meanings found are from third-party authors, please respect their copyright.
© 2014 - mrwhatis.net
|
{"url":"http://mrwhatis.net/the-answer-called-when-you-subtract-one-number-from-another.html","timestamp":"2014-04-20T13:21:03Z","content_type":null,"content_length":"40322","record_id":"<urn:uuid:106eb9f2-edcf-48ea-8b5c-c52aecd13a61>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gravitation: Orbits
Problem : What is the escape velocity from the earth? ( M [e] = 5.98×10^24 kilograms, r [e] = 6.38×10^6 meters)
The escape velocity is given by:
Thus the escape velocity from the earth is roughly 11.2 kilometers per second.
Problem : If a black hole contains a singularity (all the mass is concentrated at a point) with a mass 1000 times the mass of the sun, what is the radius beyond which light cannot escape? This is
called the Schwartzchild radius. The mass of the sun is 1.99×10^30 kilograms and the speed of light is 3×10^8 m/s.
The escape velocity is dependent on the radius as well as the mass. The escape velocity is given by:
v =
. Solving for
, we have:
r =
. Substituting in we find:
r = ^6
meters. This is less than half the radius of the earth. Closer than this radius the escape velocity will be greater than the speed of light.
Problem : We would expect a satellite orbiting the earth to be slowed down by friction with the atmosphere. We also know that velocity is inversely proportional to the radius of the orbit, so when
the satellite slows down, it should spiral away from the earth. But it is observed that satellites spiral in towards the earth. How can we explain this paradox?
The satellite is actually caused to speed up by the viscous drag. This causes the satellite to gain kinetic energy, but lose potential energy as it spirals towards the earth. Overall, energy is
dissipated into heat by the frictional force.
Problem : Suppose the viscous drag causes a 2000 kilogram satellite to increase its speed from 10000 m/s to 15000 m/s. If the satellite had an initial orbital radius of 6.6×10^3 kilometers, what is
its new orbital radius? ( M [e] = 5.98×10^24 kilograms).
We can calculate the energy of the old orbit by summing the potential and kinetic energies.
T [o] = 1/2mv ^2 = 1×10^11
U [o] = - ^11
Joules. The
E = T [o] + U [o] = - 0.2×10^11
Joules. The new kinetic energy is
T [n] = 1/2mv ^2 = 2.25×10^11
Joules. The potential energy in this new arrangement is given by
U [n] = E T = (- 0.2 2.25)×10^11 = - 2.45×10^11
Joules. We can now calculate the corresponding radius as
r = ^6
metres. As we would expect, the satellite is now in a lower orbit.
Problem : At a height of 2×10^7 meters above the center of the earth, three satellites are ejected from a space shuttle. They are given tangential speeds of 5.47 kilometers/sec, 4.47 km/sec, and 3.47
km/sec respectively. Which one(s) will assume circular orbits? Which elliptical? The mass of the earth is 5.98×10^24 kilograms.
We can calculate the total energy of each satellite in terms of its mass
. In fact, the potential energy of each orbit will be the same:
U = - ^7)m
. For a circular orbit, the kinetic energy must be exactly half the absolute value of the potential energy. For the 4.47 km/sec satellite,
T = 1/2mv ^2 = (1.0×10^7)m
. This is half the potential, so this satellite will take on a circular orbit. The other two satellites will take elliptical orbits.
|
{"url":"http://www.sparknotes.com/physics/gravitation/orbits/problems_1.html","timestamp":"2014-04-17T09:57:35Z","content_type":null,"content_length":"57193","record_id":"<urn:uuid:d89defe6-8872-4656-a7dc-8333bbc23403>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Perpendicular Bisector and Circumference help
October 22nd 2012, 04:00 PM #1
Oct 2012
Perpendicular Bisector and Circumference help
1)Show that ABC is a right-angled triangle.
2)The line y= 2x -6 is the equation of the perpendicular bisector of AC. Find the equations of the other two perpendicular bisectors of the sides of the triangle.
3)Find the circumcentre of the triangle, the point where the perpendicular bisectors intersect.
4)Write down the equation of the circle which has all three vertices of the triangle on the circumference.
5)Find the shortest distance between the line y=2x-6 and the line y=2x-1.
6)Find the length of the line y=2x-1 inside the circle.
ImageShack® - Online Photo and Video Hosting
Thank you!
Re: Perpendicular Bisector and Circumference help
This looks like a homework exercise!
You need to remember several things:
1. Distance formula
2. How to find the midpoint/gradients of perpendicular lines
3. How to find where lines intersect
4. Think about how you might find the centre, and the radius. This is all you need
5. Draw a diagram perhaps. Shortest distances tend to be perpendicular
6. Find the intersections. Distance formula.
These should all be in your notes.
Good luck! Post back if you're struggling
Re: Perpendicular Bisector and Circumference help
Thank you so much, but what I really need is just answers with the working!
October 24th 2012, 01:37 PM #2
October 25th 2012, 03:09 PM #3
Oct 2012
|
{"url":"http://mathhelpforum.com/geometry/205900-perpendicular-bisector-circumference-help.html","timestamp":"2014-04-17T07:48:03Z","content_type":null,"content_length":"35100","record_id":"<urn:uuid:3a8c7bac-c930-4517-ac07-023e7cabaf71>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What was the reason spacetime expanded in the first place?
2012-Apr-18, 05:58 PM #1
Join Date
Apr 2012
What was the reason spacetime expanded in the first place?
In an approach recently conjectured by Fotini Markopoulou, geometrogenesis explains that geometry appeared late in the universe when it became sufficiently cool enough and matter dominated
atleast the small portion of the universe it does. Her paper as well as others, uncluding the spin network and it's important tie with the triangle inequality is in the references.
Currently it seems we simply believe there was some high potential for a universe to come into existence. Unless we where to think the universe somehow tunneled into existence because of this
potential, or there was some mechanism which drove expansion. If we look at the conditions of the universe at big bang, we find that normally we think of regions where spacetime cannot be
described by quantum physics because of a singularity. Hawking, in his book a brief history of time, he states that a way to vision this singularity for the big bang is by thinking of matter
being infinitely stacked on top of each other. Yet I suppose, one must remember that this is really all done at some ''point''.
If we suppose we try and understand that physical picture of the singularity of the big bang, we can also presume that these particles must be occupying the same space as we presume at the
initial conditions there was no dimensions we can really speak about. In the phase space that Markopoulou uses is constructed of what is called a spin network. It has neighbours (N(N-1)=2 for any
one particle in a Hilbert Space. They can be thought of forming geometric triangles, and the length of any of the sides must be less than or equal to the other side, obeying the triangle
inequality. This is simply just a geometrical form of your usual Uncertainty Principle. If all lengths are reduced to zero, then all edges converge to a single point; it is here I speculate my
own extension of the theory saying it is here where particles experience a high instability in space.
So let me explain how this model works. First of all, it seems best to note that in most cases we are dealing with ''three neighbouring points'' on what I call a Fotini graph. Really, the graph
has a different name and is usually denoted with something like the spin network. I should perhaps say, that to any point, there are two neighbours.
Of course, as I said, we have two particles in this model
I have found it customary to place a coupling constant here
It so happens, that Fotini's approach will in fact treat
which most will recognize as an expection value. The Fotini total state spin space is
Going back to my interaction term, the potential energy between particles
Thus we will see that to each vertex
From here I construct a way to measure these spin states in the spin network such that we are still speaking about two particles
where the
Thus my force equation can take into respect a single spin state, but denoted for two particles
with a magnetic coefficient
I now therefore a new form of the force equation I created with an interaction term, as I came to the realization that squaring everything would yield (with our spin states)
Sometimes it is customary to represent the matrix in this form:
As we have in our equation above. The entries here are just short hand notation for some mathematical tricks. Notice that there is a magnetic moment coupling on each state entry. We will soon see
how you can derive the Larmor Energy from the previous equation.
Sometimes you will find spin matrices not with the magnetic moment description but with a gyromagnetic ratio, so we might have
The compact form of the Larmor energy is
We can swap our magnetic moment part for
This is madness I can hear people shout? In the Larmor energy equation, we don't have
Well yes, this is true, but we are noticing something special. You see,
This is the angle between two vectors. What is
So by my reckoning, this seems perfectly a consistent approach.
Now that we have derived this relationship, it adds some texture to the original equations. If we return to the force equation, one might want to plug in some position operators in there - so we
may describe how far particles are from each other by calculating the force of interaction - but as we shall see soon, if the lengths of the triangulation between particles are all zero, then
this must imply the same space state, or position state for all your
As distances reduce between particles, our interaction term becomes stronger as well, the force between particles is at cost of extra energy being required. Indeed, for two particles
In general, most fundamental interactions do not come from great distance and focus to the same point, or along the same trajectories. This actually has a special name, called Liouville's
Theorem. Of course, particles can be created from a point, this is a different scenario. Indeed, in this work I am attempting to built a picture which requires just that, the gradual seperation
of particles from a single point by a vacua appearing between them, forced by a general instability caused by the uncertainty principle in our phase space.
As I have mentioned before, we may measure the gradual seperation of particles using the Lyapunov Exponential which is given as
and for previously attached systems eminating from the same system, we may even speculate importance for the correlation function
acts as coefficient of sigma zero. Thus the energy is represented by a Hamiltonian of spin states
Now, moving onto the implications of the uncertainty principle in our triple intersected phase space (with adjacent edges sometimes given as
It actually turns out that this is really a basic tensor algebra relationship of the irreducible representions of
For spins that do not commute ie, they display antisymmetric properties, there could be a number of ways of describing this with some traditional mathematics. One way will be shown soon.
Spin has close relationships with antisymmetric mathematical properties. An interesting way to describe the antisymmetric properties between two spins in the form of pauli matrices attached to
This is actually a map, taking the form of
This is amap of an action on a pair of vectors. In our case, we will arbitrarily chose these two to be Eigenvectors, derived from studying spin along a certain axis. In this case, our
eigenvectors will be along the
with an abuse of notation in my eigenvectors.
It is a 2-form (or bivector) which results in
This is a result where
[1] - http://arxiv.org/abs/0801.0861
[2] - http://arxiv.org/abs/0911.5075
[3] - http://fqxi.org/community/forum/topic/376
[4] - http://arxiv.org/abs/gr-qc/9505006
Last edited by Aethelwulf; 2012-Apr-19 at 02:31 AM.
No one has answered my little theory so I thought I'd add some new thoughts I have had when thinking about the converging of two particles to their neighbouring points in this model. We should
remind ourselves, that there are three neighbours which form a triangle in our phase space. Our original phase space constructed of Fotini's approach for a pairwise interaction which had the
Now, for a Hamiltonian describing two particles converging to the adjecent edge
As one of a few possibilities. There are six possible solutions in all for different coordinates. The spins in our space is assigning energy to our particles
Actually looks very innocent. But it cannot happen in nature, not normally. Nature strictly refuses two objects to converge to a single point like
The same is happening in our Hamiltonian. The force equation, with it's rapid increase of energy is proportional to the Hamiltonian experiencing an increase of energy from the spin terms
Last edited by Aethelwulf; 2012-Apr-27 at 12:39 AM.
Sorry, the last one should have been
But I am sure you get the drift. Anyway, going one way, the convergence of two particles into a single point, we can surely think about it the opposite way: particles emerging from a single
Which basically says that two pairwise interactions have diverged from a single point, occupying the
I hate it to see that you are not yet being engaged by those who might grasp what you are saying so in the interim let me ask a question that might bring in others if only to show they have a
better grasp than I do, lol.
Correct me if I'm wrong but are you using a beginning point in space and time where all dimensions of your spin triangles are zero resulting in all particles being stacked in the same space, a
point space occupied by multiple spin networks at the creation point?
Coincident with that initial condition then are you saying that there is the uncertainty principle in play corresponding to inherent instability of extreme temperatures in the phase space (the
initial point)?
Do I understand you to say that this uncertainty then can provide the force for separation and therefore expansion by causing vacua to appear between particles giving volume to the spin networks?
Can you elaborate on this?
Can you correct any misunderstanding as I have stated it and explain the progression of the expansion, i.e. is the vacua equivalent to dark energy?
Are you considering preconditions, i.e. do you have a scenario of how the stacking of the spin networks into a point-space came about, or is it a singularity, or is it simply 'something from
nothing"? *
I hate it to see that you are not yet being engaged by those who might grasp what you are saying so in the interim let me ask a question that might bring in others if only to show they have a
better grasp than I do, lol.
Correct me if I'm wrong but are you using a beginning point in space and time where all dimensions of your spin triangles are zero resulting in all particles being stacked in the same space, a
point space occupied by multiple spin networks at the creation point?
Exactly. These multiple spin networks are highly unstable - they are literally occupying the same point with angular momentum which is equally forbidden by the Pauli Exclusion if there was any
fermions in the mix.
I haven't quite made that connection, but I couldn't say it was wrong. In the high energy epoch, we are dealing with very volatile conditions. More so maybe because these high temperatures are
linked to no geometry yet it is concerned with jam-packing particles into the smallest point possible, giving rise to a new instability, the uncertainty principle.
You understand right. Spacetime did not really exist at big bang. If everything arose from a point, then imagine your phase space as being made of a single point. As temperatures cooled, then
geometry appeared - then so must have our spin network - which is the ability to organize particle spins in phase space. You wind back the hands of time to the very first point, the very first
instant, is like reducing the lengths of your triangle to zero to let all neighbouring particles occupy the same point and this is why it shoud indicate a violation of the uncertainty principle.
[QUOTE=Bogie;2010963]Can you correct any misunderstanding as I have stated it and explain the progression of the expansion, i.e. is the vacua equivalent to dark energy?
Yes, I have considered preconditions. The beginning where these particles where stacked infinitely high, could have only existed for a very short period of time, maybe the kind of time we
associated to a small virtual quantum fluctuation. Whatever the singularity was, by this theory, it did not exist for very long. Whatever existed before the singularity (if anything did) may have
existed a lot longer. Maybe eons.
Last edited by Aethelwulf; 2012-Apr-24 at 06:15 PM.
About the dark energy part, sure it is allowed... see my theory does not explain continued expansion, it only explains why it expanded in the first place.
In fact, you might ask, what kind of preconditions give rise to singularities? This question may give clues to what happened before big bang. As I said, the initial singularity could not exist in
its current state long without any degrees of freedom... but what happened before this may have existed for eons.
Damn... I said pauli uncertainty... there is no such thing hahaha
Pauli exclusion principle, stating that no two fermions can exist in the same energy state.
Hey, why is no one responding to my OP? lots of Op's have been anwered with less...
... *shakes shoulders*
This person, whoever wrote this was on the right track. They assume that the Heisenberg Uncertainty principle dominated the early universe. I bet even my idea above using causal triangulation to
explain an instability of space would have impressed perhaps?
As they explain, general relativity breaks down, things take infinite values and a lot of this has to do with the fact we don't know how to apply our theory. If my theory is correct, it will be
the first of a kind at applying quantum mechanics where physics usually struggles with interpretations.
Let's study this equation a bit more:
What we have in our physical set-up above, is some particle oscillations which presumably, under a great deal of force, being measured to converge to position
Thus my force equation can take into respect a single spin state, but denoted for two particles
But perhaps, more importantly, you may decompose the equation for both particles. Let us say, particle http://en.wikipedia.org/wiki/File:Spin_network.svg provided by wiki. If we stood in the
z-vertex, and made the xy-vertices merge to the zth
Then naturally it follows that the force once describing the seperation of particles no longer exists, because anything multiplied by zero is of course zero. Here we have violated some major
principles in quantum mechanics. Namely the uncertainty principle and for the fact that particles do not converge like this. By making more than one particle occupy the same space is like saying
that either particle will have a definate position and this of course from the quantum mechanical cornerstone, the uncertainty principle is forbidden. May we then speculate that the universe was
born of uncertainty? Uncertainty has massive implications for statistical physics. In the beginning of the universe, most physicists would agree that statistical mechanics will dominate the
quantum mechanical side... quantum mechanics is afterall a statistical theory at best. Perhaps then, no better way to imagine the beginning of the universe other than through the eye's of
[1] - http://www.math.bme.hu/~gabor/classwork/ising.pdf
Last edited by Aethelwulf; 2012-Apr-27 at 02:10 AM.
Below Planck Lengths, the Uncertainty Principle dominates. When the universe was a point at our universes finite past, time as we know it did not exist. But because we are inferring on things
below the Planck Time, there is a very large uncertainty in energy - could my own analysis of the spin networks and the triangulation of spacetime be hinting at an uncertainty which is also
At below planck lengths, geometry as it is understood by General Relativity breaks down. The planck lengths are derived using dimensional analysis. Another way to state this, is that the
Schwartzchild radius of a black hole is equal to the Compton wavelength at the planck scale thus a photon trying to probe this would gain no information at all.
For a quick comparisson, the Classical Electron Radius is in fact
Basically, all particles have a wavelength. Photon's can never be at rest but the energy of a photon can be low enough to have it's wavelength match any particle who is at rest. It's often seen
in the eye's of many scientists as the ''size'' of a particle. Actually, a more accurate representation of the size of an object would be the Reduced Compton Wavelength. This is just when you
divide the Compton Wavelength by
Furthermore, if a photon could measure a planckian object, it could actually create a new class of particle called a Planck Particle - it would distort that space so badly that the photon would
be gobbled up and no measurement could be performed. This is due to the Uncertainty Principle if my memory serves
Interestingly, Brian Greene has speculated on sub-Planckian existences. Whilst the Planck lengths could be fundamental, we don't know this for fact. He said:
"the familiar notion of space and time do not extend into the sub-Planckian realm, which suggests that space and time as we currently understand them may be mere approximations to more
fundamental concepts that still await our discovery.”
Which is interesting, because if anyone actually follows my own speculations and contentions, I have been wheeling the idea that space and time could certainly not be fundamental since in the
very beginning, there was no geometry (space-time) - not in the sense that GR deals with it.
So, as we have seen, the geometry imposes a very strong connection between time and energy at below Planck Lengths, pretty much approaching the singular state of existence. If the spin networks
is the correct approach, then maybe the uncertainty relationship I deduced are both actually hinting at the same phenomenon. This might be the way in how to treat quantum mechanics very early on
in the universes history.
... I was asking for this to be closed now; but I realized I actually do have something more to say.
Last edited by Aethelwulf; 2012-May-01 at 04:52 AM.
There is one major inconsistency with my idea - no one has brought it up which I find quite odd. I have been attempting to explain the expansion of space via the conditions set for a singularity
- a confined region, a point in which energy is stacked up to infinity - and thereby using quantum mechanics, namely the uncertainty principle to state that there needed to be extra degree's of
freedom .
Well, there is one big problem with this: quantum mechanics does not allow for singularities, and the main reason why is because of the uncertainty principle! That's like trying to fight a fire
with fire in my approach?
I must admit, I am surprised no one even mentioned this, all this time. A lot of what I have said is true however - in the beginning, there was indeed a lot of uncertainty. If we bring quantum
mechanics into the picture there simply cannot be any singularities because of the connetcion of momentum and location that must exist for every object - if you try and squeeze an object into a
space too small, it will resist that squeeze. This is why physicists say a singularity cannot exist...
But wait a minute!!!
Isn't that what my theory has been saying all along?
Yes it is. I think we can still deal with a singularity, but I don't think the singularity was there for
1) There forever, and
2) Was even there that long
We must infer that the beginning of time was a very special case of violating the uncertainty principle. It almost hardly never happens in nature, but the beginning of existence itself might
require a little violation. One which was quick, one which keeps to the laws of quantum mechanics but tries to accommodate for a singularity. What fascinates me, is, that is the big bang is the
true representation of the universe, then scientists have thrown out the suggestion that quantum mechanics cannot explain singularities because you can't squeeze matter into regions of spacetime
without expecting a violation to occur. But perhaps we have overlooked the question that maybe the violation needed to occur in the first. The singularity afterall, makes no physical sense from
GR, therefore why should it make any lawful sense in QM?
The uncertainty principle surely did dominate the early framework - and I think we should prepare our minds to begin thinking that maybe violations like this had to occur - afterall, it was only
for a short time - maybe a time equivalent to the lifespan of a virtual particle. But what of the infinite singularity? Well it may not exist in that form today - the singularity that once
existed has been shifted, not of an infinitely dense point, but of an infinite expansion of space with an infinite amount of energy and matter contained therein.
At night the stars put on a show for free (Carole King)
All moderation in purple - The rules
2012-Apr-22, 11:17 PM #2
Join Date
Apr 2012
2012-Apr-22, 11:26 PM #3
Join Date
Apr 2012
2012-Apr-24, 02:48 AM #4
2012-Apr-24, 02:58 PM #5
Join Date
Apr 2012
2012-Apr-24, 05:31 PM #6
Join Date
Apr 2012
2012-Apr-24, 06:04 PM #7
Join Date
Apr 2012
2012-Apr-24, 06:16 PM #8
Join Date
Apr 2012
2012-Apr-24, 10:35 PM #9
Join Date
Apr 2012
2012-Apr-26, 05:51 PM #10
Join Date
Apr 2012
2012-Apr-27, 01:15 AM #11
Join Date
Apr 2012
2012-Apr-28, 10:14 PM #12
Join Date
Apr 2012
2012-Apr-30, 01:17 PM #13
Join Date
Apr 2012
2012-May-01, 05:07 AM #14
Join Date
Apr 2012
2012-May-01, 05:59 PM #15
|
{"url":"http://cosmoquest.org/forum/showthread.php?131070-What-was-the-reason-spacetime-expanded-in-the-first-place&p=2011238","timestamp":"2014-04-17T03:55:14Z","content_type":null,"content_length":"159963","record_id":"<urn:uuid:96ded35b-a2e5-4b99-bed9-c4063674c8d0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fourth Dimension
Date: 05/13/97 at 19:19:15
From: alexszeto1
Subject: hi
Hi Dr.Math,
I'm a freshman in high school and I'm doing a research paper on the
fourth dimension. Will you please help me try to understand what the
fourth dimension is? If you could, can you also include any
information you know about the fourth dimension?
Thank you for your time,
Date: 06/25/97 at 13:34:41
From: Doctor Barney
Subject: Re: hi
The fourth dimension is not so much a thing as it is an idea. This is
like the fact that the number 3 isn't really a thing in and of itself.
You can write a symbol to represent 3 on a piece of paper or a chalk
board, you can use the idea 3 to describe how many apples you have,
and you can even buy a birthday candle in the shape of a 3, but you
haven't really bought "three," have you?
Mathematics is a philosophy which we use primarily to describe
physical phenomena. The most obvious example is the way Euclidean
geometry (three-dimensional geometry) describes physical space. That
is because the physical space as we understand it has three
independent "degrees of freedom," or three directions in which we and
the other objects in space are able to exist, expand, or move. For
example: length, width, height; latitude, longitude, altitude; Elm
street, fourth building down, second floor; x, y, z.
Now, the idea of the fourth dimension (or the idea of the first four
dimensions all together) is an idea we use to describe any quality,
state, object, event, or concept which requires four independent
degrees of freedom (ways in which it is able to be different) in order
to describe it completely. For example: length, width, height, weight;
latitude, longitude, altitude, temperature; Elm street, fourth
building down, second floor, 9 O'clock on Thursday; x,y,z,w. It's
really that simple.
The only reason people get confused about it is because they cannot
visualize it. If I tell you the length, width and height of an
object, you can get an idea of what it looks like, perhaps a cube or a
long slender rod or anything in between. But if I also tell you what
temperature something is or how much it weighs, what does that look
The problem with visualization becomes even more acute when we try to
graph the data. You can make a drawing of a three-dimensional object
on a flat piece of paper, and you can even make a model to represent
three-space, like a relief map that shows the elevation of the ground
at every point over a given area. But when you try to draw a picture
of four-dimensional space it is impossible.
For your paper, you might want to explore some potential applications
in more detail. Here are some ideas to get you started.
1) Time is considered to be the fourth dimension in many applications,
with the first three being the classical representation of physical
space. For example, to study how an airplane travels through the sky,
you would need a series of data telling where it was at any given
time. The individual data points might look like (x,y,z,t) where t is
the time that the plane was at point (x,y,z). In your paper you might
discuss what types of functions could describe how the plane travels
through this four-dimensional space. To try to visualize the
function, picture a plane flying through the sky leaving a very long
contrail. Now picture signs along the contrail depicting constant
intervals of time. The signs will be farther apart when the airplane
is going faster. Just as the slope of a line or other function plotted
on a two-dimensional graph tells you one parameter of the function
changes relative to another, so the distance between the signs on our
imaginary contrail would tell you how the airplane's position changes
relative to time.
2) You could create or describe a hypothetical representation of the
temperature in the atmosphere as a function of location. (You could
also use pressure, or humidity, or any other property of the air.)
Create several layers, using a transparent media such as vu-graph
foils. Color the different areas different colors on a spectrum from
hot to cold to represent the temperature at that location. Be sure
the temperature is a continuous function (does not jump from the
hottest color to the coldest) as you go from one layer to another
directly over the same point.
In the February '97 National Geographic there is a fascinating article
on the oceanic science of the Arctic. On page 49 is a really cool
figure showing one layer from a multi-dimensional model scientists
have created to describe the temperature as a function of position in
the Arctic Sea.
3) You could also create a four-dimensional database that has nothing
to do with physical space. Collect four different pieces of data on
20 or so of your classmates, such as age in months, shoe size, number
of siblings, and amount of money on their person when they answered
the questions. Then experiment with different tables, charts or graphs
to present the data and explore any potential correlations. For the
purpose of this study, these people would exist in the four-
dimensional space which you define.
I hope this helps. On a final note, you are welcome to use my
suggestions and rewrite some of my ideas in your own words, but please
don't copy large sections of my answer directly into your research
paper. That would be plagiarism, which is a form of cheating.
-Doctor Barney, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 06/25/97 at 14:34:25
From: Doctor Sydney
Subject: Re: hi
Dear Jody,
Hello! I want to add just a little to Dr. Barney's response to you.
I hope you find what I have to say interesting even though it's too
late for your research paper!
Dr. Barney did a nice job explaining how lots of people understand the
fourth dimension. However, there is one major way that people think
about the fourth dimension that was not addressed in his response to
you. That is what I will talk about with you.
Let's start with what we know. Let's consider first a plane. We know
that a plane is 2-dimensional, right? What does that really mean?
Well, one way to think of it is that we can put a pair of axes on the
plane (an x-axis and y-axis, for example), and then we can assign to
each point on the plane a pair of numbers, right? You've probably
worked with graphing things on a plane, so you are probably familiar
with what I am talking about. If I told you to graph the triangle
with vertices (1,0), (0,0), and (0,1) you would know what to do,
right? The space is 2-dimensional because we use PAIRS of numbers to
define points in space. Each number in the pair tells us how far we
should move from the origin in a certain direction, right? So, each
number in the pair tells us where to go in space.
Think now about the third dimension. In three dimensions, we assign
TRIPLETS of numbers to each point in space, right? Again, each of the
coordinates in the triplets assigned to points in 3-D space tells us
how far to move in a certain direction from the origin, right? For
instance, if we think of a 3-D space with the standard x, y, and z
axes and origin (0,0,0), then the point (1,2,3) would tell us to move
1 unit down the x-axis from the origin, 2 units down the y-axis from
the origin, and 3 units down the z-axis from the origin, right? So,
all of the coordinates are space coordinates; they tell us how far to
move in a specific direction.
So, if we wanted to think of the fourth dimension as a space that has
4 SPACE parameters (just in the way that the plane has 2 space
paramaters and the world we live in has 3 space parameters), then we
have to use our imagination a little, right? If what I have said above
confused you don't worry. Just start following from here. This is
confusing stuff, believe me!
One way to understand what the fourth dimension "looks like" is to
carefully examine what the 3rd dimension looks like to "creatures"
living in a 2-dimensional world. If we can understand this, then we
can understand some of what the fourth dimension looks like to us
creatures living in the 3-D world by using appropriate analogies.
Here is what I mean. Suppose you live in a 2-dimensional world. For
simplicity, let's say that you live in a plane. Let's call this world
"Flatland." You don't see anything that is outside of this plane. As
far as you know, that is the only space there is. Let's say that you
are a solid yellow square. I am a hollow green circle. Now, what
will you see when you look at me? Remember that you can see only
things that are in the plane. It might be fun and helpful for you to
cut out of construction paper a yellow square and a green circle.
Then, to figure out what you see when you look at me, put the square
and circle on a flat surface (like a table), and think about what you
would see at eyelevel with the table. You might be surprised! Below,
I will talk about what you see because it will help us to understand
the fourth dimension; however, it will be more fun for you if you
don't read what the answer is but figure it out yourself.
Okay, so now you have done the experiment or perhaps thought about
what you would see, so I can talk about what the answer is. As you
probably found, a person who lives in this plane sees a green line
when he or she looks at me. When I look at you, I see a yellow line.
When I first see you, if we are both sitting still, I can't tell if
you are a square, a circle, a line, or some other shape, right? All
of these things look the same to a person in Flatland. How do you
think people in Flatland determine what shapes their friends are? Or,
do they not care? (: These are fun things to think about.
Okay, so now we understand a little bit about what life is like in
Flatland. It might be fun for you to think about what life is like in
Flatland. What do houses look like? If I have a twin sister who is a
solid green circle and is the same size as me, can you tell us apart
(Recall that I was a hollow green circle)? Drawing pictures will help
Let's think about some objects in Flatland. Think about a safe in
Flatland. What would it look like? We would want it to be a container
such that when locked, inhabitants of Flatland cannot get to the
objects inside of it, right? What kind of shape would work for this?
In our 3-D real world, we often use a hollow cube, right? Well, in
Flatland, they use, among other things, hollow squares. Does that
make sense? Think about it - an object that is in a hollow square
cannot be taken by an inhabitant of Flatland; it is completely secure.
Good. Now, we introduce the third dimension into our 2-D world. Think
about someone who is an inhabitant of this world that we live in, our
3-D world. Suppose they walk by Flatland. They see Flatland as a flat
land (duh!) with lots of flat creatures and objects in it, right?
Now suppose that they come across the safe used by me. I have put a
necklace in my safe. Is my necklace safe from someone in the 3-D
world? No! A 3-D person can easily reach into my safe and take my
necklace, right? He or she does not need to break down the barrier of
the safe since he or she can just reach in from the top. Does that
make sense?
So, one way to understand the third dimension from the perspective of
the second dimension is that enclosed spaces like safes can broken
into by 3-D creatures without touching any of the barriers that 2-D
creatures see.
Let's think about how we could think about the fourth dimension from
the perspective of the third dimension in an analogous way. Think
about the safes we use in the 3-D world. They are, as we said before,
often hollow cubes, right? Once we put something in them and lock the
door, we can feel sure that no one will be able to get to them without
breaking down one of the sides of the safe, right? Well, in the same
way that a 3-D creature could break into a 2-D safe without breaking
down any of the sides, a 4-D creature can break into our 3-D safe
without breaking down any of the walls. Now, that is pretty wild, eh?
It just doesn't seem to make sense. However, it is one way to think
about the fourth dimension.
I could go on for a long time about similar ways to understand the
fourth dimension, but instead I will refer you to a book that I think
you might really like if you found what I said above to be
interesting. The book is called _Flatland_, and it was written about
100 years ago by Edwin A. Abbott. It describes in detail the world we
called "Flatland," and it discusses many more ways to understand the
fourth dimension from the perspective of the third dimension. If you
have access to the WWW, you can find the entire book at the following
It is fun to look through, even if you don't read the whole thing. I
hope you enjoy it.
If you have any more questions about the fourth dimension, please feel
free to write back. I think that dimensionality is one of the most
interesting things that mathematicians study. Good luck!
-Doctor Sydney, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/54683.html","timestamp":"2014-04-19T23:02:27Z","content_type":null,"content_length":"18426","record_id":"<urn:uuid:c469db39-cb24-491d-a826-66ee0fe24386>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
7 common errors
Some common errors arise again and again in statistics.
Here are seven to watch out for:
unclear basepoint for graphs
A TV advert used to proudly proclaim:
"X has 25% more active ingredient"
The screen however showed just the top of four test tubes. The words may have been true, but it looked a lot more than 25% - a truthful advert?
%increases and changes in %difference
If 300 is 50% bigger than 200, is 200 50% less than 300?
In 1990 product A had 10% of market share and now it has 15%. Is that a 50% increase or a 5% increase? Of course, the market may be only half as big now, so there may be less of product A sold.
Lesson - be very careful with your language.
beware extrapolation
Interpolation - estimating unknown values between known values - OK, but extrapolating - going outside the known limits is dangerous (albeit sometimes necessary!).
statistical significant
Real, non-random effects may nevertheless be very small
Assuming that a non-significant result means no difference is like Kate Winslett assuming she weighs nothing because there was no detectable change in the waterline of the Titanic when she jumped
Big effects may not be significant if sample size is low or variability high.
There may be a common cause, or it may simply be a fluke!
don't do too many tests!!
5% significant means will happen by chance 1 time in 20. If you do lots of tests, 1 in 20 will (on average) be 5% significant. So, if you need to do lots of tests look for better significance
|
{"url":"http://www.meandeviation.com/tutorials/stats/notes/7errors.html","timestamp":"2014-04-21T07:03:43Z","content_type":null,"content_length":"14113","record_id":"<urn:uuid:9bc81b6c-0b46-46f7-b1c8-6fe2d560ca25>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is it possible to extend this inequality about Euclidean distance &Frobenius norm to more general Bregman divergence such as relative entropy & von Neumann divergence?
up vote 4 down vote favorite
Motivation- A Special Case
Supposing $A,B\in\mathbb{S}^{m\times m}$ are symmetric positive semi-definite (SPD) matrices and $\mathbf{x}\in\mathbb{R}^m$ is a unit vector where $\|\mathbf{x}\|=1$, we found that the squared
Euclidean distance of two quadratic forms $\left(\mathbf{x}^\top A\mathbf{x}-\mathbf{x}^\top B\mathbf{x}\right)^2$ is bounded by the squared Frobenius norm of difference of the two matrices $\|A-B\|
Denoting the spectral decomposition of $A-B$ as $A-B=W\Phi W^\top$ where $\Phi=\mathrm{diag}\left(\phi_1,\phi_2,\ldots,\phi_m\right)$ is a diagonal matrix of eigenvalues, we have \begin{eqnarray} &&\
left(\mathbf{x}^\top A\mathbf{x}-\mathbf{x}^\top B\mathbf{x}\right)^2 =\left(\mathbf{x}^\top(A-B)\mathbf{x}\right)^2 =\left(\mathbf{x}^\top W\Phi W^\top\mathbf{x}\right)^2\\\ =&&\left(\sum_i{x_{W,i}^
2\phi_i}\right)^2 \leq\max_i{\phi_i^2}\leq\sum_i{\phi_i^2} =\|A-B\|_F^2 \end{eqnarray} where $W^\top\mathbf{x}=\mathbf{x}_W=\left[x\_{W,1}\;x\_{W,2}\;\ldots\;x\_{W,m}\right]^\top$ and $\mathbf{x}_W^\
Therefore, for $\forall \mathbf{x}\in\mathbb{R}^m\;\mathrm{s.t.}\;\|\mathbf{x}\|=1$, we have $\left(\mathbf{x}^\top A\mathbf{x}-\mathbf{x}^\top B\mathbf{x}\right)^2\leq\|A-B\|_F^2$.
Question- Could this be generalized?
However, the squared Euclidean distance is a special case of Bregman divergence $$ D_\varphi(\mathbf{x},\mathbf{y})=\varphi(\mathbf{x})-\varphi(\mathbf{y})-\nabla\varphi(\mathbf{y})^\top(\mathbf{x}-\
mathbf{y}) $$ where $\varphi$ is the convex seed function.
On the other hand, the squared Frobenius norm of difference of two matrices is a special case of Bregman matrix divergence $$ D_\phi(A,B)=\phi(A)-\phi(B)-\mathrm{tr}\left((\nabla\phi(B))^\top(A-B)\
right) $$ where $\phi(A)=(\varphi\circ\lambda)(A)$ is a compound matrix function in which $\lambda$ is the function that lists the eigenvalues of $A$ and $\varphi$ is the convex seed function.
In the example above, the seed function is $\varphi(\mathbf{x})=\mathbf{x}^\top\mathbf{x}$ and we can rewrite the inequality as $$ D_\varphi\left(\mathbf{x}^\top A\mathbf{x},\mathbf{x}^\top B\mathbf
{x}\right) \leq D_\phi(A,B) $$ where $\|\mathbf{x}\|=1$ and $\phi=\varphi\circ\lambda$. The function $\lambda$ lists the eigenvalues of the matrix argument.
With the property of Bregman matrix divergence, the inequality can also be written as \begin{eqnarray} D_\varphi\left(\mathbf{x}^\top\mathbf{V}\Lambda\mathbf{V}^\top\mathbf{x}, \mathbf{x}^\top\mathbf
{U}\Theta\mathbf{U}^\top\mathbf{x}\right) &=&D_\varphi\left(\sum_i(\mathbf{v}_i^\top\mathbf{x})^2\lambda_i,\sum_j(\mathbf{u}_j^\top\mathbf{x})^2\theta_j\right)\\\ &\leq&\sum_i\sum_j{(\mathbf{v}_i^\
top\mathbf{u}_j)^2D\_\varphi(\lambda_i,\theta_j)} \end{eqnarray} where $A=V\Lambda V^\top,B=U\Theta B^\top$ are spectral decompositions and $\mathbf{v}_i,\mathbf{u}_j$ are columns of $V,U$
My Question is: can this inequality be extended to general Bregman divergence and Bregman matrix divergence with different seed functions chosen?
Or under what condition such an inequality exists?
For example, if $\varphi(\mathbf{x})=\sum_i{x_i\log x_i-x_i}$,
then $D_\varphi$ is the relative entropy (KL-divergence) $$\mathrm{KL}(\mathbf{x},\mathbf{y})=\sum_i\left(x_i(\log x_i-\log y_i)-x_i+y_i\right),$$
and $D_\phi$ is the von Neumann divergence $$D_{vn}(A,B)=\mathrm{tr}\left(A\log A-A\log B-A+B\right).$$
In this case, does the following inequality holds for $\forall\mathbf{x}\in\mathbb{R}^m$ satisfying $\|\mathbf{x}\|=1$? $$ \mathrm{KL}\left(\mathbf{x}^\top A\mathbf{x},\mathbf{x}^\top B\mathbf{x}\
right) \leq D_{vN}(A,B) $$
I did many experiments about this inequality about relative entropy and von Neumann divergence with random generalized SPD matrices using Matlab and it always holds. However, does it really hold?
Could anyone please give me some help for this question or recommend some relevant papers? Any suggestion will be appreciated. Thank you very much!
I have found the proof of the case of $\varphi(x)=\sum_i{x_i\log x_i-x_i}$ and the inequality $\mathrm{KL}\left(\mathbf{x}^\top A\mathbf{x},\mathbf{x}^\top B\mathbf{x}\right)\leq D_{vN}(A,B)$
indeed holds. I will present the proof soon. – ppyang Apr 17 '12 at 14:45
add comment
1 Answer
active oldest votes
I found a proof of this problem for the case of $\varphi(x)=\sum_i{x_i\log x_i-x_i}$. If you find there is anything mistake in the proof, please let me know. Thank you!
The case of $\varphi(x)=\sum_i{x_i\log x_i-x_i}$ can be proved with the method similar to Lindblad, Completely positive maps and entropy inequalities, 1975 and Lindblad, Expectations and
entropy inequalities for finite quantum systems, 1974. The inequality can be strengthened as $$ \sum_i{\mathrm{KL}\left(\mathbf{x}\_i^\top A\mathbf{x}\_i,\mathbf{x}\_i^\top B\mathbf{x}\_i\
right)}\leq D_{vN}\left(A,B\right) $$
Actually, a very similar result has been proposed in some papers about quantum information theory, such as the two papers referred above. The referred result is that for any trace
preserving map $\Phi$, given by $\Phi(A)=\sum_{i=1}^n{V\_iAV\_i^\top}$ and $\sum\_{i=1}^n{V\_i^\top V\_i}=1$, we have that $\mathrm{tr}\left(\Phi(A),\Phi(B)\right)\leq D_\phi(A,B)$, where
$A,B$ are both density operators which are Hermitian positive definite matrices satisfying $\mathrm{tr}A=\mathrm{tr}B=1$ and $\varphi(x)=x\log x$.
We found that if the trace constraints $\mathrm{tr}A=\mathrm{tr}B=1$ are dropped and $\varphi(x)=x\log x$ is replaced with $\varphi(x)=x\log x-x$, the inequality still holds.
The proof is outlined as following:
1. The von Neumann divergence has the following additivity property with Kronecker product: $$D_{vN}(A\otimes P,B\otimes P)=D_{vN}(A,B)\cdot\mathrm{tr}P$$
up vote 1 2. Using the joint convexity and the additivity, we can prove that the von Neumann divergence has the monotonicity with partial trace as \begin{equation*} \begin{split} D\_{vN}(\mathrm{tr}
down vote \_2(A),\mathrm{tr}\_2(B)) =&D\_{vN}\left(\mathrm{tr}\_2(A)\otimes\frac{\mathbf{I}\_2}{m}, \mathrm{tr}\_2(B)\otimes\frac{\mathbf{I}\_2}{m}\right) /\mathrm{tr}\left(\frac{\mathbf{I}\_2}
{m}\right)\\\ =&D\_{vN}\left(\sum\_{j=1}^N{p_jW_jAW_j^+},\sum\_{j=1}^N{p\_jW\_jBW\_j^+}\right)\\\ \leq&\sum\_{j=1}^{N}{p\_jD\_{vN}\left(W\_jAW\_j^+,W\_jBW\_j^+\right)}\\\ =&D\_{vN}(A,B)
\end{split} \end{equation*}
3. For any trace preserving map $\Phi$, given by $\Phi(A)=\sum_{i=1}^n{V\_iAV\_i^\top}$ and $\sum\_{i=1}^n{V\_i^\top V\_i}=1$, it can be represented as a unitary operation+partial tracing.
Therefore, we have that \begin{equation*} \begin{split} D\_{vN}\left(\Phi(A),\Phi(B)\right) =&D\_{vN}\left(\mathrm{tr}\_2(W(A\otimes\mathbf{s}\mathbf{s}^\top)W^\top), \mathrm{tr}\_2(W(B
\otimes\mathbf{s}\mathbf{s}^\top)W^\top)\right)\\\ \leq&D\_{vN}\left(W(A\otimes\mathbf{s}\mathbf{s}^\top)W^\top, W(B\otimes\mathbf{s}\mathbf{s}^\top)W^\top\right)\\\ =&D\_{vN}\left(A,B\
right) \end{split} \end{equation*}
4. Then the sum of relative entropy of the quadratic forms could be represented as matrix divergence and bounded. \begin{equation*} \begin{split} \sum_i{\mathrm{KL}\left(\mathbf{x}\_i^\top
A\mathbf{x}\_i,\mathbf{x}\_i^\top B\mathbf{x}\_i\right)} =&\sum\_{i,j}{(\mathbf{x}\_i^\top\mathbf{x}\_j)^2 \mathrm{KL}(\mathbf{x}\_i^\top A\mathbf{x}\_i,\mathbf{x}\_j^\top B\mathbf{x}\
_j)}\\\ =&D_{vN}(\sum\_i{X\_iAX\_i^\top},\sum_i{X\_iBX\_i^\top})\\\ \leq&D_{vN}\left(A,B\right) \end{split} \end{equation*} where $X_i=\mathbf{x}_i\mathbf{x}_i^\top$.
If there is any mistake in the proof, please let me know. Any other suggestions are also welcomed. Thank you very much!
add comment
Not the answer you're looking for? Browse other questions tagged inequalities matrices linear-algebra convexity or ask your own question.
|
{"url":"http://mathoverflow.net/questions/93149/is-it-possible-to-extend-this-inequality-about-euclidean-distance-frobenius-nor?answertab=votes","timestamp":"2014-04-17T07:24:11Z","content_type":null,"content_length":"60503","record_id":"<urn:uuid:bf90e2ce-5ab8-4285-bed6-397f98eea2e1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Physics at the University of Florida
Office: 2079 NPB
Bernard Whiting
PhD Melbourne University (1979)
Research Group
Theoretical Physics
Research Interests
Research interests: both theoretical end experimental approaches to the detection and understanding of gravitational waves, as predicted in General Relativity. This currently includes an active
involvement in the global effort to measure gravitational radiation from collapsing stars and colliding astrophysical compact objects. Black hole physics has occupied a large part of Dr Whiting's
research over a period of more than 30 years. His work encompasses both classical general relativity as well as a semi-classical approach to a quantum theory of gravity. It has addressed both
perturbative and non-perturbative effects in black hole thermodynamics, and involves ongoing work on the final state of black hole evaporation. Key results in classical general relativity are the
proof of mode stability of the Kerr black hole (J. Math. Phys. 30, 1301-5, 1989) and the Green's function decomposition introduced with Detweiler (Phys. Rev. D 67, 024025--1-5 2003).
Other interests include ringing (quasinormal) mode characteristics on the classical side; quantum field effects and the conceptual difficulties associated with quantum geometries on the quantum side;
and Lagrangian and Hamiltonian formulations (both Lorentzian and Euclidean) for a unified, consistent, field theoretic approach to understanding the significance of geometrical contributions in the
context of black hole thermodynamics. This work has also required the study of quantum mechanics for very simple problems of interest in describing the early universe adequately enough to be able to
evaluate its impact on our observations of the cosmos as we now find it. Through the UF involvement in the LIGO project, Dr. Whiting's most recent efforts have been directed towards examining very
practical measures for extracting exceedingly weak gravitational wave signals, especially those comparable to the microwave background of electromagnetic radiation, in the presence of substantial,
sensitivity limiting, noise.
Selected Publications
LIGO Scientific Collaboration and VIRGO Collaboration (B.P. Abbott et al.), "An Upper Limit on the Stochastic Gravitational-Wave Background of Cosmological Origin'', Nature {460}, 990-994 (2009).
Larry R. Price, Karthik Shankar and Bernard F. Whiting, "On the existence of radiation gauges in Petrov type II spacetimes'', Class. Quantum Grav.{24}, 2367-2388 (2007).
LIGO Scientific Collaboration (B. Abbott et al.), "Searching for a Stochastic Background of Gravitational Waves with LIGO'', Ap. J. {659}, 918-930 (2007).
LIGO Scientific Collaboration (B. Abbott et al.), "Analysis of LIGO data for gravitational waves from binary neutron stars'', Phys. Rev. D {69}, 122001--1-16 (2004).
Steven Detweiler and Bernard F. Whiting, "Selfforce via a Green's function decomposition'', Phys. Rev. D {67}, 024025--1-5 (2003).
Jorma Louko, Bernard F. Whiting and John L. Friedman, "Hamiltonian space-time dynamics with a spherical null dust shell'', Phys. Rev. D {57}, 2279-2298 (1998).
Stephens CR, 't Hooft G and Whiting BF. "Black Hole Evaporation without Information Loss'', Class. Quantum Grav. {11}, 621-47 (1994).
Braden HW, Brown JD, Whiting BF, and York Jr. JW. "Charged Black Holes in the Grand Canonical Ensemble'', Phys. Rev. D {42}, 3376-85 (1990).
Whiting BF and York Jr. JW. "Action Principle and Partition Function for the Gravitational Field in Black-Hole Topologies'', Phys. Rev. Lett. {61}, 1336-9 (1988).
Gibbons GW and Whiting BF. "Newtonian Gravity Measurements Impose Constraints on Unification Theories'', Nature {291}, pp. 636-8, June 1981.
|
{"url":"http://www.phys.ufl.edu/faculty/whiting.shtml","timestamp":"2014-04-20T15:50:29Z","content_type":null,"content_length":"15544","record_id":"<urn:uuid:72eb53b4-644b-44dc-b681-14db7b1becfa>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 25
, 1998
"... SeDuMi is an add-on for MATLAB, that lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex valued data and variables in
SeDuMi. Moreover, large scale optimization problems are solved efficiently, by exploiting sparsity. This pape ..."
Cited by 736 (3 self)
Add to MetaCart
SeDuMi is an add-on for MATLAB, that lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex valued data and variables in SeDuMi.
Moreover, large scale optimization problems are solved efficiently, by exploiting sparsity. This paper describes how to work with this toolbox.
, 1997
"... Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (IgDMCP) is to find the unspecified elements of A that make A a
Euclidean distance matrix (IgDM). In this paper, we follow the successful approach in [20] and solve the IgDMCP by ..."
Cited by 69 (14 self)
Add to MetaCart
Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (IgDMCP) is to find the unspecified elements of A that make A a Euclidean
distance matrix (IgDM). In this paper, we follow the successful approach in [20] and solve the IgDMCP by generalizing the completion problem to allow for approximate completions. In particular, we
introduce a primal-dual interior-point algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the
efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed.
- Optim. Methods Softw , 1998
"... How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved by either
the two-phase approach or using the so-called big M technique. In the interior point method, there is a more ..."
Cited by 18 (2 self)
Add to MetaCart
How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved by either the
two-phase approach or using the so-called big M technique. In the interior point method, there is a more elegant way to deal with the initialization problem, viz. the self-dual embedding technique
proposed by Ye, Todd and Mizuno [30]. For linear programming this technique makes it possible to identify an optimal solution or conclude the problem to be infeasible/unbounded by solving its
embedded self-dual problem. The embedded self-dual problem has a trivial initial solution and has the same structure as the original problem. Hence, it eliminates the need to consider the
initialization problem at all. In this paper, we extend this approach to solve general conic convex programming, including semidefinite programming. Since a nonlinear conic convex programming problem
may lack the so-called stri...
, 1998
"... this document. The first chapter is about mathematical programming. We will start by describing how and why researchers were led to study special types of mathematical programs, namely convex
programs and conic programs. We will also provide a detailed discussion about conic duality and give a class ..."
Cited by 9 (0 self)
Add to MetaCart
this document. The first chapter is about mathematical programming. We will start by describing how and why researchers were led to study special types of mathematical programs, namely convex
programs and conic programs. We will also provide a detailed discussion about conic duality and give a classification of conic programs. We will then describe what are self-scaled cones and why they
are so useful in conic programming. Finally, we will give an overview of what can be modelled using a SQL conic program, keeping in mind our pattern separation problem. Since most of the material in
the chapter is standard, many of the proofs are omitted. The second chapter will concentrate on pattern separation. After a short description of the problem, we will successively describe four
different separation methods using SQL conic programming. For each method, various properties are investigated. Each algorithm has in fact been successively designed with the objective of eliminating
the drawbacks of the previous one, CONTENTS 3 while keeping its good properties. We conclude this chapter with a small section describing the state of the art in pattern separation with ellipsoids.
The third chapter reports some computational experiments with our four methods, and provides a comparison with other separation procedures. Finally, we conclude this work by providing a short
summary, highlighting the author's personal contribution and giving some interesting perspectives for further research. Chapter 1 Conic programming 1.1 Introduction
- Math. Program , 1998
"... In this paper we study the properties of the analytic central path of a semide nite programming problem under perturbation of a set of input parameters. Speci cally, we analyze the behavior of
solutions on the central path with respect to changes on the right hand side of the constraints, including ..."
Cited by 8 (2 self)
Add to MetaCart
In this paper we study the properties of the analytic central path of a semide nite programming problem under perturbation of a set of input parameters. Speci cally, we analyze the behavior of
solutions on the central path with respect to changes on the right hand side of the constraints, including the limiting behavior when the central optimal solution is approached. Our results are of
interest for the sake ofnumerical analysis, sensitivity analysis and parametric programming. Under the primal-dual Slater condition and the strict complementarity condition we show that the
derivatives of central solutions with respect to the right hand side parameters converge as the path tends to the central optimal solution. Moreover, the derivatives are bounded, i.e. a Lipschitz
constant exists. This Lipschitz constant can be thought of as a condition number for the semide nite programming problem. It is a generalization of the familiar condition number for linear equation
systems and linear programming problems. However, the generalized condition number depends on the right hand side parameters as well, whereas it is well-known that in the linear programming case the
condition number depends only on the constraint matrix. We demonstrate that the existence of strictly complementary solutions is important for the Lipschitz constant to exist. Moreover, we give an
example in which the set of right hand side parameters for which the strict complementarity condition holds is neither open nor closed. This is remarkable since a similar set for which the
primal-dual Slater condition holds is always open. Key words: analytic central path, semide nite programming, sensitivity, condition number.
- SYSTEMS & CONTROL LETTERS , 1999
"... Based on the techniques of Euclidean Jordan algebras, we prove complexity estimates for a long-step primal-dual interior-point algorithm for the optimization problem of the minimization of a
linear function on a feasible set obtained as the intersection of an ane subspace and a symmetric cone. This ..."
Cited by 6 (3 self)
Add to MetaCart
Based on the techniques of Euclidean Jordan algebras, we prove complexity estimates for a long-step primal-dual interior-point algorithm for the optimization problem of the minimization of a linear
function on a feasible set obtained as the intersection of an ane subspace and a symmetric cone. This result provides a meaningful illustration of a power of the technique of Euclidean Jordan
algebras applied to problems under consideration.
, 1999
"... We consider the problem of optimizing a linear function over the intersection of an a#ne space and a special class of closed, convex cones, namely the symmetric cones over the reals. This
problem subsumes linear programming, convex quadratically constrained quadratic programming, and semidefinite pr ..."
Cited by 5 (0 self)
Add to MetaCart
We consider the problem of optimizing a linear function over the intersection of an a#ne space and a special class of closed, convex cones, namely the symmetric cones over the reals. This problem
subsumes linear programming, convex quadratically constrained quadratic programming, and semidefinite programming as special cases. First, we derive some perturbation results for this problem class.
Then, we discuss two solution methods: an interior-- point method capable of delivering highly accurate solutions to problems of modest size, and a first order bundle method which provides solutions
of low accuracy, but can handle much larger problems. Finally, we describe an application of semidefinite programming in electronic structure calculations, and give some numerical results on sample
problems. vi Contents Dedication iii Acknowledgment iv Abstract vi List of Figures ix List of Tables x List of Symbols and Notations x 1 Conic Optimization Problems 1 1.1 Problem Formulation . . . .
. . . ...
, 1998
"... Primal-dual interior-point methods have proven to be very successful for both linear programming (LP) and, more recently, for semidefinite programming (SDP) problems. Many of the techniques that
have been so successful for LP have been extended to SDP. In fact, interior point methods are currently t ..."
Cited by 5 (3 self)
Add to MetaCart
Primal-dual interior-point methods have proven to be very successful for both linear programming (LP) and, more recently, for semidefinite programming (SDP) problems. Many of the techniques that have
been so successful for LP have been extended to SDP. In fact, interior point methods are currently the only successful techniques for SDP. Research supported by Natural Sciences Engineering Research
Council Canada. Email sgkruk@acm.org y Department of Mechanical Engineering, Sophia University, 7-1 Kioi-cho, Chiyoda-ku, Tokyo 102 Japan z Technische Universitat Graz, Institut fur Mathematik,
Steyrergasse 30, A-8010 Graz, Austria x EMS Program Director, ACE-42 E-Quad, Princeton University, Princeton NJ 08544, Tel: 609-2580876, Fax: 609-258-3796, E-mail rvdb@princeton.edu, http://
www.princeton.edu/~rvdb/ -- Research supported by Natural Sciences Engineering Research Council Canada. Email hwolkowi@orion.math.uwaterloo.ca, http://orion.math.uwaterloo.ca/~hwolkowi 0 This report
is av...
, 1999
"... Geometric optimization is an important class of problems that has many applications, especially in engineering design. In this article, we provide new simplified proofs for the well-known
associated duality theory, using conic optimization. After introducing suitable convex cones and studying their ..."
Cited by 5 (1 self)
Add to MetaCart
Geometric optimization is an important class of problems that has many applications, especially in engineering design. In this article, we provide new simplified proofs for the well-known associated
duality theory, using conic optimization. After introducing suitable convex cones and studying their properties, we model geometric optimization problems with a conic formulation, which allows us to
apply the powerful duality theory of conic optimization and derive the duality results valid for geometric optimization.
, 1998
"... For iterative sequences that converge to the solution set of a linear matrix inequality, we show that the distance of the iterates to the solution set is at most O(ffl 2 \Gammad ). The
nonnegative integer d is the so--called degree of singularity of the linear matrix inequality, and ffl denotes th ..."
Cited by 4 (0 self)
Add to MetaCart
For iterative sequences that converge to the solution set of a linear matrix inequality, we show that the distance of the iterates to the solution set is at most O(ffl 2 \Gammad ). The nonnegative
integer d is the so--called degree of singularity of the linear matrix inequality, and ffl denotes the amount of constraint violation in the iterate. For infeasible linear matrix inequalities, we
show that the minimal norm of ffl--approximate primal solutions is at least 1=O(ffl 1=(2 d \Gamma1) ), and the minimal norm of ffl--approximate Farkas-- type dual solutions is at most O(1=ffl 2 d \
Gamma1 ). As an application of these error bounds, we show that for any bounded sequence of ffl--approximate solutions to a semi-definite programming problem, the distance to the optimal solution set
is at most O(ffl 2 \Gammak ), where k is the degree of singularity of the optimal solution set. Keywords: semi-definite programming, error bounds, linear matrix inequality, regularized duality. AMS
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=215601","timestamp":"2014-04-16T05:46:28Z","content_type":null,"content_length":"38816","record_id":"<urn:uuid:bfb7346f-e6f3-45a6-93da-4ad86b2016fa>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of vectors in a vector of vectors
I have a 2d vector that is not necessarily square. I want to know how many rows and columns it has (obviously, it isn't really 2d, but I'm thinking of it as an array).
The number of elements of each vector within this vector can be found with vec[i].size(), but what about the number of vector elements within the original vector? Can I use vec.size()? Or will that
return the total number of elements in the 2d vector?
Try it, see what happens. It should be pretty easy.
I was hoping the OP could figure that out himself.
I think one can learn more by testing their own theories, rather than just getting the answer from someone.
I am not trying to offend anyone, just my Opinion :+) Cheers
Went with what I said in the OP and everything worked out, thanks all the same.
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/beginner/82434/","timestamp":"2014-04-20T21:07:43Z","content_type":null,"content_length":"9658","record_id":"<urn:uuid:fd7a1bb4-26b5-490d-9c43-125f36ca26a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SAT Skills Insight
Select a score band
Algebra and Functions
Skills needed to score in this band
SKILL 1: Solve problems involving complex fractions
SKILL 2: Solve problems involving functions defined with unfamiliar symbols in one or more variables
SKILL 3: Identify, apply, and represent transformations of functions, graphically and algebraically (e.g., vertical shift)
SKILL 4: Apply properties of non-integer exponents
SKILL 5: Solve multistep problems involving algebraic inequalities
SKILL 6: Solve word problems involving rate of change in nonlinear or piecewise-linear settings
SKILL 7: Identify and use the relationship between the slope of a line and algebraic rate of change
SKILL 8: Interpret and solve word problems using multistep proportional reasoning
SKILL 9: Transform an equation or expression by raising it to a power
1. 1
Solve problems involving complex fractions
2. 2
Solve problems involving functions defined with unfamiliar symbols in one or more variables
3. 3
Identify, apply, and represent transformations of functions, graphically and algebraically (e.g., vertical shift)
The graphs of the functions
4. 4
Apply properties of non-integer exponents
5. 5
Solve multistep problems involving algebraic inequalities
6. 6
Solve word problems involving rate of change in nonlinear or piecewise-linear settings
The function above can be used to model the population of a certain endangered species of animal. If decades after the year 1900, which of the following is true about the population of the
species from 1900 to 1920?
7. 7
Identify and use the relationship between the slope of a line and algebraic rate of change
The graph above shows the amount of water remaining in a tank each time a pail was used to remove
8. 8
Interpret and solve word problems using multistep proportional reasoning
9. 9
Transform an equation or expression by raising it to a power
|
{"url":"http://sat.collegeboard.org/practice/sat-skills-insight/math/band/700/skill/2","timestamp":"2014-04-20T16:29:45Z","content_type":null,"content_length":"89275","record_id":"<urn:uuid:2e65d044-801b-4b72-a6b4-d90dc9a58587>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Determine whether each sequence is geometric. If so, find the common ratio 5, 202, 35, 50. Please explain thoroughly :P
Best Response
You've already chosen the best response.
are there multiple sequences here? I don't see anything here, I just see four numbers. Are those four numbers part of a sequence and are those the first four terms in order?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f5e8748e4b0602be4391922","timestamp":"2014-04-20T14:08:12Z","content_type":null,"content_length":"27759","record_id":"<urn:uuid:36656005-edf4-45c4-af89-a24f9aa7e89a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Application: Diffie-Hellman key exchange
Application: Diffie-Hellman key exchange
How do two people, Amelia and Ben, share a secret over an open channel of communications? It may seem incredible at first, but it is possible and is the remarkable discovery of Whitfield Diffie, now
at Sun Microsystems, and Martin Hellman, an electrical engineering professor at Stanford University, in 1976.
The idea is to first determine a shared secret key, which one could use (for example) to encrypt the messages between them. Its security depends on the difficulty of the discrete log problem,
discussed in the previous section.
• Pick a ``large'' prime number
• Exchange data as follows:
□ Amelia picks a random secret
□ Ben picks a random secret
• Compute keys as follows:
□ Amelia receives
□ Ben receives
More details are given in [MOV], §12.6.
David Joyner 2007-09-03
|
{"url":"http://www.permutationpuzzles.org/AAAbook/node47.html","timestamp":"2014-04-16T16:14:14Z","content_type":null,"content_length":"5496","record_id":"<urn:uuid:75758974-5140-4cc1-9607-daee2f9afd95>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Foil Wrapped Around a Spindle
Date: 12/17/97 at 17:18:00
From: Liz Brown
Subject: Integration?
I have a problem in an engineering work environment. I have a tightly
rolled sheet of foil wrapped around a spindle. I know the diameter of
the spindle (Do) and the external diameter of the combined spindle and
surrounding foil (Di) and the thickness of a sheet of foil (t). I need
to know the length of foil (L) needed to generate the external
I have been given a formula by word of mouth
L = pi (square(Do) - square(Di) ) / 4t
I need to know if this is an approximation of a more accurate formula
as I am dealing with a thickness of 0.01 mm and the length calculation
needs to be accurate for production purposes. Is it possible to prove
from first principles?
I would be grateful of any help you can give me. I did A level Maths
but that was at least 15 years ago. Thank you.
Date: 12/22/97 at 15:53:20
From: Doctor Mark
Subject: Re: Integration?
Hi Liz,
If I understand your problem correctly, (Do) should be the diameter of
the *spindle and surrounding foil* (so that it is Do(uter)), while
(Di) should be the diameter of the *spindle alone* (so that it is
Di(nner) - not that I am expecting you to eat it!). With that change
in the definitions, then the formula you were given seems correct to
me, though it is 1) an approximation of a sort and 2) more complicated
than is necessary.
I expect that the diameter of the spindle is much larger than the
thickness of the foil. Is that right? That would be the case if the
0.01 mm represents the thickness of the foil, and if Di is of the
order of a few mm or larger. If so, then the formula you were given is
correct to a *very* good approximation.
However, it is should also be the case that the thickness t of the
foil should be the difference between Do and Di,
t = Do - Di
and so your formula actually reduces (by factoring the polynomial as:
a^2 - b^2 = (a+b)(a-b), with a = Do, and b = Di) to
L = pi ( [Di + Do]/2 )
That formula could more easily be derived by drawing a picture (which
we can use to get the original formula as well). Draw two circles with
the same center, one of radius Di (the i[nner] circle--the spindle),
one of radius Do (the o[uter] circle - the spindle plus foil). The
foil is represented by the region outside the inner circle and inside
the outer circle.
If you now draw a circle, with the same center as the first two, with
its boundary (the circumference) halfway in between the two circles,
then the diameter of this circle should be the average of the inner
and outer diameters, i.e., of Di and Do. The circumference of this new
circle will be pi times that average, and this circumference is (to a
very good approximation, if t is small compared to Di) just L, given
by the formula above this paragraph.
Your original formula comes from finding the area of that region in
between the two circles, which is:
area between the two circles = area of outer circle -
area of inner circle
= pi [ (radius of outer circle)^2 - (radius of inner circle)^2 ]
= pi [ (Do/2)^2 - (Di/2)^2 ] = pi [ {(Do)^2}/4 - {(Di)^2}/4 ]
= pi [ (Do)^2 - (Di)^2 ]/4
If t is very small compared to Di and Do, then the region between
the circles looks sort of like a very thin rectangle of length L
and width t, wrapped around the circle of radius Di. Equating the
area between the two circles to the the area of the rectangle, which
is (length) x (width) = Lt, we get:
Lt = pi [ (Do)^2 - (Di)^2 ]/4
Divide both sides by t, and this is your original formula.
Personally, I would use the L = pi*(Do + Di)/2 formula, since it is
less susceptible to small errors in the measurement of Do and Di. In
fact, since Do and Di are almost the same, I would probably use the
even simpler formula
L = pi(Di).
That is, the length of the foil is just the circumference of the
spindle. I suspect that (again, if the thickness of the foil is a lot
less than the diameter of the spindle) this would be good enough for
most practical purposes.
None of what I said is going to be true if, in fact, the foil is *not*
really thin compared to the diameter of the spindle. Write back if
that is what you need.
Hope this has been of some help.
-Doctor Mark, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/56474.html","timestamp":"2014-04-21T10:58:18Z","content_type":null,"content_length":"9443","record_id":"<urn:uuid:9df9550f-36ea-48b6-a607-349131212665>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advanced High School Mathematics
Integers and polynomials form number systems which have the following things in common:
• The two number systems closed under addition, subtraction and multiplication.
• Objects in the respective number systems can be factored into “primes.” You’re familiar with what an prime integer is. A prime polynomial is just a polynomial that doesn’t factor further. To make
this more precise we have to specify what sorts of coefficients the factors can have.The simplest case is if the factors are allowed to have complex coefficients. In this case, according to the
Fundamental Theorem of Algebra and the factor theorem a polynomial with complex coefficients can be factored into a constant times linear factors; so the “prime polynomials” are just the linear
polynomials (ax – b) where a and b are complex numbers.The next simplest case is the case where the factors are allowed to have real coefficients. In this case, using the fundamental theorem of
algebra and the fact that the roots of a polynomial with real coefficients come in complex conjugate pairs one can prove that every polynomial with real coefficients can be factored into linear
polynomials and irreducible quadratic polynomials with real coefficients, so the “prime polynomials” are the linear polynomials (ax – b) where a and b are real and the quadratic polynomials Ax^2
+ Bx + C which have real coefficients and non-real roots.Things become more complicated if we require that the coefficients be rational numbers. Most polynomials with rational coefficients do not
factor further into polynomials with rational coefficients; I mentioned this the fifth paragraph of my post titled The Rational Root Theorem. So in this case most polynomials are prime, though
obviously there are exceptions (to find one just take two polynomials with rational coefficients and multiply them together!)
• Both integers and polynomials obey unique prime factorization.One needs to be a little bit careful here. A straightforward point is that two factorizations are considered to be the same if they
differ only in the order of the factors, e.g. (x – 1)(x + 2) is considered to be the same factorization as (x + 2)(x – 1). More subtly, two factorizations are considered to be the same if the
factors differ by units. The units of a number system like the integers or the polynomials are the elements which have inverses that are also integers/polynomials. The units in the set of
integers are 1 and -1 (the latter of which is its own inverse). The units in the set of polynomials are the nonzero constant polynomials (the inverse of f(x) = a is g(x) = 1/a). So for example
the factorization 15 = (3)(5) is considered to be the same as 15 = (-3)(-5). Similarly the factorization of 2x^2 – 12x + 10 given by (2x – 2)(x – 5)
is considered to be the same as (x – 1)(2x – 10). Once one makes these qualifications, an integer has only one factorization into primes numbers and a polynomial has only one factorization into
prime polynomials.
• Long division is possible.Given an integer ‘a’ and an integer ‘b’ one can write a = b*q + r in exactly one way with ‘r’ is between 0 and b. Here ‘q’ is called the “quotient” of ‘a’ upon division
by ‘b’ and ‘r’ is called the remainder.Similarly, if f(x) is a polynomial and g(x) is a polynomial then one can write f(x) = g(x)q(x) + r(x)in exactly one way with the degree of r(x) between 0
and the degree of g(x); again, q(x) is called the quotient and r(x) is called the remainder.
• Neither the integers nor the polynomials are closed under division but they have natural extensions that are closed under division: the smallest number system that the integers can be extended to
which is closed under division is the rational numbers and the smallest number system that the polynomials can be extended to which is closed under division is the set of rational functions.
Combining this with unique prime factorization one can say that every rational number/rational function factors uniquely into prime integers/polynomials (possibly with negative exponents to
account for the denominators).
• Both rational numbers and rational functions have partial fraction decompositions. In class we talked about how to find the partial fraction decomposition of something like f(x) = 1/[(x - 1)(x +
2)] by writing it as A/(x – 1) + B/(x +2) and solving for A and B.More generally, given a rational function f(x)/g(x) with f(x) having smaller degree than g(x), one can factor g(x) into prime
polynomials and write f(x)/g(x) as a sum of terms each one of which has denominator consisting of only a single prime polynomial (possibly raise to a power). This is covered thoroughly in Section
11.6 of the text.What we didn’t talk about in class is how one can write a number like 1/15 as a sum of two terms rational number search with denominator a prime factor of 15. We try to write1/15
= x/3 + y/5.where ‘x’ and ‘y’ are integers. As in the case of rational functions, we clear the denominators to get 1 = 5x + 3y. The question now is how to find ‘x’ and ‘y’ that work. In contrast
with the case of rational functions the situation here is more subtle: the methods that we discussed in class that work for rational functions don’t work here. In this particular case we can
eyeball the solution x = 2 and y = -3 to decompose 1/15 as1/15 = 2/3 – 3/5but this evades the general issue: suppose that n is a product of two primes p and q and that we want to write 1/n = x/p
+ y/q for some integers a and b: is this possible and if so how do we find ‘a’ and ‘b’ that work?
Equivalently, given primes p and q how can we find integers x and y such that
1 = px + qy ?
The answer to this question was known to the ancient Greeks: it comes from the so-called Euclidean algorithm. There’s much to say here but I’ll refrain from doing so now; postponing the
discussion for later. Instead I’ll just remark that the fact that it’s more difficult to prove a statement about integers than about polynomials is in fact typical in mathematics. For example,
Andrew Wiles proved Fermat’s Last Theorem around 1995 but the the polynomial analog had already been proven around 1900.
A book that I like which expands on some of the themes that I mention above is Ronald Irving’s Integers, Polynomials and Rings. The analogy between integers and polynomials runs still deeper than
what I have discussed here but we’ve reached a suitable breaking point. I hope this will be useful for you in organizing some of what we’ve been doing in class.
Exponential growth and order of magnitude reasoning
In Unit 7 we’re covering the exponential and logarithm functions. One of the features of exponential functions that takes the most time to get used to is how fast they grow.
A famous illustration of the surprisingly fast rate of exponential growth is the Wheat and chessboard problem has been popularized in several children’s books which you may have encountered before;
for example One Grain Of Rice: A Mathematical Folktale. The premise of the story is that a wise man is offered a gift by a king and the wise man asks to be given rice for 64 days, one grain on the
first day, two grains on the second day, four grains on the first day, and so on. The king readily agrees as the amount of rice as it initially seems like the wise man is asking for a few grains of
rice but as the days go on the king discovers that the wise man was implicitly asking for far more rice than the kingdom could possibly produce.
One day in 4th period precalculus I commented that the number of terms in the formula for the determinant of a 100 by 100 matrix is 100! which is bigger than the number of atoms in the observable
universe. Many students were incredulous; claiming that the number of atoms in the universe was so big this couldn’t possibly be true. I used Wolfram Alpha to compute that 100! is approximately 10^
(158) and those students who were incredulous remained so.
I think that the source of confusion is not a lack of intuition about chemistry or physics or astronomy but rather a lack of understanding how big a number 10^(158) is. Hopefully I can give some
sense of this via a plausibility argument for my claim from class via a Fermi Calculation.
[Disclaimer: I have no subject matter expertise in astronomy or astrophysics, the numbers that I give below are just the numbers that I found with a quick internet search from apparently reputable
sources; it's possible that the expert consensus for some of these numbers is very different from the numbers quoted or that I've misunderstood something crucial. Despite this, I think that the
analysis below will carry some pedagogical value.]
Number of Atoms in the Observable Universe: Many of you are in chemistry and so are familiar with the fact that Avogadro number is the number of particles in a mole of a substance. So, e.g. there are
Avogadro number of hydrogen atoms in 1 grams of hydrogen. The sun is mostly made up of hydrogen. The more massive the elements are that compose the sun, the fewer atoms there are in the sun, so if we
assume that the sun is made up entirely of hydrogen we’ll get an upper bound on the number of atoms in the sun of the sun. The mass of the sun is less 10^(34) grams and Avogadro’s number is less than
10^(24) so the number of atoms in the sun is less than (10^(24))(10^(34)) = 10^(58).
We can use this as an estimate of the mass of the solar system. In class some of you commented that the calculation above neglects the masses of other objects other than stars like Earth. But the
mass of Earth is a billionth the mass of the sun, and indeed the mass of Jupiter (the largest planet in the universe) is a thousandth of the mass of the sun. The mass of the asteroid belt is only 4%
of that of the moon which is in turn less than a 10 billionth the mass of the sun. So the sun is by far the dominant mass in the solar system; everything else combined is less than 1% of the mass of
the sun. Intuitively this makes sense; if other objects were of comparable mass then the dynamics of the solar system would be more complicated than what they are; everything wouldn’t rotate around
the sun in such a clean fashion.
Now, I claimed that 100! is bigger than the number of atoms the observable universe, not just in the solar system. One issue is that there are stars that are bigger than the sun. The most massive
stars known stars seem to have ~ 300 times the mass of the sun. But suns aren’t the most massive stellar objects in the universe, that honor belongs to supermassive black holes which are “on the
order of hundreds of thousands to billions of solar masses.” To be conservative let’s imagine that every star is a supermassive black hole of a trillion (10^12) solar masses. Assuming that such a
black hole is made out of hydrogen atoms; (I actually don’t know what kind of matter a black hole is considered to be!) the number of atoms is no more than:
(10^(58))(10^(12)) = 10^(70)
Okay, now the universe doesn’t just have one stellar object, it has many stellar objects. Nevertheless, there are no more than a trillion (10^(12)) stars in the Milky Way Galaxy. Even the most
massive galaxy discovered has no more than a quadrillion (10^(15)) stars. So the largest number of atoms that a galaxy could have is bounded above by
(10^(70))(10^15) = 10^(85).
Now, the current best estimate for the number of galaxies in the observable universe is about 500 billion, to be conservative let’s use the number of ten trillion (10^16). Then an upper bound on the
number of atoms that the observable universe could have is
(10^(85))(10^(16)) = 10^(101).
This number is less than 100! ~ 10^(158); not only a little bit smaller but an unimaginable amount.
I’ll note that National Solar Observatory webpage allegedly has an article estimating the number of atoms in the universe as 10^(78).
Conclusion: Our natural intuitions are not well calibrated to thinking accurately about exponentially large or exponentially small quantities; accurately thinking about them requires a combination of
care and re-calibrated intuition gained from experience with working with such quantities.
The Rational Root Theorem
The rational root theorem states :
Let $P(x) = a_{n}x^n + a_{n-1}x^{n-1} + ... + a_{1}x + a_{0}$ be a polynomial with integer coefficients. Suppose that $a = \frac{p}{q}$ is a root of $P(x)$ where $p$ and $q$ have no
common factors. Then $a_{0}$ is divisible by $p$ and $a_{n}$ is divisible by $q$.
Since each of $a_{n}$ and $a_{0}$ have finitely many factors, for a given polynomial this theorem generates a finite list of rational numbers with the property that all rational roots of the
polynomial are on the list. Thus, the rational root theorem gives a finite algorithm that can be used to find all rational roots of a given polynomial.
Some high school textbooks suggest that in order to find the roots of a polynomial with integer coefficients, one should use the rational root theorem to find all candidate rational roots of a
polynomial equation and then plug them into the polynomial to determine whether they’re roots. The assumption implicit in this is that it will often happen that a polynomial with integer coefficients
has a rational root. While this is often case for the high school textbook problems which have been contrived, this is seldom the case for general polynomials with integer coefficients and as such
the textbook presentation of the application of the rational root theorem to find roots of a polynomial is misleading.
Polynomials with integer coefficients almost never have rational roots
Mathworld has a nice article with a picture illustrating which degree 5 polynomials $x^5 + px + q$ with $p, q$ integers factor further into polynomials with rational coefficients and which ones
don’t; the former type corresponding to yellow boxes. From the picture it’s clear that the vast majority of polynomials of the type under consideration do not factor further into polynomials with
rational coefficients. But if a polynomial does not factor further into polynomials with rational coefficients it can have no linear factors with rational coefficients so by the factor theorem it has
no rational root.
The limited data illustrated is by no means a proof, however, the picture should be enough to dispel the notion that the rational root theorem provides a good method of finding the roots of a
polynomial with integer coefficients. In fact, it has been proven that in a statistical sense, almost no polynomial with integer coefficients factors into polynomials with rational coefficients at
all, this is even stronger than saying that a polynomial doesn’t have a linear factor with rational coefficients..
For practical applications one doesn’t need to find the exact values of the roots of a polynomial
It suffices to find a decimal approximation to the value of the root of a polynomial. While the number of decimals needed varies, according to Wikipedia
While the decimal representation of π has been computed to more than a trillion (10^12) digits,^[21] elementary applications, such as estimating the circumference of a circle, will rarely require
more than a dozen decimal places. For example, the decimal representation of π truncated to 11 decimal places is good enough to estimate the circumference of any circle that fits inside the Earth
with an error of less than one millimetre, and the decimal representation of π truncated to 39 decimal places is sufficient to estimate the circumference of any circle that fits in the observable
universe with precision comparable to the radius of a hydrogen atom.
There are methods such as Newton’s method that almost always work to find very good decimal approximations to the roots of a polynomial. Newton’s method uses calculus but no more than the basics.
|
{"url":"http://eulerianmath.wordpress.com/","timestamp":"2014-04-21T12:08:01Z","content_type":null,"content_length":"50512","record_id":"<urn:uuid:e5919f54-2477-4625-aea5-6024cf1d9986>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of statistically independent
Independent component analysis
) is a computational method for separating a
signal into additive subcomponents supposing the mutual
statistical independence
of the non-Gaussian source signals. It is a special case of
blind source separation
When the independence assumption is correct, blind ICA separation of a mixed signal gives very good results. It is also used for signals that are not supposed to be generated by a mixing for analysis
purposes. A simple application of ICA is the “cocktail party problem”, where the underlying speech signals are separated from a sample data consisting of people talking simultaneously in a room.
Usually the problem is simplified by assuming no time delays and echoes. An important note to consider is that if N sources are present, at least N observations (i.e., microphones) are needed to get
the original signals. This constitutes the square (J = D, where D is the input dimension of the data and J is the dimension of the model). Other cases of underdetermined (J < D) and overdetermined (J
> D) have been investigated.
The statistical method finds the independent components (aka factors, latent variables or sources) by maximizing the statistical independence of the estimated components. Non-Gaussianity, motivated
by the central limit theorem, is one method for measuring the independence of the components. Non-Gaussianity can be measured, for instance, by kurtosis or approximations of negentropy. Mutual
information is another popular criterion for measuring statistical independence of signals.
Typical algorithms for ICA use centering, whitening (usually with the eigenvalue decomposition), and dimensionality reduction as preprocessing steps in order to simplify and reduce the complexity of
the problem for the actual iterative algorithm. Whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition. Whitening ensures that all
dimensions are treated equally a priori before the algorithm is run. Algorithms for ICA include infomax, FastICA, and JADE, but there are many others also.
Most ICA methods are not able to extract the actual number of source signals, the order of the source signals, nor the signs or the scales of the sources.
ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new
vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.
Mathematical definitions
Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case.
General definition
The data is represented by the random vector $x=(x_1,ldots,x_m)$ and the components as the random vector $s=(s_1,ldots,s_n)$. The task is to transform the observed data $x$, using a linear static
transformation W as
s = W x
into maximally independent components
measured by some function
of independence.
Generative model
Linear noiseless ICA
The components $x_i$ of the observed random vector $x=(x_1,ldots,x_m)^T$ are generated as a sum of the independent components $s_k$, $k=1,ldots,n$:
$x_i = a_{i,1} s_1 + ldots + a_{i,k} s_k + cdots + a_{i,n} s_n$
weighted by the mixing weights $a_{i,k}$.
The same generative model can be written in vectorial form as $x=sum_{k=1}^{n} s_k a_k$, where the observed random vector $x$ is represented by the basis vectors $a_k=(a_{1,k},ldots,a_{m,k})^T$. The
basis vectors $a_k$ form the columns of the mixing matrix $A=(a_1,ldots,a_n)$ and the generative formula can be written as $x=As$, where $s=(s_1,ldots,s_n)^T$.
Given the model and realizations (samples) $x_1,ldots,x_N$ of the random vector $x$, the task is to estimate both the mixing matrix $A$ and the sources $s$. This is done by adaptively calculating the
$w$ vectors and setting up a cost function which either maximizes the nongaussianity of the calculated $s_k = (w^T*x)$ or minimizes the mutual information. In some cases, a priori knowledge of the
probability distributions of the sources can be used in the cost function.
The original sources $s$ can be recovered by multiplying the observed signals $x$ with the inverse of the mixing matrix $W=A^{-1}$, also known as the unmixing matrix. Here it is assumed that the
mixing matrix is square ($n=m$). If the number of basis vectors is greater than the dimensionality of the observed vectors, $n, the task is overcomplete but is still solvable. Linear noisy ICA With
the added assumption of zero-mean and uncorrelated Gaussian noise nsim N(0,operatorname{diag}(Sigma)), the ICA model takes the form x=As+n. Nonlinear ICA The mixing of the sources does not need to be
linear. Using a nonlinear mixing function f(cdot|theta) with parameters theta the nonlinear ICA model is x=f(s|theta)+n. Identifiability The identifiability of independent component analysis requires
that: At most one of the sources s_k can be Gaussian, The number of observed mixtures, m, must be at least as large as the number of estimated components n: m ge n. It is equivalent to say that the
mixing matrix A must be of full rank, for inverse to exist. See also Blind deconvolution Blind signal separation (BSS) Factor analysis Factorial codes Hilbert spectrum Image processing Non-negative
matrix factorization (NMF) Nonlinear dimensionality reduction Principal component analysis (PCA) Projection pursuit Redundancy reduction Signal processing Singular value decomposition (SVD) Varimax
rotationReferences Pierre Comon (1994): Independent Component Analysis: a new concept, Signal Processing, Elsevier, 36(3):287--314 (The original paper describing the concept of ICA) A. Hyvärinen, J.
Karhunen, E. Oja (2001): Independent Component Analysis, New York: Wiley, ISBN 978-0-471-40540-5 Introductory chapter J.V. Stone, (2005): A Brief Introduction to Independent Component Analysis in
Encyclopedia of Statistics in Behavioral Science, Volume 2, pp. 907–912, Editors Brian S. Everitt & David C. Howell, John Wiley & Sons, Ltd, Chichester, 2005 ISBN 978-0-470-86080-9 T.-W. Lee (1998):
Independent component analysis: Theory and applications, Boston, Mass: Kluwer Academic Publishers, ISBN 0 7923 8261 7 External links What is independent component analysis? by Aapo Hyvärinen
Nonlinear ICA, Unsupervised Learning, Redundancy Reduction by Jürgen Schmidhuber, with links to papers FastICA as a package for Matlab, in R language, C++ ICALAB Toolboxes for Matlab, developed at
RIKEN High Performance Signal Analysis Toolkit provides C++ implementations of FastICA and Infomax Free software for ICA by JV Stone. ICA toolbox Matlab tools for ICA with Bell-Sejnowski,
Molgedey-Schuster and mean field ICA. Developed at DTU. Demonstration of the cocktail party problem EEGLAB Toolbox ICA of EEG for Matlab, developed at UCSD. FMRLAB Toolbox ICA of fMRI for Matlab,
developed at UCSD Discussion of ICA used in a biomedical shape-representation context$
|
{"url":"http://www.reference.com/browse/statistically+independent","timestamp":"2014-04-20T22:24:51Z","content_type":null,"content_length":"90193","record_id":"<urn:uuid:c1211d38-6aac-4a4b-bf4f-ccbc83208ec6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
January 29th 2008, 01:47 AM #1
Junior Member
Oct 2007
The following table shows the number of items of certain product imported into the United Kingdom is given below in thousands of units for the Years
Years 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964
Consumption 170 210 188 98 83 131 205 182 90 92
of Cotton
(Thousands of bales).
i) 5-years moving total
ii) 5-years moving average.
All calculations should be in Excel.
The following table shows the number of items of certain product imported into the United Kingdom is given below in thousands of units for the Years
Years 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964
Consumption 170 210 188 98 83 131 205 182 90 92
of Cotton
(Thousands of bales).
i) 5-years moving total
ii) 5-years moving average.
All calculations should be in Excel.
Excel automatically compiles these results for you by graphing it. However, please note that a simple moving average (SMA) is the unweighted mean of the previous n data points. Therefore, you
calculate the mean of the last five years of data for each point. Likewise, you sum the values of the last five years for the moving total.
Excel automatically compiles these results for you by graphing it. However, please note that a simple moving average (SMA) is the unweighted mean of the previous n data points. Therefore, you
calculate the mean of the last five years of data for each point. Likewise, you sum the values of the last five years for the moving total.
I would have put the five year moving avaerage for year $n$ tobe:
that is I would always prefer the central moving average.
(of course Excel just gives the rolling MA)
It depends where it is centered around. Simple moving averages look at past data, because that data is given and known. You cannot use unknown values in the future for a moving average.
January 29th 2008, 05:12 AM #2
January 29th 2008, 06:37 AM #3
Grand Panjandrum
Nov 2005
January 29th 2008, 06:40 AM #4
|
{"url":"http://mathhelpforum.com/business-math/27040-calculations.html","timestamp":"2014-04-19T17:45:42Z","content_type":null,"content_length":"42736","record_id":"<urn:uuid:4165e52d-a498-4294-9b05-f8e52a9c26fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ridgewood, NJ Precalculus Tutor
Find a Ridgewood, NJ Precalculus Tutor
...Some of my recent successful students are graduating imminently from the prestigious engineering MBA offered at Wharton University of Penn. Hello,I've been tutoring all aspects of the ASVAB for
over 2 years. I have found my knowledge of advanced mathematics, English and other standardized tests can be directly applied to help potential students achieve their goals in this test.
55 Subjects: including precalculus, English, calculus, reading
...I have tutored SAT prep (reading, writing, and math) both privately and for the Princeton Review. I earned a BA from the University of Pennsylvania and an MA from Georgetown University. I have
tutored GRE prep both privately and for the Princeton Review.
20 Subjects: including precalculus, English, algebra 2, grammar
...I have worked over 20 years in research in the oil, aerospace, and investment management industries. I also have extensive teaching experience -- both as a mathematics tutor and an adjunct
professor. I have a Ph.D. in chemical engineering from the California Institute of Technology, and a minor concentration in applied mathematics.
11 Subjects: including precalculus, calculus, algebra 1, SAT math
...In addition, I have experience teaching algebra, geometry, trigonometry, pre-calculus, and SAT math prep. In my math SAT prep course, students demonstrated an increase on average of 200 points
on their SAT math section. Prior to becoming a teacher, I applied my engineering education to work primarily in operations and manufacturing, including time at two Fortune 500 Companies.
9 Subjects: including precalculus, geometry, statistics, algebra 1
...I also have a Mathematics Minor and additional Statistics classes. I worked for 5 years in Marketing Analytics which utilized my Econometrics and Statistical background. In high school I
graduated in the Top 10, and had the highest SAT score in my graduating class, with a perfect score in Math.
13 Subjects: including precalculus, calculus, statistics, algebra 1
|
{"url":"http://www.purplemath.com/Ridgewood_NJ_Precalculus_tutors.php","timestamp":"2014-04-20T07:02:44Z","content_type":null,"content_length":"24467","record_id":"<urn:uuid:7ba3642b-06ea-490e-b8c7-9c38a8e301e7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Borel-Bott-Weil theorem
Borel-Bott-Weil theorem
Let $G$ be a semisimple Lie group, and $\lambda$ be an integral weight for that group. $\lambda$ naturally defines a one-dimensional representation $C_{\lambda}$ of the Borel subgroup $B$ of $G$, by
simply pulling back the representation on the maximal torus $T=B/U$ where $U$ is the unipotent radical of $G$. Since we can think of the projection map $\pi\colon G\to G/B$ as a principle $B$-bundle,
to each $C_{\lambda}$, we get an associated fiber bundle $\mathcal{L}_{\lambda}$ on $G/B$, which is obviously a line bundle. Identifying $\mathcal{L}_{\lambda}$ with its sheaf of holomorphic sections
, we consider the sheaf cohomology groups $H^{i}(\mathcal{L}_{\lambda})$. Realizing $\mathfrak{g}$, the Lie algebra of $G$, as vector fields on $G/B$, we see that $\mathfrak{g}$acts on the sections
of $\mathcal{L}_{\lambda}$ over any open set, and so we get an action on cohomology groups. This integrates to an action of $G$, which on $H^{0}(\mathcal{L}_{\lambda})$ is simply the obvious action
of the group.
The Borel-Bott-Weil theorem states the following: if $(\lambda+\rho,\alpha)=0$ for any simple root $\alpha$ of $\mathfrak{g}$, then
for all $i$, where $\rho$ is half the sum of all the positive roots. Otherwise, let $w\in W$, the Weyl group of $G$, be the unique element such that $w(\lambda+\rho)$ is dominant (i.e. $(w(\lambda+\
rho),\alpha)>0$ for all simple roots $\alpha$). Then
$H^{{\ell(w)}}(\mathcal{L}_{\lambda})\cong V_{\lambda}$
where $V_{\lambda}$ is the unique irreducible representation of highest weight $\lambda$, and $H^{i}(\mathcal{L}_{\lambda})=0$ for all other $i$. In particular, if $\lambda$ is already dominant, then
$\Gamma(\mathcal{L}_{\lambda})\cong V_{\lambda}$, and the higher cohomology of $\mathcal{L}_{\lambda}$vanishes.
If $\lambda$ is dominant, than $\mathcal{L}_{\lambda}$ is generated by global sections, and thus determines a map
$m_{\lambda}\colon G/B\to\mathbb{P}\left(\Gamma(\mathcal{L}_{\lambda})\right).$
This map is an obvious one, which takes the coset of $B$ to the highest weight vector $v_{0}$ of $V_{\lambda}$. This can be extended by equivariance since $B$fixes $v_{0}$. This provides an alternate
description of $\mathcal{L}_{\lambda}$.
For example, consider $G=\mathrm{SL}_{2}\mathbb{C}$. $G/B$ is $\mathbb{C}P^{1}$, the Riemann sphere, and an integral weight is specified simply by an integer $n$, and $\rho=1$. The line bundle $\
mathcal{L}_{n}$ is simply $\mathcal{O}(n)$, whose sections are the homogeneous polynomials of degree $n$. This gives us in one stroke the representation theory of $\mathrm{SL}_{2}\mathbb{C}$: $\Gamma
(\mathcal{O}(1))$ is the standard representation, and $\Gamma(\mathcal{O}(n))$ is its $n$th symmetric power. We even have a unified decription of the action of the Lie algebra, derived from its
realization as vector fields on $\mathbb{C}P^{1}$: if $H,X,Y$ are the standard generators of $\mathfrak{sl}_{2}\mathbb{C}$, then
$\displaystyle H$ $\displaystyle=x\frac{d}{dx}-y\frac{d}{dy}$
$\displaystyle X$ $\displaystyle=x\frac{d}{dy}$
$\displaystyle Y$ $\displaystyle=y\frac{d}{dx}$
Mathematics Subject Classification
no label found
|
{"url":"http://planetmath.org/borelbottweiltheorem","timestamp":"2014-04-21T09:41:36Z","content_type":null,"content_length":"104372","record_id":"<urn:uuid:115e41f3-5acf-4972-9f89-114a52881755>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2nd order of differential equation.
March 13th 2011, 11:25 AM
2nd order of differential equation.
y'' -3y + 2y = e^2ix
Will this imaginary constant make some trouble for me?
I have tried this
Y = Ae2ix
and the derivative of that
Y' = 2Ae^2ix
Y''= 4Ae^2ix
I cant figure out how to get the full solution, because I already have the homogenous equation solved. what about this, what will be the constants?
March 13th 2011, 12:55 PM
mr fantastic
y'' -3y + 2y = e^2ix
Will this imaginary constant make some trouble for me?
I have tried this
Y = Ae2ix
and the derivative of that
Y' = 2Ae^2ix Mr F says: This is wrong. It should be 2iAe^2ix.
Y''= 4Ae^2ix Mr F says: This is wrong too. It should be -4Ae^2ix.
I cant figure out how to get the full solution, because I already have the homogenous equation solved. what about this, what will be the constants?
Substitute the correct derivatives into the DE and simplify:
-4A -6iA +2A = 1.
Solve for A.
|
{"url":"http://mathhelpforum.com/differential-equations/174458-2nd-order-differential-equation-print.html","timestamp":"2014-04-18T06:18:46Z","content_type":null,"content_length":"4744","record_id":"<urn:uuid:07082e09-bd12-4983-9451-94f488b257d3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ternary relation
ternary relation
A ternary relation (or triadic relation) is a relation in three variables (instead of the more common two of a binary relation).
The elementary theory of categories (in the strict sense) may be defined with a single ternary relation (and several axioms) in untyped predicate logic with equality; the relation holds of morphisms
$f$, $g$, and $h$ if and only if $f$ is the composite (in a fixed order) of $g$ and $h$.
Revised on November 16, 2009 09:51:16 by
Toby Bartels
|
{"url":"http://www.ncatlab.org/nlab/show/ternary+relation","timestamp":"2014-04-17T15:31:14Z","content_type":null,"content_length":"13026","record_id":"<urn:uuid:67aeb44d-f925-4e37-88dd-f754473458a6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
proving geometric theorems by vector method
I am learning vectors in which there is a section in which geometric theorems are proved with the help of vectors. However while solving problems I often face difficulty on how to proceed ,where to
use dot product, cross and etc.Is there any systematic manner on how to prove these ? Please help.
|
{"url":"http://www.physicsforums.com/showthread.php?p=1055048","timestamp":"2014-04-20T08:48:23Z","content_type":null,"content_length":"22432","record_id":"<urn:uuid:bb8f1dc9-cef2-4feb-a1b9-196d733bc660>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Growth mixture modeling and distal outcomes
Anonymous posted on Friday, July 01, 2005 - 9:59 am
I am running a growth mixture model and wanted to assess some distal outcomes, I put them in the usevar list like an example in the manual and got an estimate of the means in each class; however, I
would like to know if they differ from each other, is there a way to test this statistically?
Thank you!!
bmuthen posted on Saturday, July 02, 2005 - 6:00 pm
Run the model with and without holding the means equal across classes to get a chi-square test as 2 times the difference in log likelihood for the two models.
Anonymous posted on Friday, July 08, 2005 - 3:01 pm
I am trying to predict from 3 latent classes to 4 continuous outcomes. I found the previous posting about how to determine if outcomes differ from one another to be useful. However, since I have 3
different classes, how can I determine which classes differ? I can imagine holding the means of the outcomes equal across two classes while allowing them to be free in a third and examining
log-likelihood values. However, I am concerned that this will lead to multiple tests and will inflate my Type I error rate. Do you have any guidance on this?
Thank you!
Linda K. Muthen posted on Friday, July 08, 2005 - 5:53 pm
I wouldn't worry too much about this. You can take a conservative approach to avoid criticism by using a smaller p-value.
Anonymous posted on Wednesday, July 20, 2005 - 2:35 pm
Following on the previous posting. I have 4 latent classes. The largest group is the best performing group. If I hold the distal outcome to be equal for all 4 classes, it is not surprising that null
hypothesis will be rejected as the other 3 classes differ from the always good class. Do you have any suggestion on pair-wise testing the differences in distal outcome in the 3 smaller classes? By
the way, for binary distal outcome, where can I find the standard error for calculating confidence interval for odds ratio?
Thank you so much!!
bmuthen posted on Wednesday, July 20, 2005 - 2:49 pm
For the pair-wise testing, you can compute the quantities from the estimated model and Tech3 information using the approach of correlated t tests (Tech3 gives the covariance between estimates in
different classes). I think the OR confidence interval is presented in the output (at least in v 3.12).
lulu posted on Wednesday, November 02, 2005 - 1:22 pm
I have a question about the confidence intervals for the parameter estimates. Mplus gives symmetric confidence intervals by default. I am wondering is that based on normal approximation, i.e. the
by the way, I was using growth mixture modeling
Linda K. Muthen posted on Wednesday, November 02, 2005 - 6:36 pm
Jungeun Lee posted on Monday, July 16, 2007 - 6:27 pm
When we want to compare means across classes, are we supposed to examine log likelihood manually? Or, is there a mplus command which leads us to get a chi-square test? Thanks!
Bengt O. Muthen posted on Monday, July 16, 2007 - 6:34 pm
You can use Model Test to get a Wald chi-square test in a single run, or you can do 2 runs and compute chi-square as the 2*LL diff with and without means held equal across the classes.
Jungeun Lee posted on Wednesday, July 18, 2007 - 4:52 pm
Thanks for your response! I have a three class growth mixture model and wrote a syntax to get a Wald chi-square test. Mplus gave me error message. Could you let me know what I am missing here?
[tcdel16] (m1);
[tcdel16] (m2);
[tcdel16] (m3);
Bengt O. Muthen posted on Wednesday, July 18, 2007 - 5:50 pm
This should work. Please send your input, output, data, and license number to support@statmodel.com.
Bengt O. Muthen posted on Wednesday, July 18, 2007 - 5:52 pm
Actually, you should try deleting the last Model Test line
since that is redundant and causes a singularity problem.
Jungeun Lee posted on Monday, July 23, 2007 - 10:57 am
Thanks for the response! I deleted the line and got a Wald test. Down below is the output that I got from it. Can I say that all those means are different from each other based on the output? Since
the line 0=m2-m3 was deleted, I am not sure that the output reflects this relationship also or not... I think that I am not super clear about why 0=m2-m3 adds redundancy to the test.
Wald Test of Parameter Constraints
Value 260.266
Degrees of Freedom 2
P-Value 0.0000
Bengt O. Muthen posted on Monday, July 23, 2007 - 7:09 pm
Your statements
imply that m2=m1 and that m3=m1. Therefore m3=m2.
So instead of estimating 3 parameters you estimate 1, leading to 2 df. The Wald test rejects the hypothesis that these parameters are equal.
Loes Keijsers posted on Monday, October 01, 2007 - 7:21 am
Dear Linda and Bengt,
I have done a mixture analyses over 13 measurements of the development of delinquency. For each of these trajectories, I would like to examine whether they relate differentially to the development of
parenting . That is, I have 10 measurements of parenting and I would like to do a multivariate growth model including the intercept and slopes for parenting and the intercept and slopes for the same
measure of delinquency.
Is this possible? Using the delinquency measure in the mixture analyses, and then subsequently using the growth factors of this delinquency measure in a multigroup multivariate model, with groups
formed by latent trajectories? Are there maybe others who have used this approach before?
Thank you in advance for your help.
Loes K
Linda K. Muthen posted on Monday, October 01, 2007 - 10:35 am
Are the ten measures of parenting a repeated measure of the same variable over ten time points or ten different variables?
Loes Keijsers posted on Tuesday, October 02, 2007 - 2:44 pm
Dear Linda,
The ten measures of parenting are a repeated measure of the same variable.
Linda K. Muthen posted on Tuesday, October 02, 2007 - 6:58 pm
You can include a growth model the for parenting variable as part of the GMM. It is not a good idea to estimate a model in steps if it can be estimated simultaneously. So I would do a parallel
process growth mixture model.
Loes Keijsers posted on Wednesday, October 03, 2007 - 8:04 am
Thank you for your help!
Michelle posted on Wednesday, June 17, 2009 - 12:01 pm
Hi -
I am working on an LCGA model of healthy aging measured at 4 timepoints (4 waves of data across 20 years), with three latent classes and three known classes (based on age). I would like to predict
mortality as a distal catgeorical outcome; mortality is recorded at each wave, so essentially this is a time-varying distal outcome. Can I predict death at each wave, or do I need to collapse this
variable into a "mortality-at-any-time" variable? If I can use the time-varying measure, how would I write this in the commands? All the examples I have seen have been for a single distal outcome.
Finally, I am wondering how to instruct MPlus to handle the missing data for this model. Essentially, people become "missing" on the healthy aging measure once they have died - it seems to me this is
not MAR, and I am wondering how to instruct MPlus to handle this.
Michelle posted on Wednesday, June 17, 2009 - 1:48 pm
Addendum to previous post - the question about the missing data would be two-fold. We do have some people who are missing (as might be dealt with using MAR techniques) but then we also essentially
have a dichotomous outcome (physically healthy or not) that would be censored when someone dies. Is there a way to handle two types of missing-ness? Or should I just create a dataset where people are
removed after they die - i.e. they would have different numbers of observations of the outcome measure?
Bengt O. Muthen posted on Wednesday, June 17, 2009 - 3:01 pm
You are right that it is important to think carefully about missing data due to death. This is a big topic and there are many aproaches you can take using Mplus. I recently gave a talk on this but
haven't written it up yet. MAR may hold if previous observed values on the outcome is what predicts death. To be on the safe side, however, you want to explore "NMAR" techniques - this essentially
brings in dropout information into the model. For example, you can augment your growth mixture model with a dropout model by adding a survival model for the time of death. Survival can for example be
a function of trajectory class. By adding this dropout information, you are making the MAR assumption more plausible.
Michelle posted on Thursday, June 18, 2009 - 6:24 am
Thanks - this is helpful. I look forward to reading more about this - will the talk be posted here on the website?
Linda K. Muthen posted on Thursday, June 18, 2009 - 9:49 am
Yes but we don't know when yet.
Anne Chan posted on Wednesday, April 14, 2010 - 11:16 am
I am trying to look for the ways to conduct pairwise test to check if there are signficant differenes between each pair of classes generated from mixture growth modeling in terms of a continuous
distal outcome. May I ask:
(1) Do I have to do it by saving the class membership into the data file. And then use the class membership as a observed variable for further test?
(2) If I test the differences by running the model with and without holding the means equal across classes and check if the chi-square are significantly differences between models, does it mean I
have to run the test for several times so as to check the difference between each test?
(3) I understand there is an alternative way to do it (by MODEL TEST). However, I checked the user guide and could not find a example to follow. May I ask can you point me to an example? Also, is
that (checking if the each pair of classes are difference in term of a continuous distal variable) covered in the online video?
Thanks a lot!
Bengt O. Muthen posted on Wednesday, April 14, 2010 - 1:10 pm
You should use Model Test. See chapter 16 - pages 558-559 in the latest UG version. With 2 classes and one distal with means labelled as m1 and m2 in the Model Command, you say:
Model Test:
John Woo posted on Monday, December 06, 2010 - 4:04 pm
My GMM model has a categorical distal outcome (binary) included in usevariable--i.e., I am not using auxiliary (e).
My model also specifies the distal outcome as a function of sex, race, and SES.
When I run this model, I get a threshold value for my distal outcome for each class.
Question #1: Should I think of these values as the means of distal outcome when sex=0, race=0, and SES=0?
Question #2: If I wanted to get the means of distal outcome for different covariate values, can I still use the threshold of the distal outcome from my original model and use the following equation?
Note that I do not have separate "b0" (i.e., intercept coeff) in the above equation. I don't see it in the output.
Question #3: If I use Model Test to test the difference in means of distal outcome, can I use original model to infer the difference in means about the second model (i.e., the model with different
covariate values)?
Thank you in advance.
Bengt O. Muthen posted on Monday, December 06, 2010 - 5:59 pm
Q1: Yes. Except in the logit scale. The mean for the distal u is P(u=1).
Q3: If you want to test in the logit scale you simply work with different combinations of threshold, b1, b2, b3 (for different SES). Or you can test in the probability scale using your Q2 expression.
John Woo posted on Monday, December 06, 2010 - 7:58 pm
Dear Bengt,
Thank you.
One more question though.
I am using TYPE=IMPUTATION (using five imputed datasets), and I do not see the Wald test results in my output.
Is there a specific "TECH" I need to specify for output?
Thank you.
Linda K. Muthen posted on Tuesday, December 07, 2010 - 1:59 pm
You need to use MODEL TEST to obtain the Wald Test. See the user's guide for further information.
John Woo posted on Tuesday, December 07, 2010 - 3:37 pm
my question was whether the results for MODEL TEST are available when using TYPE=IMPUTATION. I know there are some functions (such as cprob) that are not available in the outputs for running imputed
When I ran my model yesterday using TYPE=IMPUTATION and MODEL TEST, I got clean results (i.e., no error messages), except I did not get any Wald test results.
Linda K. Muthen posted on Tuesday, December 07, 2010 - 4:15 pm
MODEL TEST is available in TYPE=IMPUTATION. I ran it and got results. Are you using Version 6.1? It comes under the heading of
Wald Test of Parameter Constraints
John Woo posted on Tuesday, December 07, 2010 - 6:28 pm
Ok, I see the result now.
I was expecting the Wald Test result for the set of pairwise tests (i.e., Ho: m1=m2, Ho: m1=m3, ... etc).
But, it seems that what I get is the test for the Ho: m1=m2=m3=...
To get the pairwise results, I guess I will just run the model several times each time using a different pair.
P.S. Instead of using Wald Test, could I use the difference in proportion test using the predicted probabilities and estimated class counts? That is,
t = (p1-p2)/((p1*(1-p1)/n1)+(p2*(1-p2))/n2))^0.5
where p1 = predicted prob for distal u for class 1
p2 = predicted prob for distal u for class 2
n1 = final class count for class 1
n2 = final class count for class 2
Bengt O. Muthen posted on Wednesday, December 08, 2010 - 10:25 am
Yes, you have to run Wald several times.
You can express any test function you want in Model Constraint.
Ramzi Nahhas posted on Tuesday, April 19, 2011 - 10:44 am
I am using LCA with covariates, many of which are categorical. I have created dummy variables for these covariates (n-1 dummy variables for an n-level covariate). Using MODEL TEST, I can get a
multiple df Wald test for each covariate. Is there any way to run more than one Wald test at the same time? (rather than having to change the MODEL TEST statement and re-run for each covariate)
Linda K. Muthen posted on Tuesday, April 19, 2011 - 12:10 pm
No, this is not currently possible.
Ramzi Nahhas posted on Tuesday, April 19, 2011 - 12:28 pm
Thanks for the quick reply. I would love it if the ability to specify more than one MODEL TEST became a feature in a future update. (1)
Also, (2) the ability to specify categorical covariates with LCA would be nice (i.e. the program would automatically create the dummy variables and also, if used in MODEL TEST, it would know to do a
multiple df test automatically).
In order of preference for future consideration, I'd want (1).
Andre Plamondon posted on Thursday, April 21, 2011 - 7:35 am
I have a question regarding the addition of covariates.
I have a GMM with 3 classes in which I S and Q are constrained at 0 but S varies in only two classes.
Is this correct? :
i s q| y1@0 y2@1 y3@2 y4@3;
c on x1 x2;
s on x1 x2;
s on x1 x2;
Bengt O. Muthen posted on Thursday, April 21, 2011 - 8:00 am
That looks right. Check the results and Tech1 to see that you get what you want.
K Frampton posted on Thursday, June 02, 2011 - 11:38 am
I am performing a wald test of parameter constraints with model test to identify differences in a distal variable across classes.
1. Can you please tell me what it means when the value = ********** (p = 0.00)? How can I find out what this value is?
2. Also,what does it mean when the value = infinity, and the p value= NaN?
Thank you!
Linda K. Muthen posted on Thursday, June 02, 2011 - 2:59 pm
The asterisks indicate that the value is too large to be printed in the space allocated for printing. Infinity means a very large number. NaN means not a number.
Please send your output and license number to support@statmodel.com so I can see why you are getting these messages.
K Frampton posted on Friday, June 03, 2011 - 11:19 am
Thanks Linda - I figured it out. I was simultaneously asking for tech11 (I kept it in from when I was identifying classes). When I deleted this, it worked fine.
Suresh Ramanathan posted on Thursday, October 27, 2011 - 5:51 pm
Dear Bengt/Linda,
I have 10 repeated measures on felt emotions to predict consumption, a continuous distal outcome, for two
groups of people. Specifically, I would like to test the hypothesis that the linear and quadratic trends are different for the two groups, and that these differences in trajectories predict
Following Bengt's recommendation to me in another thread, I used the examples 8.6 and 8.8 to create the GMM below:
a) is this is the right approach, given my research hypotheses?
b) how do I find out what is the effect of the linear and quadratic factors on the outcome?
DATA: FILE IS als.dat;
VARIABLE: NAMES ARE Subject Cond subno litedark leftone lefttwo ap1-ap10 ag1-ag10;
USEVARIABLES ARE leftone ap1-ap10;
CLASSES = cg (2) c(2);
KNOWNCLASS = cg (litedark = 1, litedark = 2);
ANALYSIS: TYPE = MIXTURE;
i s q| ap1@0 ap2@1 ap3@2 ap4@3 ap5@4 ap6@5 ap7@6 ap8@7 ap9@8 ap10@9;
c on cg;
OUTPUT: TECH1 TECH8 CINTERVAL;
Bengt O. Muthen posted on Thursday, October 27, 2011 - 6:21 pm
Your i, s, q means will vary across all 2 x 2 classes in this setup, but that is probably what you want. I assume your distal is "leftone" and its mean will vary over the 2 x 2 classes, so that is
the effect of growth on distal here (also including an effect of membership of your cg). I don't think you can disentangle the influence of linear and quadratic factors on the distal because they
interact. That's why it is better to let the 2 mixture classes of c influence the distal as is done here. So in sum, I think this probably gives what you might want.
Anna Wolf posted on Monday, April 08, 2013 - 5:30 am
Dear Drs,
I was hoping to get some assistance with comparing intercepts and slopes across classes.
I’ve run a latent class growth analysis with a dummy covariate (e.g. treatment vs control groups). There are three classes.
My understanding from previous posts is that I should use Wald chi-2 testing via Model Test. I’m just not exactly sure of what the syntax would look like.
I think I need to added the following to my syntax:
I've also added the labels: (p1), (p2), (p3) for the thee 'slope on treatment' commands for each class. Does this test the mean slope differences between the classes?
If so, how do I also go about comparing the intercepts across classes?
Thanks in advance for your help!
Linda K. Muthen posted on Monday, April 08, 2013 - 11:57 am
It looks like you have done this correctly. For the intercepts, you would label them and do the same test.
Anna Wolf posted on Wednesday, April 10, 2013 - 5:41 pm
Thanks Linda for your speedy reply!
So, just to clarify, the Wald Test of Parameter Constraints result for the above syntax would compare the slopes of both class 2 (p2) and class 3 (p3) with class 1 (p1). Is that correct?
Thus, a significant Wald result would mean that there is a significant difference between the slopes of both class 2 and 3 compared to class 1. Is that right?
Ultimately, I am wanting to compare the slopes of all three classes separately (e.g. class 1 (p1) with class 2 (p2), class 1 (p1) with class 3 (p3), and class 2 (p2) with class 3 (p3)). Is it still
correct to just add the syntax:
to compare class 1 with class 2, then repeat accordingly for the direct (pairwise) comparisons for the other two classes in separate analyses? e.g.
Linda K. Muthen posted on Thursday, April 11, 2013 - 8:58 am
The sytax above tests the equality of the coefficients jointly. A significant Wald test means there is a difference somewhere. It doesn't pinpoint where.
The syntax below tests each pair separately if there is only one MODEL TEST in the input.
Back to top
|
{"url":"http://www.statmodel.com/discussion/messages/13/726.html?1365695908","timestamp":"2014-04-17T16:37:18Z","content_type":null,"content_length":"86417","record_id":"<urn:uuid:35dc1085-695e-4659-acac-4900ee3ff09d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Suppose the probability that a U S resident has traveled to Canada is 0.18 to Mexico is 0.09 and to both countries is... - Homework Help - eNotes.com
Suppose the probability that a U S resident has traveled to Canada is 0.18 to Mexico is 0.09 and to both countries is 0.04. What is the probability that an American chosen at random has traveled
together Canada or Mexico?
Let a U S resident travelling to Canada be an event A and to Mexico event B.
Corresponding probabilities are P(A) and P(B) respectively.
The probability that a U S resident travelling to both countries is given by P(A `nn` B).
We have to find the probability that an American chosen at random has travelled together to Canada or Mexico which is given by P(A `uu ` B).
Addition theorem of probability for two events states that “If A and B are two events associated with a random experiment, then P(A `uu` B) =P(A) + P(B) - P(A `nn` B).”
Putting the values:
P(A `uu` B) = 0.18+0.09-0.04
= 0.23
Therefore, the probability that an American chosen at random has travelled together Canada or Mexico is 0.23.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes
|
{"url":"http://www.enotes.com/homework-help/suppose-probability-that-u-s-resident-has-traveled-441482","timestamp":"2014-04-17T19:55:45Z","content_type":null,"content_length":"25534","record_id":"<urn:uuid:94e18a8c-f540-4869-9cdf-2ae847aa10c3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adaptive surface data compression
Adaptive surface data compression
Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA 02115, USA; Telecommunications Institute, University of
Erlangen-Nuremberg, Cauerstraβe 7, 91058 Erlangen, Germany; Department of Oral and Maxillofacial Surgery, University of Erlangen-Nuremberg, Glückstraβe 11, 91054 Erlangen, Germany
Signal Processing 01/1997; DOI:10.1016/S0165-1684(97)00047-9
ABSTRACT Three-dimensional (3D) visualization techniques are becoming an important tool for medical applications. Computer-generated 3D reconstructions of the human skull are used to build
stereolithographic models, which can be used to simulate surgery or to create individual implants. Anatomy-based 3D models are used to simulate the physical behaviour of human organs. These 3D models
are usually displayed by a polygonal description of their surface, which requires hundreds of thousands of polygons. For interactive applications this large number of polygons is a major obstacle. We
have improved an adaptive compression algorithm that significantly reduces the number of triangles required to model complex objects without losing visible detail and have implemented it in our
surgery simulation system. We present this algorithm using human skull and skin data and describe the efficiency of this new approach.ZusammenfassungComputerbasierte dreidimensionale
Visualisierungstechniken haben im letzten Jahrzehnt Einzug in die Medizin gehalten. Aus den computergenerierten dreidimensionalen Rekonstruktionen des Gesichtsschädels werden unter anderem mittels
Stereolithographie reale Modelle erstellt, an denen geplante chirurgische Eingriffe simuliert werden können, oder aber die 3D-Rekonstruktionen dienen dazu, patientenangepaβte Implantate herzustellen.
Die Geometrie solch komplexer 3D Modelle wird im allgemeinen mit Hilfe hunderttausender einzelner, planarer Polygone beschrieben. Eine interaktive Darstellung dieser Modelle ist oftmals nicht mehr
möglich. In dieser Arbeit beschreiben wir ein erweitertes adaptives Verfahren zur signifikanten Reduzierung von Polygonoberflächen, ohne daβ damit ein Detailverlust in der Darstellung verbunden ist.
Dieses Reduzierungsverfahren wurde in ein Operationsplammgssystem integriert und umfassend verifiziert. An zwei medizinischen Datensätzen, der 3D Rekonstruktion der Hautoberfläche und des
Gesichtsschädels, wird die Leistungsfähigkeit dieses neuen Verfahrens aufgezeigt.RésuméLes techniques de visualisation tri-dimensionnelle sont en train de devenir un outil important pour les
applications médicales. Des reconstructions 3D du squelette humain générées par ordinateur sont utilisées pour construire des modèles stéréolithographiques qui peuvent être utilisés pour simuler une
opération chirurgicale ou pour créer des implants individuels. Des modèles tri-dimensionnels basés sur l'anatomie sont utilisés pour simuler le comportement physique d'organes humains. Ces modèles 3D
sont en général affichés à l'aide d'une description polygonale de leur surface, ce qui nécessite des centaines de milliers de polygones. Pour les applications interactives, ce grand nombre de
polygones est un obstacle majeur. Nous avons amélioré un algorithme de compression adaptatif qui réduit significativement le nombre de triangles requis pour modéliser des objets complexes sans perdre
de détails visibles, et nous l'avons implémenté dans notre système de simulation de chirurgie. Nous présentons cet algorithme en utilisant des données de squelette et de peau humains et décrivons
l'efficacité de cette nouvelle approche.
[show abstract] [hide abstract]
ABSTRACT: In three-dimensional accurate radiotherapy treatment planning system, one of the key steps in the whole planning is to reconstruct and visualize the surface model of tumor target and
organs at risk (OAR) from a serial of cross-sectioned contour points rapidly and accurately. This study designed a fast 3D-reconstruction and visualization pipeline comprising the following four
main steps: (1) pre-processing of contour points dataset; (2) extraction and simplification of Iso-surface; (3) linear transformation of surface model; (4) Smoothing the surface model. An open
source Visualization Toolkit (VTK) was applied to implement this method and a friendly user interface was developed on the Visual C++ development platform. Several clinic patients' CT datasets
were chosen for test data. The results show that this method could effectively avoid the “ladder effect” of standard marching cubes (MC) algorithm due to inconsistency between slice spacing and
image resolution. The reconstruction surface is simplified to speed up the rendering time. Furthermore, it has been successfully integrated into domestic Accurate/Advanced Radiotherapy Treatment
Planning System (ARTS) for clinical use.
Image and Signal Processing (CISP), 2010 3rd International Congress on; 11/2010
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
58 Downloads
Available from
Dec 24, 2012
|
{"url":"http://www.researchgate.net/publication/223837023_Adaptive_surface_data_compression","timestamp":"2014-04-19T20:17:16Z","content_type":null,"content_length":"220227","record_id":"<urn:uuid:47ae54b9-72b7-490b-b9ac-6f359ed19c01>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dirac delta function
Dirac delta function
The Dirac delta “function” $\delta(x)$, or distribution is not a true function because it is not uniquely defined for all values of the argument $x$. Similar to the Kronecker delta symbol, the
notation $\delta(x)$ stands for
$\delta(x)=0\;\text{for}\;xeq 0,\;\text{and}\;\int_{{-\infty}}^{\infty}\delta% (x)dx=1$
Notes: However, the limit of the normalized Gaussian function is still meaningless as a function, but some people still write such a limit as being equal to the Dirac distribution considered above in
the first paragraph.
An example of how the Dirac distribution arises in a physical, classical context is available on line.
Remarks: Distributions play important roles in Dirac’s formulation of quantum mechanics.
• 1 W. Rudin, Functional Analysis, McGraw-Hill Book Company, 1973.
• 2 L. Hörmander, The Analysis of Linear Partial Differential Operators I, (Distribution theory and Fourier Analysis), 2nd ed, Springer-Verlag, 1990.
• 3 Originally from The Data Analysis Briefbook (http://rkb.home.cern.ch/rkb/titleA.html)
DiracSequence, DiracMeasure, Distribution4
Mathematics Subject Classification
no label found
Attached Articles
|
{"url":"http://planetmath.org/DiracDeltaFunction","timestamp":"2014-04-17T12:41:08Z","content_type":null,"content_length":"51288","record_id":"<urn:uuid:bfd096a6-760e-4519-9d06-601e02d8ccb4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Yeadon, PA Math Tutor
Find a Yeadon, PA Math Tutor
...I have a proven track record of increasing students' ACT and SAT scores and improving their skills. My approach is tailored specifically to the student, so no two programs are alike. My
expertise allows me to quickly identify students' problem areas and most effectively address these in the shortest amount of time possible.
19 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have taught students from kindergarten to college age, and I build positive tutoring relationships with students of all ability and motivation levels. All of my students have seen grade
improvement within their first two weeks of tutoring, and all of my students have reviewed me positively. T...
38 Subjects: including calculus, composition (music), ear training, elementary (k-6th)
...I am a semi-professional photographer who has had art galleries for my High Dynamic Range (HDR) work. I have done product photography with models, long-exposure night time shots and event
photography and landscapes. Using a Canon DSLR, the main parameters to control for a shot are the aperture, shutter speed and ISO.
35 Subjects: including prealgebra, biology, calculus, elementary (k-6th)
Hello, my name is Amanze I graduated from Temple University with a B.S in Biology, and I offer tutoring in biology, chemistry and math. I currently work as a development scientist developmenting
analytical test for cancer detection. I have experiencing tutoring college level math and chemistry and...
6 Subjects: including trigonometry, geometry, prealgebra, algebra 1
...I have taught at the middle school and high school level (currently at a magnet high school teaching advanced, mentally gifted and AP classes) as well as tutored elementary grades and up. I
was nominated for the Lenfest teaching award and am a winner of the Yale Educator award. I am flexible in...
17 Subjects: including algebra 1, SAT math, English, prealgebra
Related Yeadon, PA Tutors
Yeadon, PA Accounting Tutors
Yeadon, PA ACT Tutors
Yeadon, PA Algebra Tutors
Yeadon, PA Algebra 2 Tutors
Yeadon, PA Calculus Tutors
Yeadon, PA Geometry Tutors
Yeadon, PA Math Tutors
Yeadon, PA Prealgebra Tutors
Yeadon, PA Precalculus Tutors
Yeadon, PA SAT Tutors
Yeadon, PA SAT Math Tutors
Yeadon, PA Science Tutors
Yeadon, PA Statistics Tutors
Yeadon, PA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/yeadon_pa_math_tutors.php","timestamp":"2014-04-16T21:59:36Z","content_type":null,"content_length":"23928","record_id":"<urn:uuid:70bb2444-5431-4f63-82d7-46f7e30f6a3c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
5. A student mistakenly measures the length of a radius to be 24 inches. The actual radius is 25 inches. What is the absolute error? A. 1 inch B. 2 inches C. 24 inches D. 25 inches 6. A student
mistakenly measures the length of a radius to be 24 inches. The actual radius is 25 inches. What is the relative error? A. 1 B. 0.01 C. 0.04 D. 0.5
• 9 months ago
• 9 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/51c9d448e4b011c79f623e2f","timestamp":"2014-04-17T21:35:05Z","content_type":null,"content_length":"35043","record_id":"<urn:uuid:4e3efe26-14a2-4379-9570-4ac1cbfc3ae0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A)In The Figure, A Frictionless Roller Coaster ... | Chegg.com
A)In the figure, a frictionless roller coaster of mass 1000 kg tops the first hill at height, h = 36 m with speed 8.65 m/s. How much work does its weight do on it from that point to point A?
B)How much work does its weight do on it from that point to point B?
C)How much work does its weight do on it from that point to point C?
D)If the gravitational potential energy of the coaster-Earth system is taken to be zero at point C, what is its value when the coaster is at point B?
E)If the gravitational potential energy of the coaster-Earth system is taken to be zero at point C, what is its value when the coaster is at point A?
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/figure-frictionless-roller-coaster-mass-1000-kg-tops-first-hill-height-h-36-m-speed-865-m--q1750159","timestamp":"2014-04-20T19:29:39Z","content_type":null,"content_length":"21677","record_id":"<urn:uuid:0aec05ac-79f3-4f49-b6f9-54046c36a5b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
diagram 2
The dependent variable of the regression equation, the Market variable for Bank Holding Company (BHC), i, during time, t, equals the summation of the following items:
• Alpha;
• Beta-one times BOPEC Change for BHC, i, during time, t minus one;
• Beta-two times Firm Size for BHC, i, during time, t minus one;
• Beta-three through Beta-eleven, each times a different Financial Accounting variable for BHC, i, during time, t minus one - a summation of each of these terms;
• Error term for BHC, i, during time, t.
|
{"url":"http://www.fdic.gov/bank/analytical/working/wp2003_04/diagram2.html","timestamp":"2014-04-17T07:08:03Z","content_type":null,"content_length":"1236","record_id":"<urn:uuid:55adabb4-ebd3-47b4-b372-919c29db99e2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Queens, NY Science Tutor
Find a Queens, NY Science Tutor
...Let me help you wrap your mind around it!I have BA with Departmental Honors in Physics from Vassar College. During two years of college I worked for the physics department in a position
similar to that of a TA, leading hour-long review sessions twice a week for introductory physics students and ...
1 Subject: physics
I am recently retired from the New York City school system after teaching science for over twenty years. I hold a doctorate in biology and conducted research in fat and alcohol metabolism. I can
tutor all areas of biology, as well as high school chemistry and algebra.
25 Subjects: including philosophy, zoology, genetics, ecology
I offer tutoring for standardized tests and academic coursework. I am a former Princeton Review teacher (both group lessons and tutoring) and have tutored privately as well. I received my BA from
the University of Pennsylvania and MA from Georgetown University.
20 Subjects: including ACT Science, English, reading, grammar
...I love disseminating information to students. It gives me pleasure. I look forward to teaching you math, science, and general knowledge.
21 Subjects: including organic chemistry, ACT Science, physics, physical science
I am a private tutor and academic consultant with 10+ years of experience in the cutthroat academic climate of New York City’s elite public and private schools. I provide general scholastic
tutoring, homework help, enrichment in individual subjects, standardized test prep, and core academic aptitud...
36 Subjects: including philosophy, ACT Science, English, writing
Related Queens, NY Tutors
Queens, NY Accounting Tutors
Queens, NY ACT Tutors
Queens, NY Algebra Tutors
Queens, NY Algebra 2 Tutors
Queens, NY Calculus Tutors
Queens, NY Geometry Tutors
Queens, NY Math Tutors
Queens, NY Prealgebra Tutors
Queens, NY Precalculus Tutors
Queens, NY SAT Tutors
Queens, NY SAT Math Tutors
Queens, NY Science Tutors
Queens, NY Statistics Tutors
Queens, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Queens_NY_Science_tutors.php","timestamp":"2014-04-17T07:14:54Z","content_type":null,"content_length":"23580","record_id":"<urn:uuid:d2670974-3314-4ead-9340-ffe3b1aff2d5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
multicomp.reverse {HH}
Force all comparisons in a "multicomp" object to have the same sign.
Force all comparisons in a "multicomp" object to have the same sign. If the contrast "A-B" has a negative estimate, reverse it show the contrast "B-A" with a positive estimate.
multicomp.reverse(y, estimate.sign = 1, ...)
"multicomp" object
If estimate.sign==1, reverse the negatives. If estimate.sign==-1, reverse the positives. Both the names of the comparisons and the numerical values are reversed. If estimate.sign==0, return the
other arguments not used.
The result is a "multicomp" object containing the same contrasts as the argument but with the sign of the contrasts changed as needed.
Heiberger, Richard M. and Holland, Burt (2004b). Statistical Analysis and Data Display: An Intermediate Course with Examples in S-Plus, R, and SAS. Springer Texts in Statistics. Springer. ISBN
Heiberger, R.~M. and Holland, B. (2006). "Mean--mean multiple comparison displays for families of linear contrasts." Journal of Computational and Graphical Statistics, 15:937--955.
S-Plus use the multicomp functions and R uses the multcomp package.
See Also
MMC, multicomp.order
## see example in multicomp.order
Documentation reproduced from package HH, version 3.0-3. License: GPL (>= 2)
|
{"url":"http://www.inside-r.org/packages/cran/HH/docs/multicomp.reverse","timestamp":"2014-04-19T05:25:09Z","content_type":null,"content_length":"14986","record_id":"<urn:uuid:ea25f61f-55f3-4f12-9009-d72c7efe0af5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Applied and Computational Mathematics Division
We provide leadership within NIST in the application of applied and computational mathematics to the solution to science and engineering problems arising in measurement science and related
applications. This is accomplished via a program of R&D in mathematical and computational techniques and tools, collaboration with NIST and external scientists, and the dissemination of reference
data and software.
The ITL Mathematical and Computational Sciences Division was renamed Applied and Computational Mathematics on June 20, 2010
Details of technical activities of the Division are provided in our Yearly Reports: 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, 2002, 2001, 2000.
You are invited to attend our Division Seminar Series.
|
{"url":"http://nist.gov/itl/math/index.cfm","timestamp":"2014-04-19T04:52:37Z","content_type":null,"content_length":"27017","record_id":"<urn:uuid:b8f2a8a8-eff6-43d8-bbc0-fc118ef3d983>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Draw Complex Line Bundles
up vote 18 down vote favorite
I am giving a presentation soon on the Classification of Complex Line Bundles and I would like to have some very "basic" visualizations to use as examples.
Background and Context
I am considering the Cech-cohomology of a principal $ \mathbb{C}^{*} $ bundle, where my sheaf $\underline{\mathbb{C}}_M^{*}$ is the sheaf of smooth $\mathbb{C}^{*}$ valued functions on the manifold
$M$. Using the exponential sequence of sheaves $$ 0 \to \mathbb{Z}(1) \to \underline{\mathbb{C}}_M \to \underline{\mathbb{C}}_M^{*} \to 0$$ we get an isomorphism (via properties of cohomology and the
connecting homomorphism) $$H^1(M, \underline{\mathbb{C}}_M^{*}) \cong H^2(M, \mathbb{Z}(1)) $$
It turns out that $H^1(M, \underline{\mathbb{C}}_M^{*}) $ is also isomorphic to the group of isomorphism classes of principal-$\mathbb{C}^{*}$ bundles over $M$. Since the principal- $\mathbb{C}^{*}$
bundles are in one-to-one correspondence with the complex line bundles, it should be evident how this all relates to my title.
My Questions
(1) Given the above information, and some knowledge of cohomology, there should be only a trivial principal- $\mathbb{C}^{*}$ bundle on the circle $S^1$. How can we see this visually?
*See my example/analogue below.
(2) Similarly, how can we visualize a non-trivial principal- $\mathbb{C}^{*}$ bundle on the standard 2-dimensional torus?
So consider a circle bundle on $S^1$, then we can consider a section of the bundle like so:
Now, given two sections on adjacent trivializations,
We can imagine deforming one section into another, to get our transition functions. Now, I can also believe that any such family of sections can be deformed into a global section, so again I want to
know why this necessarily doesn't work on the Hopf bundle via pictures.
Great question! The Hopf circle bundle $S^3 \to S^2$ is much more familiar, and there are many expositions on how to vizualize it, just google "Hopf fibration image" or something like that. –
Johannes Ebert Jun 25 '12 at 7:32
I appreciate the popularity of the Hopf circle bundle, but I would like something I can REALLY feel and see. So, even though I agree that mathematicians understand the Hopf Fibration quite
thoroughly, I'm not convinced it's easy to see :/ I'm still trying though! – cheyne Jun 25 '12 at 17:18
EDIT/UPDATE: I've decided the Hopf Circle Bundle is really my only chance at visualizing something close to a non-trivial $\mathbb{C}^{*}$-bundle! In particular, I want to see why I CAN define
transition functions for a family of local sections, but why I could never manipulate these sections to form a global one! – cheyne Jun 25 '12 at 21:23
add comment
2 Answers
active oldest votes
To see that the Hopf bundle is not trivial, one considers its restrictions to the subspaces $N,S\subset S^2 = \mathbf{CP}^1$, with $N = \mathbb{C}^*\cup\{\infty\}$ and $S = \mathbb{C}$
where it is trivial. One should be able to write down sections given this description. Then the transition function $\mathbb{C}^* \to S^1$ can be written down. You can think of this as
a map $\mathbb{C}^* \to \mathbb{C}^\ast$, and hence calculate the integral of it around $S^1 \subset \mathbb{C}^*$. This gives you the index of the transition function, which is
You can then calculate the index of any transition function for the Hopf bundle, using the fact that it will be a Cech cocycle equivalent to the one you've written down.
Lastly, you can calculate the index of the transition function for the trivial $S^1$-bundle on $S^2$, and find this is not equal to that for the Hopf bundle.
up vote 2
down vote In fancier language, because $\mathbb{C}^*$ is not simply connected, one finds that you cannot deform the Hopf bundle's transition function, which is not null-homotopic, to the
accepted transition function for the trivial bundle, which is null-homotopic.
In pictures, one has that the transition function for the Hopf bundle, restricted to the circle, loops once around the origin, but the trivial bundle's transition function is constant.
If people are happy with believing that continuously deforming the transition function gives equivalent bundles (one could motivate this by writing down Cech coboundaries on $\mathbb
{CP}^1$ that give the equivalence), and that transition functions which cannot be deformed to each other give inequivalent bundles (this is the important point), then this is pretty
much the best picture you'll get.
Thank you, David. I am trying to visualize your last paragraph, but I think I get what you're saying. I'm going to try and better understand your setup, then really believe the last
paragraph before I come back to considering this problem answered. Thanks again! – cheyne Jun 27 '12 at 16:58
David: I finally had time to follow what you said step by step. I found that the transition function between my two natural sections was "1/z" and I am just working through the
details of formally showing that this is not equivalent to the identity in this context. Thanks a lot!! – cheyne Jul 11 '12 at 20:44
add comment
To answer (1), the idea would be to try and construct a trivialization "by hand". For principal bundles, that means a section. Think of a circle as an interval with the endpoints identified.
Constructing the section on an interval is easy (path-lifting property of a bundle) but then you have to check that you can ensure the endpoints meet up. This follows from the fact that
you've chosen your bundle to be one for a connected group.
To answer (2) it might make sense to get the basic idea of why bundles are classified by cohomology classes. The idea is very simple and beautiful and comes from what's known as Whitney's
Embedding Theorem in manifold theory. This is the theorem that says that every manifold can be realized as a submanifold of euclidean space. Moreover, if the euclidean space has large-enough
dimension, your embedding is unique up to isotopy. The classification of vector bundles is highly analogous -- every vector bundle is the pull-back of a map to a classifying bundle. What this
up vote means is that up to a fibrewise isomorphism, you can think of all the fibers of your vector bundle as sitting as vector subspaces of some very large Euclidean space. Provided this Euclidean
9 down space is large enough, this gives a bijective correspondence between bundles over your space and homotopy-classes of maps from the space to the appropriate Grassmann manifold. In the case of
vote complex line bundles (what you're interested in), the corresponding Grassmann manifold is a $K(\mathbb Z,2)$-space, and such maps correspond to elements of $H^2$ of your space, this is a
classical obstruction-theoretic argument of Serre's.
The complex line bundles over a torus are just the trivial line bundle over a torus connect-sum with the line bundle over $S^2$ having euler class $1$ (a punctured $\mathbb CP^2$). I suppose
you could view this as the normal bundle of an embedded torus in a connect-sum of $\mathbb CP^2$'s.
Hello Ryan, I appreciate your input and your breadth of knowledge of bundles. However, I don't have any issues with proving any of the statements in my question. I literally want pictures
to go along with the statements. Now that I provided pictures for the circle bundle over $S^1$, I think (1) makes a bit more sense. I still can't draw appropriate pictures for (2) and use
the pictures to show that I can't "line up the endpoints" (please don't take that literally) in the case of the sections of the sphere into the Hopf Bundle. – cheyne Jun 27 '12 at 1:28
add comment
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry gt.geometric-topology cohomology sheaf-cohomology principal-bundles or ask your own question.
|
{"url":"http://mathoverflow.net/questions/100531/how-to-draw-complex-line-bundles","timestamp":"2014-04-18T18:16:25Z","content_type":null,"content_length":"67027","record_id":"<urn:uuid:cc5f8e0f-951c-4821-96bf-e8ddd3a4e2ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graphs of Two Variable Functions
Many types of economic problems require that we consider two variables at the same time. A typical example is the relation between price of a commodity and the demand or supply of that commodity. The
relation can be described algebraically by a two variable function or equation. But it is often useful to represent the relation in a two-dimensional graph. Such a graph is known as a scatter
diagram. This is a useful device because if there is a simple relationship between the two variables, it is readily observable once the data are plotted. As the proverb says, “a picture is worth a
thousand words.”
To represent a function graphically we use two perpendicular lines called axes. Their point of intersection is called the origin. This method of representation is called the Cartesian coordinate
system or plane. The numerical value of one variable is measured along the bottom or horizontal axis. The horizontal axis is called the x axis. The numerical value of the other variable is measured
along the side or vertical axis. The vertical axis is called the y axis. The four sections into which the graph is divided are called quadrants. Units of length are indicated along the two axes.
Note that there are four quadrants. If x is positive we move to the right. If x is negative we move to the left. If y is positive we move up. If y is negative we move down.
Coordinates allow us to look at the relationship between pairs of numbers and points in the plane. Coordinates give the location of a point, P, in relation to the origin.
We have an x coordinate and a y coordinate.
Let (x, y) represent the point whose coordinates are the numbers x and y. Note that the x coordinate comes first. The two coordinates tell us how far we must go first along the x axis and then along
the y axis until the point is reached.
Plotting coordinates: some examples
Example 1
Find the following points
point a (2, 4)
point b (4,-6)
point c (-2,-6)
point d (-6, 2)
Example 2
p = the price per dollar of a crate of vegetables
f(p) = the supply in thousands of crates
A store manager has the following data which relate the price of a crate of vegetables to the supply:
│price │4│6 │8 │10│14│
│supply │2│14│23│27│29│
What do the data tell us? The graph shows that as we might expect an increase in price is associated with an increase in supply.
|
{"url":"http://www.columbia.edu/itc/sipa/math/graphs.html","timestamp":"2014-04-19T08:03:26Z","content_type":null,"content_length":"4128","record_id":"<urn:uuid:d6afcf82-52ad-4295-9fba-067b2b1cfa63>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about undefined terms on educationrealist
Geometry: Starting Off
By educationrealist
The first day or two of geometry is always point line plane. We never really use it again. Geometry has mostly been subordinated to algebra in high school, as I’ve written before, and my geometry
class is best thought of as algebra applications with geometry. Or is it the other way round? Purists see geometry as the medium for introducing proofs, logic, and construction. To which I say pish
tosh. Most of them are never going to see those subjects again. “But if they don’t learn rigorous logic in geometry, they won’t be able to learn advanced math!” Yeah, that’s moronic nonsense. What is
“solve for x”, if not a proof?
But I love history, so I always start by telling them to put their pencils down and just listen as I explain the significance of Euclid’s Elements and the wonder of a book written 2300 years ago.
Three hundred years ago is older than our country. Euclid wrote Elements 300 years before the birth of Christ, so Christ’s contemporaries (the educated ones) thought of Euclid much the way we think
of Alexander Hamilton or George Washington. Take seven additional chunks of people looking back 300 years and here we are. A book written that long ago was “in print” over 1000 years before “print”
existed, and since then, is second only to the Bible in published editions—not just in the English language, which had to wait another 100 years after the Latin version was published, but in all
As to writing another book on geometry [to replace Euclid] the middle ages would have as soon thought of composing another New Testament.–Augustus de Morgan
Why? Because he* nailed it. For over 2000 years, his model met the world’s requirements, and when the world finally found limits to his model, it wasn’t because he was wrong.
Euclid was nagged by his “fifth postulate”, which is easier to sketch than describe:
That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on
which are the angles less than the two right angles.
If you’re not a mathematician—and I am not—you’re like, um, duh? What else is going to happen? The lines will meet up. But Euclid and other early mathematicians knew the fifth postulate wasn’t the
same as the other four, and that’s almost certainly why he established the first 28 theorems without reference to it. For the next couple millennia, mathematicians tried to prove the fifth postulate
using the other four, and failed. This collected history of effort around that single postulate ultimately led to the realization that there other, non-Euclidean geometries, many of which (if I
understand this properly) begin with the negation of the fifth postulate. This discovery rocked the world, robbed it of a truth previously assumed absolute, and ultimately contributed a bit to
Einstein’s theory of relativity.
And 2300 years ago, Euclid needed the fifth postulate to complete his model, furrowed his brow and said, “Yeah, hmm. Something’s not right about that one.”
I tell my students that I’m not a mathematician, and that they don’t need to be, either, in order to realize what a stunning achievement Elements is, and to realize the significance of math in our
world that thousands of years ago, mathematicians knew enough to be bothered by a postulate that seemed obvious but was yet somehow different from the others needed for the model. That they, my
students, are studying an incredibly old math, one that holds up for our ordinary requirements to this day, but also created the foundation for deeper, more complex models. That if they don’t like
math, don’t like geometry, to at least appreciate it from a historical standpoint.
I am probably fooling myself a little bit, but the kids always seem interested. Which is all I’m looking for. Just to show I’m not making this up, here are my board notes:
(Yes, my board work sucks. It’s something I build as I go through a lecture most of the time, a document in progress. I’ve started taking pictures of my boardwork to get a better sense of what I
said, what I emphasized, and what I could do to improve boardwork next time.)
Then I go onto undefined terms—not just the terms for geometry, but the meaning of undefined terms. Here, again, Euclid nails the building blocks for his model. (Geometry books give point, line, and
plane as the three undefined terms, but I also spend time on “congruence” and “between”.)
Then I show how the building blocks of the undefined terms allow us to define everything else in the Eucliden model. I usually use ray, segment, and angle just to give the the idea.
This year, I decided I wanted to do more with 3-dimensional graphing (xyz) and introduced it as part of this lecture. First, the students learned to represent three dimensional planes without a
coordinate system, and see for themselves what happened when two planes intersected. The kids had fun with that; here’s one of the best:
Then we went into more formal xyz graphing. I’m including more 3-d graphing this year to help prepare students for 3 variable systems next year, and also to give the students more variety in
visualizing images. Click on the board work below to see that I draw in the rectangular prism, which helps students grasp the difference between 2-axis graphing, in which any two points are a
diagonal in a rectangle, and 3-axis graphing, in which any two points are the vertices of a rectangular prism. I heard a lot of “ahas” as I went through this. Not sure what the next 3-d graphing
activity will be, but I think I’ve started with a good foundation.
So that was the first day, really. Then I went into the meat of unit one: angle types, angle pairs, perimeter and area formulas, and as always, using these relationships to set up equations and solve
with the ever loved algebra.
Here’s the first test. I think I caught all the glitches after I captured this. But I’m sure I missed something; I have a pathological tolerance for typos.
*I’m assuming it was just Euclid. More fun that way.
12 Comments | tags: 3d graphing, Euclid, fifth postulate, geometry, undefined terms | posted in math, pedagogy, teaching
|
{"url":"http://educationrealist.wordpress.com/tag/undefined-terms/","timestamp":"2014-04-17T16:04:22Z","content_type":null,"content_length":"38856","record_id":"<urn:uuid:676ac375-5838-4d81-a90e-9f3cac0d67ca>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A self-duality of strong starlikeness
In this note, we will show that a simply connected bounded domain $D$ is strongly starlike of order $\alpha<1$ if and only if so is $D^\vee,$ where $D^v$ is the analytic inversion of the exterior of
$D,$ namely, $D^v=\{w\in\mathbb{C}:\, 1/w\in\widehat{\mathbb{C}} \setminus\bar D\}.$ This fact neatly explains the relationship between some known properties of strongly starlike domains and provides
several new characterizations for those domains.
submission: January 28, 2000
revision: February 3, 2000
revision: June 1, 2001
revision: November 22, 2001
|
{"url":"http://www.cajpn.org/pp00/0003.html","timestamp":"2014-04-18T03:11:18Z","content_type":null,"content_length":"1424","record_id":"<urn:uuid:8c4f1fbd-7ca2-4d32-94f2-b098a2d36ff7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Willow Springs, IL
Chicago, IL 60618
...I have scored above the 90th percentile on the ACT, SAT, and GRE, so I do know the material thoroughly myself, but the method of taking the test can make all the difference for a student looking
for a few more points. I also tutored
at that location, working...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
|
{"url":"http://www.wyzant.com/geo_Willow_Springs_IL_Math_tutors.aspx?d=20&pagesize=5&pagenum=5","timestamp":"2014-04-16T20:31:55Z","content_type":null,"content_length":"59159","record_id":"<urn:uuid:aa3e5af5-cc79-45bf-a9e6-fec1a9e995e4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/zordoloom/medals","timestamp":"2014-04-19T22:55:05Z","content_type":null,"content_length":"102210","record_id":"<urn:uuid:462cf53d-f444-41df-a8b0-be10e56fadd4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Galton & simulation
September 27, 2010
By xi'an
Stephen Stigler has written a paper in the Journal of the Royal Statistical Society Series A on Francis Galton’s analysis of (his cousin) Charles Darwin’ Origin of Species, leading to nothing less
than Bayesian analysis and accept-reject algorithms!
“On September 10th, 1885, Francis Galton ushered in a new era of Statistical Enlightenment with an address to the British Association for the Advancement of Science in Aberdeen. In the process of
solving a puzzle that had lain dormant in Darwin’s Origin of Species, Galton introduced multivariate analysis and paved the way towards modern Bayesian statistics. The background to this work is
recounted, including the recognition of a failed attempt by Galton in 1877 as providing the first use of a rejection sampling algorithm for the simulation of a posterior distribution, and the
first appearance of a proper Bayesian analysis for the normal distribution.”
The point of interest is that Galton proposes through his (multi-stage) quincunx apparatus a way to simulate from the posterior of a normal mean (here is an R link to the original quincunx). This
quincunx has a vertical screen at the second level that acts as a way to physically incorporate the likelihood (it also translates the fact that the likelihood is in another “orthogonal” space,
compared with the prior!):
“Take another look at Galton’s discarded 1877 model for natural selection (Fig. 6). It is nothing less that a workable simulation algorithm for taking a normal prior (the top level) and a normal
likelihood (the natural selection vertical screen) and finding a normal posterior (the lower level, including the rescaling as a probability density with the thin front compartment of uniform
Besides a simulation machinery (steampunk Monte Carlo?!), it also offers the enormous appeal of proposing the derivation of the normal-normal posterior for the very first time:
“Galton was not thinking in explicit Bayesian terms, of course, but mathematically he has posterior $mathcal{N}(0,C_2)proptomathcal{N}(0,A_2)times f(x=0|y)$. This may be the earliest appearance
of this calculation; the now standard derivation of a posterior distribution in a normal setting with a proper normal prior. Galton gave the general version of this result as part of his 1885
development, but the 1877 version can be seen as an algorithm employing rejection sampling that could be used for the generation of values from a posterior distribution. If we replace $f(x)$
above by the density $mathcal{N}(a,B_2)$, his algorithm would generate the posterior distribution of Y given X=a, namely $mathcal{N}(aC_2/B_2, C_2)$. The assumption of normality is of course
needed for the particular formulae here, but as an algorithm the normality is not essential; posterior values for any prior and any location parameter likelihood could in principle be generated
by extending this algorithm.”
This historical entry is furthermore interesting at another (anecdotal) level in that it is reminding me of a visit I made as a teenager to the Birmingham museum of Sciences where I saw a quincunx
for the first time and got fairly intrigued by the stable law exhibited therein… (In a completely unrelated manner, let me point out the excellent pastiche of Dickensian dark novels called The
Quincunx by Charles Palliser.)
Filed under:
accept-reject algorithm
Bayesian statistics
Charles Darwin
Charles Palliser
Francis Galton
The Quincunx
for the author, please follow the link and comment on his blog:
Xi'an's Og » R
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or
|
{"url":"http://www.r-bloggers.com/galton-simulation/","timestamp":"2014-04-16T10:30:37Z","content_type":null,"content_length":"45341","record_id":"<urn:uuid:849caf03-9753-4045-b265-3092feb05b29>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NBOS Online Exchange
Re: [nbos] [AS] Travel Times"Walter M. Scott III" Sat Aug 29th, 2009
>...I am trying to figure out some travel times at slower than light.
>...Anyone know of any easy way to compute these times?
>...For example, a sleeper ship that is going from earth to 61 Cygni.
Assuming acceleration of 3/4 Gs, I was just trying to figure out how
long it would take to travel.
-Doug Jessee
This is a bit hard to write in plain text, but
Where d=distance, a= acceleration, t= time **2 means squared
So, solve for time:
t=root(2d/a) where root means the square root
Now, you are going to accelerate for 1/2 the distance, then decelerate
for the second half, so you use half the distance, then multiply x 2 for
the total time. Watch your units of measure! BUT, this does not take
relativistic effects into account.
Nbossoftware mailing list
|
{"url":"http://www.nbos.com/nox/index.php?action=5000&msgid=2174","timestamp":"2014-04-17T12:29:42Z","content_type":null,"content_length":"3981","record_id":"<urn:uuid:57a0e9e7-973f-4312-825e-00894431ea4e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inversion in mergesort
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Jul 2013
Rep Power
I am trying to implement the number of inversion in an array with entries {2,4,1,3,5}
On paper, this array will yield 3 inversions (2,1) (4,1) (4,3)
but with my code, it yields 5. where am I getting it wrong?
int mergeSort(int arr[], int array_size)
int *temp = (int *)malloc(sizeof(int)*array_size);
return mergesort(arr, temp, 0, array_size - 1);
int mergesort(int arr[], int temp[], int left, int right)
int mid;
int inv_count = 0;
if (right > left)
mid = (right + left)/2;
inv_count = mergesort(arr, temp, left, mid);
inv_count += mergesort(arr, temp, mid+1, right);
inv_count += merge(arr, temp, left, mid+1, right);
return inv_count;
int merge(int arr[], int temp[], int left, int mid, int right)
int i, j, k;
int inv_count = 0;
i = left; /* i is index for left subarray*/
j = mid; /* i is index for right subarray*/
k = left; /* i is index for resultant merged subarray*/
while ((i <= mid - 1) && (j <= right))
if (arr[i] <= arr[j])
temp[k++] = arr[i++];
temp[k++] = arr[j++];
inv_count = inv_count + (mid - i);
/* Copy the remaining elements of left subarray
(if there are any) to temp*/
while (i <= mid - 1)
temp[k++] = arr[i++];
/* Copy the remaining elements of right subarray
(if there are any) to temp*/
while (j <= right)
temp[k++] = arr[j++];
/*Copy back the merged elements to original array*/
for (i=left; i <= right; i++)
arr[i] = temp[i];
return inv_count;
Your program works as you expect.
#if 0
-*- mode: compilation; default-directory: "/tmp/" -*-
Compilation started at Tue Jul 2 19:34:39
a=./c && make -k $a && $a
cc -Wall -g c.c -o c
INVERSIONS: 3
Compilation finished at Tue Jul 2 19:34:39
#define DIM(A) (sizeof((A))/sizeof(*(A)))
int merge(int arr[], int temp[], int left, int mid, int right) {
int i, j, k;
int inv_count = 0;
i = left; /* i is index for left subarray*/
j = mid; /* i is index for right subarray*/
k = left; /* i is index for resultant merged subarray*/
while ((i <= mid - 1) && (j <= right))
if (arr[i] <= arr[j])
temp[k++] = arr[i++];
else {
temp[k++] = arr[j++];
inv_count = inv_count + (mid - i);
/* Copy the remaining elements of left subarray (if there are any) to temp*/
while (i <= mid - 1)
temp[k++] = arr[i++];
/* Copy the remaining elements of right subarray (if there are any) to temp*/
while (j <= right)
temp[k++] = arr[j++];
/*Copy back the merged elements to original array*/
for (i=left; i <= right; i++)
arr[i] = temp[i];
return inv_count;
int mergesort(int arr[], int temp[], int left, int right) {
int mid;
int inv_count = 0;
if (right > left) {
mid = (right + left)/2;
inv_count = mergesort(arr, temp, left, mid);
inv_count += mergesort(arr, temp, mid+1, right);
inv_count += merge(arr, temp, left, mid+1, right);
return inv_count;
int mergeSort(int arr[], int array_size) {
int *temp = (int *)malloc(sizeof(int)*array_size);
return mergesort(arr, temp, 0, array_size - 1);
void display(int*a,int n) {
int i;
for (i = 0; i < n; ++i)
int a[] = {2,4,1,3,5};
int main() {
display(a, DIM(a));
printf("INVERSIONS: %d\n",mergeSort(a,DIM(a)));
display(a, DIM(a));
return 0;
[code]Code tags[/code] are essential for python code and Makefiles!
How about if you test it over this set of .txt file? What is the number of inversion that you got? Here is a link to the .txt file: http://txtup.net/HaGYX. I got an overflow and I tried using
long long on the inv_counter but that didn't solve the problem either..
The largest value in that file is 99973, well within the range of long or even of a 32-bit int.
I took your program completely as-is, compiled it and got no errors or warnings, and ran it against your own data file. this is what I got:
C:TEST\help_desk>gcc -Wall mergesort.c
INVERSIONS: 3
Why are you the only one who is always having problems with the exact-same code and data that we can use without a problem?
What compiler and OS are you using? I just used MinGW gcc version 2.95.3-6 (mingw special) under WinXP Pro.
And, no, we cannot read your mind for that information.
Last edited by dwise1_aol; July 3rd, 2013 at 10:22 AM.
Why are you the only one who is always having problems with the exact-same code and data that we can use without a problem?
A pertinent question. It was I who was dumb enough to complete the program. This may be part of the answer.
[code]Code tags[/code] are essential for python code and Makefiles!
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Jul 2013
Rep Power
|
{"url":"http://forums.devshed.com/programming-42/inversion-mergesort-947904.html","timestamp":"2014-04-17T16:44:30Z","content_type":null,"content_length":"70639","record_id":"<urn:uuid:41aefe50-72bc-414c-a661-af441423cc5b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wildomar Geometry Tutor
Find a Wildomar Geometry Tutor
...Just this past year I tutored groups of students in SAT preparation, both in math and verbal, and a few years ago I tutored English for an organization in San Diego. Altogether, I've been
tutoring for standardized tests for 30 years and know the patterns and concepts that are tested quite thoroughly. I am confident that I can get the material across in simplified, understandable,
29 Subjects: including geometry, Spanish, reading, SAT math
...I am currently the President of a local Toastmasters Club in Menifee, CA. I have taught Drama at both the middle and high school level. I have been speaking and teaching to students for over 20
34 Subjects: including geometry, English, reading, accounting
...I am currently a substitute for Murrieta Valley USD, which allows me to tutor students in need of help. I have 2 school aged children so I usually tutor out of my home. I also like to have
access to all my teaching resources.
14 Subjects: including geometry, calculus, NCLEX, Praxis
...I have an assistant teachers certificate and I have much experience prior to this website. I have worked in multiple classrooms of varies ages. I have worked with children with a range of
disabilities that include ADHD, ADD, Autism, Opits syndrome, and processing disorders.
26 Subjects: including geometry, English, chemistry, biology
I have a BS in Mathematics with an option in Education and I am currently enrolled in both the teaching credential program and the Master's of Math Education at California State Long Beach. I have
been tutoring for over 15 years in all levels of math. I have been commended for my ability to make math more simple and relatable to the student.
18 Subjects: including geometry, chemistry, physics, calculus
|
{"url":"http://www.purplemath.com/Wildomar_Geometry_tutors.php","timestamp":"2014-04-18T08:23:36Z","content_type":null,"content_length":"23902","record_id":"<urn:uuid:e650967e-41aa-4a85-a566-25d300a00870>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
differential enterprises
What's New
In the Monte Carlo tab, add the probability density function dynamic plot which updates as you drag the cursor beams to select probability level and days forward. This is the same data from the monte
carlo results as the cumulative distribution function (top yellow background graph), but in derivative form, so the user can get a better idea of the probabilities involved and the shape of the tails
of the graph (the extremes).
The noisiness (jitter) of this new graph is due to the approximate and random nature of the monte carlo procedures. We don't see this jitter on the top cumulative probability graph because the
integration operation which creates this top graph is essentially a filtering operation.
Note: the horizontal axis of this new probability graph is relative probability, not time. The vertical axis is price, as it is for the other curves.
The corresponding price/probability selection from the top graph is noted on this new bell-shaped curve by a small square dot. This dot is the only intersection that matters for this new graph; other
intersections of the new graph (e.g. with the blue actual price data trace or any of the other curves) are meaningless, and are merely an artifact of overlaying 2 types of graphs (price=f(time) and
price=f(probability)) on one set of axes.
A light gray horizontal line indicator is also drawn on the bottom graph to better show the price level that the user has selected on the top graph.
Minor updates:
Change Theory tab name to Theory/Help. Add button under Theory/Help tab for a Calibration case study / tutorial.
App Description
Stock price risk analyzer app for the common man.
Estimates future price distribution using random walk theory.
Background discussion: E. Fama article on early random walk studies from the 1960's:
New model calibration tutorial:
Example use case & training guide for studying "AAPL to $320" can be found at:
The app uses prior data from the stock in question for volatility estimates.
User can control how far back in time to use historical data to capture only the current "epoch" of a company or of the market as a whole if desired.
Built-in backtesting, verification, and model tuning tools.
-- Details --
This app models daily stock returns as a stable stochastic process and estimates a future price distribution by Monte Carlo re-sampling from an "empirical distribution" of a user-specified subset of
prior (known) daily returns.
Be sure to press the Run Monte button on the Monte Carlo tab after changing settings or downloading a new data set. The MC is not run automatically after each change, because it can be a bit time
consuming if you want the computation done for many days forward.
This app downloads historical data from Yahoo Finance as base data to resample. Prices are converted to daily returns [P(t)/P(t-1)] before resampling. The user can choose how far back to resample. By
estimating a probability distribution of future prices at the user-specified investment horizon in this manner, we can give risk-of-loss estimates in thumb-rule fashion, to a first approximation.
Reports out estimated price and %loss estimates at the commonly used levels of 1st percentile and 5th percentile (1% and 5% risk). Also reports out median (50th percentile) price estimates at the
given number of days forward. Calculations may be performed on Yahoo daily Closing or Adjusted daily Closing price data. An artificial shock filter is provided, which can be used to reject the
resampling of prior returns that are artificially large (due to splits or other artificial re-valuations that do not affect the underlying value of the asset). Theory of operation is described in
detail under the Theory tab.
The stochastic model may be tuned or calibrated only by adjusting the maximum number of days backwards to sample. One may want to tune the model differently for a different number of days forward
Version 2.0:
New Stochastic Model Validation features:
On the Monte Carlo tab, you can withhold any number of recent days from the model and then plot the results of the stochastic risk forecast as lower-bound envelopes at 1% and %5 estimated probability
(risk) levels.
Validate tab:
This allows you to perform an exhaustive validation on your model by withholding several points, computing the model, comparing the forward prediction of the model versus the actual reserved data,
and repeating this in increasing time sequence for all withheld points.
New cursor beam on new plots:
A vertical "Cursor Beam" is provided that you can drag across the new plots in the Monte Carlo tab and the Validate tab to show the plotted values from several curves at once, with the values
color-coded to the curves.
Version 3.0:
Show the full price probability plot linked to the days-forward setting of the Monte Carlo graph. This is a slice thru the probability surface generated by the Monte Carlo procedure.
Version 3.2:
Allow an optional display of 300 of the many monte carlo random walk traces as an overlay on the monte carlo tab's graphs.
The app provider makes no claims as to the suitability of this app for any purpose whatsoever, and the user should consult an investment advisor before making investment decisions.
App Screen Shots
(click to enlarge)
App Changes
September 28, 2013 Price Decrease: $0.99 -> FREE!
June 08, 2013 Price Decrease: $0.99 -> FREE!
May 05, 2013 Price Decrease: $0.99 -> FREE!
December 31, 2010 Initial Release
Other Apps From differential enterprises
|
{"url":"http://www.148apps.com/app/412346415","timestamp":"2014-04-19T12:16:02Z","content_type":null,"content_length":"21707","record_id":"<urn:uuid:73d9a585-241b-4c02-9cea-2eaf0f0b99f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Open University
Engineers who wish to update their skills or learn new technologies without pursuing a specific degree may enroll in the School of Engineering’s Open University program.
Open University allows the students to:
• Examine a field of study before enrolling in a degree program
• Enhance your professional training
• Attend on a regular or noncontinuous basis, depending on your own needs
• Get a head start by enrolling in classes while waiting for admission/transfer to a degree program
Transfer to a Degree Program:
If you would like to apply to a degree program later on, you will need to follow the same procedures required of the degree-seeking applicants, except that you will not need to pay the application
fee again. The general GRE test requirement for admission to the master's degree program can be waived if the following two conditions are satisfied, effective winter 2008:
• The student has completed a set of required courses in the department to which they are applying.
• The student has achieved a GPA of 3.5 or above in these courses.
PLEASE NOTE: Official transcripts will still be required as part of the admissions process.
In general, up to 16 units taken in the Open University program can be transferred into the M.S. program. However, students who wish to do so should make sure that these courses conform to the
departmental requirements in their chosen area of specialization.
Required Courses for Waiving the GRE, by department:
• Applied Mathematics
Any set of AMTH courses for a total of 8 units are acceptable.
• Civil Engineering
A minimum of 12 units of course work in either of the two option tracks detailed below; all course work must be completed in one of the two specified optional tracks:
• Structural Engineering
CENG 205, CENG 206, CENG 222, CENG 233, CENG 234, CENG 236, CENG 237, CENG 239, CENG 246
General Civil Engineering
CENG 218, CENG 219, CENG 237, CENG 242, CENG 249, CENG 250, CENG 251, CENG 260, CENG 269
• Computer Engineering
Computer engineering requires three specific 4 unit courses for each of its degrees:
• M.S. Computer Engineering Degree (12 units)
1. COEN 210: Computer Architecture (4 units)
2. COEN 279:(Cross-listed as AMTH 377) Design and Analysis of Algorithms (4 units)
3. COEN 283: Operating Systems (4 units)
M.S. Software Engineering Degree (12 units)**
1. COEN 260: Truth, Deduction and Computation (4 units)
2. COEN 275: OO Analysis, Design and Programming (4 units) or COEN 359: Design Patterns (4 units)
3. COEN 285: Software Engineering (4 units)
**The above only waives the GRE requirement and not other requirements of admission. A student applying to the MSSE program who does not hold a B.S in a computing-related field is required to
take the GRE subject test. That requirement cannot be waived by the taking the above three courses and earning a GPA>=3.5
• Electrical Engineering
Four courses must be chosen from the following set (for a total of 8 units):
1. Electromagnetic field theory (ELEN 201 or 202)
2. Control systems (ELEN 232, 236, or 333)
3. Electronics (ELEN 252, 261, 264, or 270)
4. Networks (ELEN 211)
5. Digital systems (ELEN 500 or 603)
• Engineering Management
The completion of 2 Engineering Management courses and 2 technical courses (total 8 units) with a minimum GPA of 3.5 is required for GRE waiver consideration; however, the final decision will be
made by the department chair.
• Mechanical Engineering
The admission requirement for the transfer to the Master's Program is a total of 8 or more units with a GPA of 3.5 or above from the courses and course sequences listed under each topic area.
Most of the offerings in ME are 2-unit courses. All courses considered must be in one topic area, and sequences as listed must be completed for the units to count towards the total. MECH 202 must
be one of the courses applied to computing the GPA.
• Dynamics & Controls
□ MECH 202, Mathematical Methods in Mechanical Engineering
□ MECH 214-215, Advanced Dynamics I and II
□ MECH 217, Introduction to Control I
□ MECH 218, Guidance and Control I
□ MECH 305, Advanced Vibrations I
□ MECH 323-324, Modern Control Systems I and II
□ MECH 202, Mathematical Methods in Mechanical Engineering
□ MECH 256, Introduction to Biomaterials
□ MECH 281, Fracture Mechanics and Fatigue
□ MECH 330, Atomic Arrangements, Defects, and Mechanical Behavior
□ MECH 331, Phase Equilibria and Transformations
□ MECH 332, Electronic Structure and Properties
□ MECH 333, Experiments in Materials Science
□ MECH 345, Modern Instrumentation and Experimentation
□ MECH 202, Mathematical Methods in Mechanical Engineering
□ CENG 205-206-207, Finite Element Methods I, II, and III
□ MECH 275, Design for Competitiveness
□ MECH 285, Computer-Aided Design of Mechanisms
□ MECH 325-326, Computational Geometry for Computer-Aided Design and Manufacture I and II
□ MECH 415, Optimization in Mechanical Design
□ MECH 202, Mathematical Methods in Mechanical Engineering
□ MECH 207-208-209, Advanced Mechatronics I, II, and III
□ MECH 218-219, Guidance and Control I and II
□ MECH 315-316, Digital Control I and II
□ MECH 323-324, Modern Control System Design I and II
□ MECH 337-338, Robotics I and II
□ MECH 202, Mathematical Methods in Mechanical Engineering
□ MECH 225, Gas Dynamics I
□ MECH 228, Equilibrium Thermodynamics
□ MECH 238, Convective Heat and Mass Transfer I
□ MECH 266, Fundamentals of Fluid Dynamics
□ MECH 270, Viscous Flow I
|
{"url":"http://www.scu.edu/engineering/graduate/programs-2013/open-university.cfm","timestamp":"2014-04-19T17:41:16Z","content_type":null,"content_length":"49503","record_id":"<urn:uuid:a97bd93a-abca-48af-9118-ec60679b4875>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Marysville, WA SAT Math Tutor
Find a Marysville, WA SAT Math Tutor
...Despite the name, the Science portion of the ACT Science test does NOT actually contain Science. I am an expert at teaching techniques for managing this portion of the ACT and routinely see
huge score improvements. I have a BA is Education and was certified in the state of Wisconsin to teach preK-12th grade.
36 Subjects: including SAT math, English, reading, writing
...I am comfortable teaching note-taking skills, organization techniques, and test preparation skills. I can help students plan their time well, organize their backpack and binders, or can help
them stay on task and focused while working. I have tutored the ISEE and SSAT for about nine years, at each of the different levels.
32 Subjects: including SAT math, English, reading, geometry
...I have also worked in a couple of assembly languages. I am open to teach any language that also has value in the real world. I have worked as an IT industry professional for over 30 years.
43 Subjects: including SAT math, chemistry, physics, calculus
...These include: chemistry, math, algebra, biology, statistics, finance. I have also needed tutoring myself in subjects that I was not very good at. In tutoring others I always take the student
where they are.
17 Subjects: including SAT math, chemistry, algebra 2, algebra 1
...The most important part of the SAT writing is writing a great essay. I have the template that the SAT readers are looking for so my students always score high on the essay. I also have
strategies for the other 2 writing sections including vocabulary.
5 Subjects: including SAT math, ASVAB, SAT reading, SAT writing
Related Marysville, WA Tutors
Marysville, WA Accounting Tutors
Marysville, WA ACT Tutors
Marysville, WA Algebra Tutors
Marysville, WA Algebra 2 Tutors
Marysville, WA Calculus Tutors
Marysville, WA Geometry Tutors
Marysville, WA Math Tutors
Marysville, WA Prealgebra Tutors
Marysville, WA Precalculus Tutors
Marysville, WA SAT Tutors
Marysville, WA SAT Math Tutors
Marysville, WA Science Tutors
Marysville, WA Statistics Tutors
Marysville, WA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Marysville_WA_SAT_math_tutors.php","timestamp":"2014-04-16T13:37:41Z","content_type":null,"content_length":"23922","record_id":"<urn:uuid:6c13dacc-5c60-40ed-9abd-5a0bd68f0ed6>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Klüppelberg, Claudia and Severin, M. (2001): Prediction of outstanding insurance claims. Collaborative Research Center 386, Discussion Paper 258
Metadaten exportieren
Autor(en) recherchieren
Building reserves for outstanding liabilities is an important issue in the financial statement of any insurance company. In this paper we present a new model for delay in claim settlement and to
predict IBNR (incurred but not reported) claims. The modelling is based on a data set of a portfolio of car liability data, which describes the claim settlement of a car insurance portfolio. The data
consists of about 5000 realisations of claims, all of which incurred in 1985 and were followed until the end of 1993. In our model, the total claim amount process (S(t))_{t>0} is described by a
Poisson shot noise model, i.e. S(t) = ∑_{n=1}^{N(t)} X_n(t-T_n), t≥0 where X_1(.), X_2(.),... are i.i.d. copies of the claim settlement process X(.) and the occurence times (T_i)_{i ∈ N} of the
consecutive claims are random variables such that N(t) = #{n ∈ N; T_n ≤ t}, t≥0, is a Poisson process, which is assumed to be independent of X(.). The observed times of occurrences of claims are used
to specify and to estimate the intensity measure of the Poisson process N(.). Motivated by results of an exploratory data analysis of the portolio under consideration, a hidden Markov model for X(.)
is developed. This model is fitted to the data set, parameters are estimated by using an EM algorithm, and prediction of outstanding liabilities is done and compared with the real world outcome.
|
{"url":"http://epub.ub.uni-muenchen.de/1638/","timestamp":"2014-04-20T10:51:42Z","content_type":null,"content_length":"27475","record_id":"<urn:uuid:d92d780d-5b34-4063-b099-d2c9b1fb4cc3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cluster Risk Parity back-test
March 4, 2013
By systematicinvestor
In the Cluster Portfolio Allocation post, I have outlined the 3 steps to construct Cluster Risk Parity portfolio. At each rebalancing period:
• Create Clusters
• Allocate funds within each Cluster using Risk Parity
• Allocate funds across all Clusters using Risk Parity
I created a helper function distribute.weights() function in strategy.r at github to automate these steps. It has 2 parameters:
• Function to allocate funds. For example, risk.parity.portfolio, will use use risk parity to allocate funds both within and across clusters.
• Function to create clusters. For example, cluster.group.kmeans.90, will create clusters using k-means algorithm
Here is the example how to put it all together. Let’s first load historical prices for the 10 major asset classes:
# Load Systematic Investor Toolbox (SIT)
# http://systematicinvestor.wordpress.com/systematic-investor-toolbox/
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
# Load historical data for ETFs
tickers = spl('GLD,UUP,SPY,QQQ,IWM,EEM,EFA,IYR,USO,TLT')
data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1900-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='remove.na')
Next, let’s run the 2 versions of Cluster Portfolio Allocation using Equal Weight and Risk Parity algorithms to allocate funds:
# Code Strategies
periodicity = 'months'
lookback.len = 250
cluster.group = cluster.group.kmeans.90
obj = portfolio.allocation.helper(data$prices,
periodicity = periodicity, lookback.len = lookback.len,
min.risk.fns = list(
C.EW = distribute.weights(equal.weight.portfolio, cluster.group),
C.RP=distribute.weights(risk.parity.portfolio, cluster.group)
models = create.strategies(obj, data)$models
Finally, let’s examine the results:
# Create Report
strategy.performance.snapshoot(models, T)
The Cluster Portfolio Allocation produce portfolios with better risk-adjusted returns and smaller drawdowns.
To view the complete source code for this example, please have a look at the bt.cluster.portfolio.allocation.test1() function in bt.test.r at github.
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or
|
{"url":"http://www.r-bloggers.com/cluster-risk-parity-back-test/","timestamp":"2014-04-18T03:24:08Z","content_type":null,"content_length":"38143","record_id":"<urn:uuid:032769c8-1f22-405f-8202-14615931d1f3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sample Lessons – Grade 5
Sample Fifth Grade Math Lessons
• Over 1,300 lessons, with millions of paths through the curriculum
• Adaptive: lessons, hints, level of difficulty, pace, sequence, and much more are adapted for each student
• Virtual manipulatives help students solve problems multiple ways
• Standards: Aligned with the Common Core State Standards, TEKS, SOL, WNCP and Ontario Curriculum
*The following demo lessons are currently only compatible on desktop browsers, but the full lesson are available on the Apple iPad app*
Decimal Numbers to the Thousandths on the Number Line Play this 5th grade math lesson
In this DreamBox lesson, students learn to locate both positive and negative decimal numbers on a number line. This scrolling number line allows students to “zoom in” and “zoom out” to a specific
range on the number line using magnifying glasses that scale the number line by either 10 times or 100 times. In order to deeply understand decimals and place value to the thousandths, students need
to understand the relative magnitude of decimal numbers. For example, even though 3 centimeters (cm) and 2.001 cm are both greater than 2 cm, they have very different locations on a number line
relative to 2 cm. On a standard ruler, it is easy to see the difference between the mark at 2 cm and the mark at 3 cm. On that same ruler, it is virtually impossible to create a 2.001 cm mark because
it is so close to the 2 cm mark. Lessons using this manipulative deepen students' understanding of a rational number as a point on the number line.
Place Value of Decimal Numbers to the Thousandths Play this 5th grade math lesson
This DreamBox lesson uses a virtual manipulative called the Decimal Dials to teach base-ten relationships and place value for rational numbers. In this lesson, students are able to adjust a different
Dial for each place value: ones, tenths, hundredths, and thousandths. Similar to how the hour hand and minute hand on a clock move in concert, every Dial is linked to the movement of the others. In
this particular lesson, one of the Dials is broken. To solve these problems, students need to understand that "a digit in one place represents 10 times as much as it represents in the place to its
right and 1/10 of what it represents in the place to its left" (5.NBT.1).
Division within 10,000 with Remainders Play this 5th grade math lesson
This DreamBox lesson helps students understand multidigit division problems through an engaging packing context. Students add a “helper equation” as they build partial quotients to mentally compute
an answer. Though students can explore different grouping strategies, they have a limited number of “helper” equations for each problem. Students learn to compute and interpret remainders as well as
calculate exact fractional answers such as 1,110 divided by 20 equals 55 1/2.
Add & Subtract Decimals with the Number Line Play this 5th grade math lesson
In this DreamBox lesson, students use an open number line model to solve for an unknown decimal value in equations such as 1.02 + x = 4.06. Students create their own addition or subtraction equations
to move along to number line and solve the problem. Students deepen their number sense because they are able to choose an efficient solution strategy based on the numbers in the problem. They can use
place value strategies, 'friendly' numbers, or landmark numbers. In this lesson, students not only learn ways to solve for an unknown decimal value in an equation; they also develop flexible
thinking, improve the efficiency of their strategies, and strengthen their mental math abilities.
Multiplying Fractions Play this 5th grade math lesson
This DreamBox lesson teaches students how to mentally multiply fractions as well as how to represent fraction products as rectangular areas. Our Drop Zone virtual manipulative allows students to
divide unit squares into fractional pieces and then find fractional pieces of those pieces. Students connect fraction products with a visual array model to understand scaling and mental strategies
for multiplying fractions.
Multiplication Standard Algorithm Play this 5th grade math lesson
Through a series of DreamBox lessons, students learn not only to fluently multiply multidigit whole numbers using the standard algorithm, but also how to first estimate products in order to have a
sense of whether an answer is reasonable. As with all DreamBox lessons involving standard algorithms, this lesson develops conceptual understanding of the steps involved with place value strategies.
Because of these lessons on the multiplication standard algorithm, students are able to estimate large products, execute the algorithm, and explain the steps involved.
Want to try the DreamBox Learning experience firsthand?
Try answering some problems incorrectly to see the many ways lessons dynamically adapt. If you’re unsure what to do, click the HELP button!
|
{"url":"http://www.dreambox.com/fifth-grade-math-lessons","timestamp":"2014-04-18T05:36:12Z","content_type":null,"content_length":"59146","record_id":"<urn:uuid:68fbad5e-d5c3-448c-866e-8fd48aa98203>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cotorsion group
From Encyclopedia of Mathematics
An Abelian group Extension of a group). For
Cotorsion groups can also be characterized by their injective property with respect to those exact sequences Exact sequence).
Epimorphic images of cotorsion groups are cotorsion, and so are the extensions of cotorsion groups by cotorsion groups. A direct product of groups is cotorsion if and only if each summand is
Examples of cotorsion groups are: 1) divisible (i.e., injective) Abelian groups, like Divisible group); and 2) algebraically compact groups, like finite groups and the additive group of the Compact
group. A torsion Abelian group is cotorsion if and only if it is a direct sum of a divisible group and a bounded group (the Baer–Fomin theorem), and a torsion-free Abelian group is cotorsion exactly
if it is algebraically compact. Ulm subgroups of cotorsion groups are cotorsion, and the Ulm factors of cotorsion groups are algebraically compact.
For a reduced cotorsion group
A cotorsion group is said to be adjusted if it is reduced and contains no non-trivial torsion-free summand. The cotorsion hull of a reduced torsion group is adjusted, and the correspondence [a2]
states that every cotorsion group
Some authors use "cotorsion" as "cotorsion in the above sense + reduced" .
A general reference is [a1]. See [a3] for a generalization to cotorsion modules over commutative domains.
See also Cotorsion-free group.
[a1] L. Fuchs, "Infinite abelian groups" , 1 , Acad. Press (1970)
[a2] D.K. Harrison, "Infinite abelian groups and homological methods" Ann. of Math. , 69 (1959) pp. 366–391
[a3] E. Matlis, "Cotorsion modules" , Memoirs , 49 , Amer. Math. Soc. (1964)
How to Cite This Entry:
Cotorsion group. L. Fuchs (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Cotorsion_group&oldid=18282
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
|
{"url":"http://www.encyclopediaofmath.org/index.php?title=Cotorsion_group","timestamp":"2014-04-19T07:02:52Z","content_type":null,"content_length":"21791","record_id":"<urn:uuid:44fe1299-d6ef-48df-ac93-3700b5d21918>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
It is not the equations that are the problem
In the latest issue of PNAS there are several articles which the mathematically minded among you (and are involved in the mathematisation of your own fields) might enjoy.
James Gibbons asks the equations be not thrown out with theory bathwater.
Chitnis and Smith argue that mathematical illiteracy impedes progress in biology.
Adam Kane has some suggestions for improving mathematically heavy papers.
Andrew Fernandes argues that there is no evidence of equations impeding communication among biologists.
Finally, Fawcett and Higginson whose paper started all these comments, give their version:
It is not equations that are the problem; it is equations without sufficient accompanying text to explain the assumptions and implications for a broad biological audience. We do not recommend
the indiscriminate removal of equations from scientific papers.
Explaining the mathematics in sufficient detail for a broad audience can, however, require considerable space. As a pragmatic solution acknowledging the constraints many journals impose on
article length, we suggested that authors might move some of their equations to an appendix. Our viewpoint is that essential equations capturing the assumptions and structure of a model should be
presented in the main text, whereas less fundamental equations, such as those describing intermediate steps to solutions, should be presented in an appendix.
A nice set of articles worth your time.
|
{"url":"http://mogadalai.wordpress.com/2012/11/07/it-is-not-the-equations-that-are-the-problem/","timestamp":"2014-04-21T12:24:28Z","content_type":null,"content_length":"46206","record_id":"<urn:uuid:77159694-0261-4386-8858-4e1636cb4c18>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
Hyperasymptotics for nonlinear ODEs. II: The first Painlevé equation and a second-order Riccati equation.
(English) Zbl 1206.34077
Summary: This paper is a sequel to the author’s first part [Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci. 461, No. 2060, 2503–2520 (2005; Zbl 1186.34076)] in which we constructed hyperasymptotic
expansions for a simple first-order Riccati equation. In this paper we illustrate that the method also works for more complicated nonlinear ordinary differential equations, and that in those cases
the Riemann sheet structure of the so-called Borel transform is much more interesting.
The two examples are the first Painlevé equation and a second-order Riccati equation. The main tools that we need are transseries expansions and Stokes multipliers. Hyperasymptotic expansions
determine the solutions uniquely. Some details are given about solutions that are real-valued on the positive real axis.
34E05 Asymptotic expansions (ODE)
34M40 Stokes phenomena and connection problems (ODE in the complex domain)
34M55 Painlevé and other special equations; classification, hierarchies
34M60 Singular perturbation problems in the complex domain
|
{"url":"http://zbmath.org/?q=an:05213583","timestamp":"2014-04-17T06:47:06Z","content_type":null,"content_length":"21738","record_id":"<urn:uuid:14b87f7f-dd04-4baf-a5c2-9c6396080fc2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum - Ask Dr. Math Archives: High School Coordinate Plane Geometry
This page:
coordinate plane
Dr. Math
See also the
Internet Library:
coordinate plane
About Math
basic algebra
linear algebra
linear equations
Complex Numbers
Discrete Math
Fibonacci Sequence/
Golden Ratio
conic sections/
coordinate plane
practical geometry
Negative Numbers
Number Theory
Square/Cube Roots
Browse High School Coordinate Plane Geometry
Stars indicate particularly interesting answers or good places to begin browsing.
How can I draw a square with 10 square units on dot paper?
I have two 2D coordinate systems S1 and S2 arbitrarily positioned on a plane... determine the relative positions of S2 and S1.
What's the easiest way to find the distance between a point and a line in three dimensions: When the line is defined by two points in space, and when the line is defined by angles from the
cartesian axes?
I have an angle of -6 degrees which I need to express in terms of a single 0-360 revolution.
How can I compute the angle formed by two sides of a frustrum of a pyramid?
If you know the coordinates of the vertices, how do you calculate the area of a polygon?
The Bermuda triangle is shown on a graph with the points A(1,2) B(4,8) and C(8,1). Determine the area of the triangle two different ways.
My teenage son asked me for the formula for the area of intersection of two arbitrary circles.
Using coordinate system to find the areas of polygons.
Two trains A and B each of length 100m travel in opposite directions in parallel tracks. The speeds are 20m/s and 30m/s respectively. A boy sitting in the front end of train A throws a ball to a
boy sitting in the front end of train B when they are at the closest distance. The speed of the ball is 2m/s. The ball, instead of reaching the boy, hits the rear end of the train. Find the
distance between the parallel tracks.
How does barycentric calculus compare with trilinear or cartesian calculus?
A cue ball is launched at an angle of 45 degrees from the lower left corner of a pool table and ends up in the lower right corner. What rule will predict which corner the ball will hit? What
patterns are involved?
Are there any equations that could be used to solve for a plank of known width?
How do I figure the center point (Xc,Yc,Zc) of a circle given 2 or more points on its circumference and its radius?
How do you find the Cartesian coordinates of a circle's centers if you know two points on its perimeter?
How do you construct a line tangent to a circle through a point outside the circle? How do you do it with only a straightedge?
Given two coordinate points in the Cartesian plane, locate a third point perpendicular to the line joining points 1 and 2 and a certain distance from either point.
Given two perpendicular lines AC and BD in a plane and a point E directly above their intersection, find the length of BC.
If a coordinate pair contains a zero, which quadrant is it in? For example, is the point (2,0) in the 1st or 4th quadrant?
ABC is a right-angled triangle labeled counter-clockwise with its point C lying on the line y=3x. A is (2,1) and B is (5,5). Find the two possible coordinates of C.
Find all possible values of k so that (-1,2), (-10,5), and (-4,k) are the vertices of a right triangle.
My teacher asked for the coordinates of two points that lie on the x- axis. I wrote (0,4) and (0,98)...
What is the polar coordinate system and how does it differ from the rectangular coordinate system?
Can you explain pitch, roll, and yaw to me? Are there other systems for measuring an object's position in space?
What equation can be used to determine the new point when you rotate a point some number of degrees around another point?
What is meant by 'degrees of freedom'?
How does one derive the distance formula from the Pythagorean theorem?
Can you explain how to derive the formula for the dot product?
How do you derive the trilinear coordinates of the orthocenter of a triangle?
Given a triangle with vertices (A,B), (C,D), and (E,F), how do you find the area in determinant form?
How can I determine which of five equations describes the set of all points (x,y) in the coordinate plane that are a distance of 5 from the point (-3,4)?
Applying the Pythagorean Theorem to find the diameter of the circle circumscribed around a triangle with side lengths 25, 39, and 40.
How far from the line 3x + 6y = 10 is the point (1,2)?
I have two endpoints of a line segment with coordinates A(2, 7) and B(-4, -2). I am looking for the coordinates of the points that divide AB into 3 equal parts.
How do I get the equation of an ellipse, given four points and the inclination of the major axis?
I wish to draw a line departing at a given angle from the long axis of an ellipse and bisecting the perimeter of the ellipse at right angles to the tangent at that point...
Given the center of the circle, the angle of the arc, the radius of the circle, and the starting point of the arc, determine the end point of the arc using cartesian coordinates.
There are only two rectangles whose area is exactly the same as their perimeter if the dimensions of each are whole numbers. What are the dimensions?
Given the following points in polar coordinates, find the rectangular coordinates. (22, 51 degrees)
Can lattice points be the vertices of an equilateral triangle?
Page: 1 2 3 [next>]
|
{"url":"http://mathforum.org/library/drmath/sets/high_coord_plane.html?start_at=1&s_keyid=39892033&f_keyid=39892035&num_to_see=40","timestamp":"2014-04-16T23:11:40Z","content_type":null,"content_length":"23133","record_id":"<urn:uuid:4a7df014-2eae-4571-822d-d90729a12bce>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Research Programme 1.1: Number Theory and Algebra
Programme leader: H.W. Lenstra
• Staff (situation at November 1, 2008)
(situation at November 1, 2008)
permanent staff
• prof.dr. R.J.F. Cramer
• dr. J.H. Evertse
• prof.dr. J.P. Hogendijk
• prof.dr. H.W. Lenstra
• dr. R.M. van Luijk
• dr. B. de Smit
• prof.dr. P. Stevenhagen
PhD students
• N.J. Bouman M.Sc.
• drs. J. Bouw
• drs. J.F. Brakenhoff
• drs. J.L.A.H. Daems
• drs. B.E. van Dalen
• drs. W.H. Ekkelkamp
• drs. R. de Haan
• O.R. Johnston
• drs. W.J. Palenstijn
• ir. I. Smeets
• drs. T.C. Streng
• ir. M.M.J. Stevens
• A. Timofeev Dipl. Math
• E.L. Toreao Dassen M.Sc.
guest researchers
• drs. H.M. Matthijsse
• G. dalla Torre M.Sc.
Description of the project
The main focus of the research programme is number theory. Number theory studies the properties of integers, with a historically strong emphasis on the study of diophantine equa-tions, that is,
systems of equations that are to be solved in integers. The methods of number theory are taken from several other branches of mathematics. Traditionally, these include algebra and analysis, and in
recent times algebraic geometry has become increasingly im-portant. Another recent development is the discovery that number theory has significant im-plications in more applied areas, such as
cryptography, theoretical computer science, the the-ory of dynamical systems, and numerical mathematics. This discovery led to the rise of algo-rithmic and computational number theory, which occupies
itself with the design, analysis, and efficient implementation of arithmetical algorithms. The overall result has been a unification rather than a diversification of number theory. For example, the
applications in cryptography depend heavily on algebraic geometry, and algebraic number theory, which used to stand on itself, is now pervading virtually all of number theory. Themes of the programme
reflect the research areas mentioned. They include finding points on algebraic curves, applications of group theory and algebraic number theory, the theory of finite fields, diophantine
approxima-tion, words and sequences, discrete tomography, primality tests and factorization methods, and the development of efficient computer algorithms. The algebra portion of the programme is
strongly oriented towards the applications of algebra in number theory and arithmetic geometry and towards algorithmic aspects. Themes include Galois theory and various aspects of group theory and
ring theory. The research programme also includes cryptology and the history of mathematics. Main themes in cryptology are the applications of number theory and algebra to the design of
cryp-tographic schemes, and foundational issues are considered as well. In the history of mathe-matics, the emphasis is on the edition and translation of early Islamic mathematical and as-tronomical
|
{"url":"http://www.math.leidenuniv.nl/nl/research_groups/cluster1.1","timestamp":"2014-04-18T05:30:20Z","content_type":null,"content_length":"10738","record_id":"<urn:uuid:4e7b3299-4538-454a-859f-b692e4f806aa>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AIC Seminar Series
Logics of Formal Inconsistency
Notice: hosted by Richard Waldinger
Date: 2009-07-09 at 16:00
Location: EJ228 (SRI E building) (Directions)
According to the classical consistency presupposition, contradictions have an explosive disposition: Whenever they are present in a theory, anything goes, and no sensible reasoning can thus take
place. A logic is paraconsistent if it disallows such presupposition, and allows instead for some inconsistent yet non-trivial theories to make perfect sense. The Logics of Formal Inconsistency,
LFIs, form a particularly expressive class of paraconsistent logics in which the metatheoretical notion of consistency can be internalized at the object-language level. As a consequence, the LFIs
are able to recapture consistent reasoning by the addition of appropriate consistency assumptions. So, for instance, while typical classical rules such as disjunctive syllogism (from A and
(not-A)-or-B, infer B) are bound to fail in a paraconsistent logic (because A and (not-A) could both be true for some A, independently of B), they can be recovered by an LFI if the set of
premises is enlarged by the presumption that we are reasoning in a consistent environment (in this case, by the addition of (consistent-A) as an extra hypothesis of the rule).
Our talk will provide an introduction to the class of LFIs, as well as an illustration of how rich this class is, in that it naturally subsumes most logics originating from both the Brazilian and
the Polish schools of paraconsistency and is characterized by the same kind of derivability adjustment theorem that gives foundation to the logics originating from the Belgian school of
JOÃO MARCOS is a logician and holds a PhD in Mathematics from the TU-Lisbon (Portugal) and a PhD in Philosophy from the State University of Campinas (Brazil). He is currently a member of the
Group for Logic, Language, Information, Theory and Applications (LoLITA), an Associate Professor at the Department of Informatics and Applied Mathematics (DIMAp) of UFRN (Brazil), and
collaborates with the Security and Quantum Information Group (SQIG) / IT (Portugal). His main topics of investigation center around non-classical logics (paraconsistent, many-valued, modal,
intuitionistic), formal semantics (possible-translations semantics, possible-worlds semantics, society semantics), abstract logic, and the philosophy of logic.
Please arrive at least 10 minutes early in order to sign in and be escorted to the conference room. SRI is located at 333 Ravenswood Avenue in Menlo Park. Visitors may park in the visitors lot in
front of Building E, and should follow the instructions by the lobby phone to be escorted to the meeting room. Detailed directions to SRI, as well as maps, are available from the Visiting AIC web
©2014 SRI International 333 Ravenswood Avenue, Menlo Park, CA 94025-3493
SRI International is an independent, nonprofit corporation.
Privacy policy
|
{"url":"http://www.ai.sri.com/seminars/detail.php?id=278","timestamp":"2014-04-20T05:43:13Z","content_type":null,"content_length":"10382","record_id":"<urn:uuid:b711492f-b103-4565-8bf6-f4915477754a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Propeller Speed Limits
Saturday, September 29th, 2012 11:36 am GMT -6 Saturday, September 29th, 2012 11:36 am GMT -6Saturday, September 29th, 2012 11:36 am GMT -6
Can we come up with an easy system for matching a propeller to a motor and an airplane?
The Challenge
I started flying electric RC model airplanes about twenty years ago. The advances in the technologies since that time have been breathtaking. But we still fumble when it comes to putting together an
efficient electric power system.
The task has gotten a little easier over time for a couple of reasons. The incredible popularity of electric flight helps. There is a lot of knowledge available on what types of power systems work
better. Tools like my free power system calculator help, too.
But I have been wondering for a long time if we can come up with a better set of simple guidelines to make the job easier. My power system rule was an attempt at doing that for a power system as a
Of course, there is no substitute for actual performance measurements of the components. The goal here is not ”100% accurate” but “good enough to serve as a starting point for further fine tuning”.
Work In Progress
The community that has built up around this website has a huge amount of knowledge. I continue to be amazed at the depth of knowledge evident in many of the comments posted with the articles.
What I am about to describe is a work in progress. Right now it is just an interesting idea. I am hoping to engage our collective wisdom and ingenuity. Let us see how far we can take this, shall we?
The Idea
Look at the performance graph for a propeller (RPM vs. efficiency). All propellers exhibit clear drop-off points in efficiency at very low and very high RPMs. What causes them? Can these drop-off
points be predicted? Could we come up with a simple rule for predicting the best RPM range for a given propeller diameter?
Gas and Electric Propellers
Propellers made for gas engines are generally a lot thicker than propellers made for electric motors. They need to be stronger to better handle the higher RPMs that many gas engines turn at. Just
like flying with a thicker wing, the thicker propeller blades behave differently. The focus here is on propellers for electric motors. It would not be hard to generalize my observations to gas
propellers, but for the sake of simplicity I am not going to do so.
Low RPM
Propellers turning slowly run into the same low Reynolds number efficiency issues that our model airplane wings run into. Just like our model airplane wings, it can be hard to predict when the big
drop in efficiency is going to happen. From my experience, a thin electric propeller blade is probably going to be in trouble at a Reynolds number below 100,000.
High RPM
At high RPMs, an entirely different performance problem comes into play. It is due to Mach, or speed of sound, effects. The entire propeller blade does not need to be turning at close to the speed of
sound for you to notice a drop in performance.
You see, the air speeds up as it goes over the top of an airfoil. That is a direct consequence of the lift generation process. So Mach effects are felt first based on the highest speed of the air
going over the airfoil.
Putting it into more practical terms, a gas propeller blade could start to experience a drop in efficiency at a Mach number as low as half the speed of sound. That is a Mach number of 0.5.
A thinner electric propeller produces less lift. Therefore, it will be efficient up to a higher Mach number. For the sake of argument, let us say that is Mach 0.7.
The typical plastic electric propeller cannot efficiently handle high RPMs. So bear with me. If it makes you feel better, imagine it is a thin carbon fiber propeller.
Some Numbers
I put together the attached sample spreadsheet to help me put all of this into perspective. It assumes that you are at sea level under standard atmospheric conditions. It takes the performance
measurements at the propeller’s 75% radius, in accordance with propeller theory. The aspect ratio is assumed to be 15, though I know that small diameter electric propellers tend to have a lower
aspect ratio.
For each propeller diameter, I found RPM values that were close to 100,000 Reynolds and to 0.7 Mach. For the smaller diameter propellers I also added a row with the typical RPMs that those propellers
are normally flown at.
The propeller pitch does not affect the Reynolds or Mach numbers, so it can be ignored.
Do you see what I see? If the propeller diameter is less than 10 inches (25 cm), getting high efficiency out of a model airplane propeller is pretty much mission impossible. We are not going to be
running our electric slow flyer motors (with 8 inch propellers) at 15,000 RPM to get decent efficiency.
With a four inch (10 cm) diameter propeller, the high efficiency window is tiny and at an extremely high RPM (60,000). It is just not going to happen.
Once the diameter gets bigger, we are in much better shape. A 10 inch propeller starts getting decent efficiency at about 10,000 RPM. That is doable. A 16 inch prop has a 11,000 range in RPM speed
where it will work well.
If you thought that the main challenge with the smaller models was making the wing efficient, think again. The propeller is probably in deeper trouble.
Next Steps?
Coming up with a table with a recommended RPM range per propeller diameter is now easy. Again, my goal here is to come up with a set of rough rules of thumb. Matching the number of battery cells to
the motor’s Kv is not out of the question. Using an estimate of the model’s cruising airspeed to pick the propeller pitch speed is also doable.
How do you estimate the cruising airspeed of a model airplane? I am not sure.
Could we use the propeller diameter and estimated RPM to compute the expected power requirement? Maybe. Then using the battery pack voltage, we could compute the current in amps.
This is all highly speculative. The point of this article is more to stimulate some brainstorming than to provide answers.
More than any time before, I want to hear what you think. It is a lot easier to say that something is impossible than to figure out a solution. How would you solve this problem?
Propeller Speed Limits Spreadsheet (xls)
|
{"url":"http://rcadvisor.com/propeller-speed-limits","timestamp":"2014-04-17T13:19:05Z","content_type":null,"content_length":"84280","record_id":"<urn:uuid:e3bfd016-9220-4159-878b-d8dd72b08f7d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Specializing early
up vote 7 down vote favorite
Topic: this is a mathematics education question (but applies to other sciences too).
Assumptions: my first assumption is that most mathematical concepts used in research are not intrinsically more complicated to grasp than high-school and undergraduate maths, the main difference is
the amount of prerequisites (and hence time and experience) involved. My second assumption is that some undergraduate topics currently taught compulsarily are a bit of a burden for someone focussed
on a particular topic.
Now of course cognitive development is a constraint, but upon reaching the age of high-school, I would think that a fairly large proportion of the scientifically-enclined students could really
understand things usually taught much later and indeed become active at research level within a few years, provided some shortcuts are introduced.
Early specialization: I'm wondering if a balanced curriculum already exists (or is planned) to provide such early specialization. What I'm looking for is this: a one-week panorama of maths (or
physics, or biology) would be organized at the beginning, and then the students would decide which subtopic to study. For example someone interested by group theory (or quantum optics, or genetics)
would thus start with basics at the age 15 or 16, and gradually learn more stuff and skills, but for a few years with a strong emphasis on things directly relevant for the chosen subtopic.
So for example the student specializing in group theory would only learn differential calculus and manifolds in passing in the context of Lie groups, and would skip most undergraduate real and
functional analysis until it becomes relevant for his/her research topic, if at all. Of course other general courses would still be taught (history, sciences, programming, foreign languages...), but
at least 50% of the student's week would be devoted to the research topic, ensuring satisfying progress.
Question: do you know of any active or planned educative curriculum (at a high-school or university, or maybe a specific home-schooling program) as outlined above? As an example of successful early
specialization see e.g. the winners of the Siemens Foundation Prizes, but I haven't been able to learn much about their specific curriculum if any.
Note: Skipping grades in school to enter university earlier is not the point, I'm really interested in a subtopic-oriented curriculum.
27 I do not know about such programs, but really hope they do not exist. Mathematics is big and beautiful, with lots of connections between its remotest branches. Why be cruel to the children and
deprive them of that? – Boris Bukh Feb 27 '10 at 12:24
2 Also, community wiki please? – Boris Bukh Feb 27 '10 at 12:26
15 Thanks for spelling out your assumptions. I vehemently disagree with them, and with your conclusions. Far from ensuring progress, I think this would be disastrous for future researchers, and even
worse for the vast majority of students who do not become researchers. – Douglas Zare Feb 27 '10 at 17:52
1 I like your idea. Not the whole of it, but at least the spirit. I feel that our education system focuses more on breadth than depth. Implementing a program like yours could be a reasonable
counterbalance. But maybe not in high-school, and maybe not in the States, where inequities between people are already huge (I feel your program would only increase those). – maks Feb 28 '10 at
@TS: You say that intellectual power is near its height at ages 16-20. Could you a) explain what you mean by "intellectual power" and b) provide some evidence for this? A little googling shows:
4 (i) Youngest world chess champion: Kasparov (22); (ii) youngest Fields Medalist: Serre (27), youngest Nobel Prize winner: Bragg (25) [shared with his father!]. Kasparov retained the championship
for many years and, based on his ratings, was significantly stronger in his late 20s and early 30s than in his early 20s. – Pete L. Clark Feb 28 '10 at 7:48
show 8 more comments
3 Answers
active oldest votes
As far as getting high school students involved in research by learning rapidly a narrow range of mathematics but in some depth, this is actually done in the mathematics section of the
Research Science Institute program at MIT for students who have completed their junior year. Last year there were four projects in representation theory; I recall that one of them did
not now linear algebra until some two weeks before the program (but learned quickly and completed a very successful project).
up vote 6 Sadly, I do not know of many other opportunities- RSI is a small program, and only a portion of it is for mathematics. I believe the PROMYS program supervises some research projects, but
down vote it is primarily for learning mathematics. Incidentally, many of the winners of competitions such as Siemens begin their projects at RSI.
Also, alumni of the RSI program do not necessarily end up specializing in the same fields that they did their projects (if they do eventually choose to pursue a career in mathematics,
which does not always happen). It does give an exposure to a certain field, though.
2 Not quite as research oriented, but still something for high schoolers: Math Camp mathcamp.org – Charles Siegel Feb 27 '10 at 13:26
8 Certainly the RSI program is very successful, and it seems like a good idea for there to be more programs like it. However, this is a key point -- RSI is a summer program which
excellent, enthusiastic students take in addition to their normal coursework. These students aren't missing out on any of the standard, non-specialized coursework by participating in
the program. So there is no tradeoff here, as the OP is suggesting. – Pete L. Clark Feb 27 '10 at 17:14
1 I participated in the PROMYS program as a student in its first two years, and think it was great. There was no research component, and presenting a motivated, rigorous college course
in number theory to high school students is reasonable whether or not the students want to study number theory or even mathematics later. – Douglas Zare Feb 27 '10 at 17:57
Mathcamp, RSI, and PROMYS (of which I have experience with the latter two) all share the property that Pete mentions - they are an enrichment experience to be taken in addition to the
1 standard curriculum. While I'm on the subject, I have to bring up an issue with this paradigm: because much of the "subtopic-oriented" material at Mathcamp and PROMYS is taught very
quickly by counselors, students sometimes confuse exposure to material with mastery, and may enter college feeling that they know more about e.g. elementary group theory than they
actually do. – Qiaochu Yuan Mar 2 '10 at 4:00
2 @Qiaochu: I never went to a summer math program before RSI (probably a mistake on my part), but I at least can say that the RSI program made me acutely realize how distantly the peaks
of Mount Bourbaki lay from my accumulated knowledge, and how much more climbing awaited me in college. – Akhil Mathew Mar 2 '10 at 23:11
show 3 more comments
I'm not sure how good an idea this would be. I happen to be in a position where I read many applications of students wishing to do a PhD in my group. Some applicants have a very definite
idea of what it is that they want to do, but this is usually for lack of exposure to other topics. It is not unheard of that they end up doing their PhD is a completely different area.
I can speak from my own personal experience. At every stage in my life I was sure I knew what I wanted to study, but as I learnt and become exposed to new topics this changed; though not my
certainty about my choice. Until about age 14 I wanted to study Molecular Biology. Had I specialised then I would not have seen any of the Physics and Mathematics which have become such an
important part of my life.
The point I'm trying to make that is that early specialisation might be depriving the student from finding what it is they truly like. Freedom of choice is only ever meaningful if one can
understand (or at least be aware of) the alternatives.
up vote Of course, you could argue that there is a lot more information readily available to school children than when I went to school, so perhaps a more conscious choice can be made at an early
23 down age. There is however still a danger even within one discipline, say, Mathematics.
The research council which funds mathematical research in the UK commissioned an international review some years ago. A panel of respected non-UK mathematicians analysed the state of UK
mathematical research. One of their conclusions was that due to the short length of the UK PhD (36 months at the time) students were forced to specialise earlier than in other countries
(though not as early as the OP suggests) in order to complete their PhD in time. This then made it harder to switch fields later in their career and made them less competitive in the long
I will refrain from commenting here on the half-measures that were introduced to try to solve this problem, but I simply want to point out that even such a late "early specialisation" as
this one is not desirable.
I'm in complete agreement with José here, and want to point out that there are problems even with early subspecialization within your subject. I know that I'm glad I kept learning about
10 other things over the first few years of grad school, because though I'm still a complex algebraic geometry person, I've almost traversed the entire length of the subject (even coming
back from a very different angle!) compared to what I thought I was interested in at first. – Charles Siegel Feb 27 '10 at 13:29
add comment
I agree with José's comment above: I do not think early specialisation is a good idea. Did I understand correctly that you want to give a one week to a 15-year old to decide on which area of
mathematics to specialize?
I want to add something different, however. I fail to see how "some undergraduate topics currently taught compulsorily are a bit of a burden". Mathematics is not a set of disconnected areas.
up vote They are all highly related. Most research problems, while staying in one area, may be related to another, motivated by another, applicable in another, or steal ideas or techniques from
11 down another. One general course in, say, real analysis, complex analysis, abstract algebra, differential geometry, discrete mathematics, or topology is not a burden, but I dare say an actual
vote necessity for anybody wanting to do research on any topic in pure math. To use your own example, someone doing research in Lie theory will benefit from, rather than be burdened by, a solid
understanding of basic differential geometry. Or to use my own case, I am a Poisson geometer, but I have used ideas or results from all the above topics in my research.
To answer your comments: yes, there would be a panorama of maths with lots of examples, open problems, and interactions with various researchers. I agree that math areas are connected
broadly speaking, but I'm convinced that for a young person who loves particularly one subject there's a way to learn things as he/she goes along when it's needed, rather than start with
a lot of imposed breadth. For example some decades ago plane curves like conics were taught in high-school, not anymore today. It is not so easy to say what is fundamental and what isn't,
depends very much on the goals. – Thomas Sauvaget Mar 2 '10 at 8:25
Well, it is not easy to say what is fundamental and what isn't because you do not know what you are going to need when you are doing research in area X. If it simply were the case that,
1 when you are doing research in area X, it becomes obvious that you need to learn about area Y, then you could go ahead and do it. In practice, however, it may be the case that you will
only notice that you need to use area Y if you are already familiar with it behorehand. Hence the need for a "broad" education. And honestly, the standard curriculum in the US or the UK
is not that broad in the first place. – Alfonso Gracia-Saz Mar 2 '10 at 16:28
add comment
Not the answer you're looking for? Browse other questions tagged mathematics-education or ask your own question.
|
{"url":"http://mathoverflow.net/questions/16587/specializing-early/16825","timestamp":"2014-04-17T12:47:23Z","content_type":null,"content_length":"80580","record_id":"<urn:uuid:fb6a56f0-44be-440f-9588-64db9ee527e3>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Wikipedia, the free encyclopedia
(Redirected from
The joule (/ˈdʒuːl/ or sometimes /ˈdʒaʊl/), symbol J, is a derived unit of energy, work, or amount of heat in the International System of Units.^1 It is equal to the energy expended (or work done) in
applying a force of one newton through a distance of one metre (1 newton metre or N·m), or in passing an electric current of one ampere through a resistance of one ohm for one second. It is named
after the English physicist James Prescott Joule (1818–1889).^2^3^4
In terms firstly of base SI units and then in terms of other SI units:
$\rm J = {}\rm \frac{kg \cdot m^2}{s^2} = N \cdot m = \rm Pa \cdot m^3={}\rm W \cdot s = C \cdot V$
where kg is the kilogram, m is the metre, s is the second, N is the newton, Pa is the pascal, W is the watt, C is the coulomb, and V is the volt.
One joule can also be defined as:
• The work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one '"coulomb volt" (C·V). This relationship can be used to define the volt.
• The work required to produce one watt of power for one second, or one "watt second" (W·s) (compare kilowatt hour). This relationship can be used to define the watt.
This SI unit is named after James Prescott Joule. As with every International System of Units (SI) unit whose name is derived from the proper name of a person, the first letter of its symbol is upper
case (J). However, when an SI unit is spelled out in English, it should always begin with a lower case letter (joule), except in a situation where any word in that position would be capitalized, such
as at the beginning of a sentence or in capitalized material such as a title. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. —Based on The International System of
Units, section 5.2.
Confusion with newton-metre
Main article:
newton metre
In angular mechanics, torque is analogous to the linear Newtonian mechanics parameter of force, moment of inertia to mass, and angle to distance. Energy is the same in both systems. Thus, although
the joule has the same dimensions as the newton-meter (1 J = 1 N·m = 1 kg·m^2·s^−2), these units are not interchangeable: the CGPM has given the unit of energy the name "joule", but has not given the
unit of torque any special name, hence the unit of torque is known as the newton-metre (N·m) - a compound name derived from its constituent parts.^5 Torque and energy are related to each other using
the equation
$E= \tau \theta\$
where E is the energy, τ is the torque, and θ is the angle moved (in radians). Since radians are dimensionless, it follows that torque and energy have the same dimensions.
The use of newton-metres for torque and joules for energy is useful in helping avoid misunderstandings and miscommunications.^5 Another solution to this problem is to name the unit of angle, such
that the unit of torque is called joule per radian.
An additional solution is to realize that joules are scalars - they are the dot product of a vector force and a vector displacement whereas torque is a vector. Torque is the cross product of a
distance vector and a force vector. Drawing a traditional vector arrow over "newton-meter" in a torque resolves the ambiguity.
Practical examples
One joule in everyday life represents approximately:
• the energy required to lift a small apple (with a mass of approximately 100 g) vertically through one meter of air.
• the energy released when that same apple falls one meter to the ground.
• the heat required to raise the temperature of 1 g of water by 0.24 K.^6
• the typical energy released as heat by a person at rest, every 1/60th of a second.^7
• the kinetic energy of a 50 kg human moving very slowly (0.2 m/s).
• the kinetic energy of a tennis ball moving at 23 km/h (6.4 m/s).^8
• the kinetic energy of 1 kg moving √2 m/s.
Since the joule is also a watt-second and the common unit for electricity sales to homes is the kWh (kilowatt-hour), a kWh is thus 1000 (kilo) x 3600 seconds = 3.6 MJ (megajoules).
For additional examples, see: Orders of magnitude (energy)
SI multiples for joule (J)
Submultiples Multiples
Value Symbol Name Value Symbol Name
10^−1 J dJ decijoule 10^1 J daJ decajoule
10^−2 J cJ centijoule 10^2 J hJ hectojoule
10^−3 J mJ millijoule 10^3 J kJ kilojoule
10^−6 J µJ microjoule 10^6 J MJ megajoule
10^−9 J nJ nanojoule 10^9 J GJ gigajoule
10^−12 J pJ picojoule 10^12 J TJ terajoule
10^−15 J fJ femtojoule 10^15 J PJ petajoule
10^−18 J aJ attojoule 10^18 J EJ exajoule
10^−21 J zJ zeptojoule 10^21 J ZJ zettajoule
10^−24 J yJ yoctojoule 10^24 J YJ yottajoule
Common multiples are in bold face
The nanojoule (nJ) is equal to one billionth of one joule. One nanojoule is about 1/160 of the kinetic energy of a flying mosquito.^9
The microjoule (μJ) is equal to one millionth of one joule. The Large Hadron Collider (LHC) is expected to produce collisions on the order of 1 microjoule (7 TeV) per particle.
The millijoule (mJ) is equal to one thousandth of a joule.
The kilojoule (kJ) is equal to one thousand (10^3) joules. Nutritional food labels in certain countries express energy in standard kilojoules (kJ).
One kilojoule per second (1 kilowatt) is approximately the amount of solar radiation received by one square metre of the Earth in full daylight.^10
The megajoule (MJ) is equal to one million (10^6) joules, or approximately the kinetic energy of a one-ton vehicle moving at 160 km/h (100 mph).
Because 1 watt times one second equals one joule, 1 kilowatt-hour is 1000 watts times 3600 seconds, or 3.6 megajoules.
The gigajoule (GJ) is equal to one billion (10^9) joules. Six gigajoules is about the amount of potential chemical energy in a barrel of oil, when combusted.^11
The terajoule (TJ) is equal to one trillion (10^12) joules. About 63 terajoules were released by the atomic bomb that exploded over Hiroshima.^12 The International Space Station, with a mass of
approximately 450,000 kg and orbital velocity of 7.7 km/s,^13 has a kinetic energy of roughly 13.34 terajoules.
The petajoule (PJ) is equal to one quadrillion (10^15) joules. 210 PJ is equivalent to about 50 megatons of TNT. This is the amount of energy released by the Tsar Bomba, the largest man-made nuclear
explosion ever.
The exajoule (EJ) is equal to one quintillion (10^18) joules. The 2011 Tōhoku earthquake and tsunami in Japan had 1.41 EJ of energy according to its 9.0 on the moment magnitude scale. Energy in the
United States used per year is roughly 94 EJ.
The zettajoule (ZJ) is equal to one sextillion (10^21) joules. Annual global energy consumption is approximately 0.5 ZJ.
The yottajoule (YJ) is equal to one septillion (10^24) joules. This is approximately the amount of energy required to heat the entire volume of water on Earth by 1 °Celsius.
1 joule is equal to:
• 1×10^7 ergs (exactly)
• 6.24150974×10^18 eV (electronvolts)
• 0.2390 cal (thermochemical gram calories or small calories)
• 2.3901×10^−4 kcal (thermochemical kilocalories, kilogram calories, large calories or food calories)
• 9.4782×10^−4 BTU (British thermal unit)
• 0.7376 ft·lb (foot-pounds)
• 23.7 ft·pdl (foot-poundals)
• 2.7778×10^−7 kilowatt-hour
• 2.7778×10^−4 watt-hour
• 9.8692×10^−3 litre-atmosphere
• 11.1265 femtograms (mass-energy equivalence)
• 1×10^−44 foe (exactly)
Units defined exactly in terms of the joule include:
• 1 thermochemical calorie = 4.184 J^14
• 1 International Table calorie = 4.1868 J^15
• 1 watt hour = 3600 J
• 1 kilowatt hour = 3.6×10^6 J (or 3.6 MJ)
• 1 watt second = 1 J
• 1 ton TNT = 4.184 GJ
See also
Notes and references
1. ^ International Bureau of Weights and Measures (2006), The International System of Units (SI) (8th ed.), p. 120, ISBN 92-822-2213-6
2. ^ American Heritage Dictionary of the English Language, Online Edition (2009). Houghton Mifflin Co., hosted by Yahoo! Education.
3. ^ The American Heritage Dictionary, Second College Edition (1985). Boston: Houghton Mifflin Co., p. 691.
4. ^ McGraw-Hill Dictionary of Physics, Fifth Edition (1997). McGraw-Hill, Inc., p. 224.
5. ^ ^a ^b From the official SI website: "A derived unit can often be expressed in different ways by combining base units with derived units having special names. Joule, for example, may formally be
written newton metre, or kilogram metre squared per second squared. This, however, is an algebraic freedom to be governed by common sense physical considerations; in a given situation some forms
may be more helpful than others. In practice, with certain quantities, preference is given to the use of certain special unit names, or combinations of unit names, to facilitate the distinction
between different quantities having the same dimension."
6. ^ "Units of Heat - BTU, Calorie and Joule". Engineeringtoolbox.com. Retrieved 2013-09-16.
7. ^ This is called the basal metabolic rate. It corresponds to about 1200 kilocalories (also called dietary calories) per day. "At rest" means awake but inactive.
8. ^ Ristinen, Robert A.; Kraushaar, Jack J. (2006). Energy and the Environment (2nd ed.). Hoboken, NJ: John Wiley & Sons. ISBN 0-471-73989-8.
9. ^ CERN - Glossary
10. ^ "Construction of a Composite Total Solar Irradiance (TSI) Time Series from 1978 to present". Retrieved 2005-10-05.
11. ^ IRS publication
12. ^ Los Alamos National Laboratory report LA-8819, The yields of the Hiroshima and Nagasaki nuclear explosions by John Malik, September 1985. Available online at http://www.mbe.doe.gov/me70/
13. ^ International Space Station Fact Sheet
14. ^ The adoption of joules as units of energy, FAO/WHO Ad Hoc Committee of Experts on Energy and Protein, 1971. A report on the changeover from calories to joules in nutrition.
15. ^ Feynman, Richard (1963). "Physical Units". Feynman's Lectures on Physics. Retrieved 2014-03-07.
|
{"url":"http://www.bioscience.ws/encyclopedia/index.php?title=Gigajoule","timestamp":"2014-04-17T12:39:47Z","content_type":null,"content_length":"83977","record_id":"<urn:uuid:3cea57e8-741a-4f46-b136-7d99c0e0b22a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Eikonal equation for a general anisotropic or chiral medium: application to a
We discuss the numerical simulation of a graded refractive index (GRIN) lens with a general anisotropic medium and compare it with an equivalent nongraded positive-index-of-refraction-material (PIM)
lens with an isotropic medium. To evaluate lens performance, we developed a modified eikonal equation valid for the most general form of an anisotropic or chiral medium. Our approach is more
comprehensive than previous work in this area and is obtained from the dispersion relation of Maxwell’s equations in the eikonal approximation. The software developed for the numerical integration of
the modified eikonal equation is described. Subsequently, a full finite-difference time-dependent simulation was performed to verify the validity of the eikonal calculations. The performance of the
GRIN lens is found to be improved over the equivalent PIM one. The GRIN lens is also five to ten times lighter than the equivalent PIM. A GRIN lens operating at 15 GHz is now under fabrication at
Boeing, and the experimental results of this lens will be reported in a forthcoming paper.
© 2006 Optical Society of America
OCIS Codes
(080.1010) Geometric optics : Aberrations (global)
(080.2710) Geometric optics : Inhomogeneous optical media
(080.2720) Geometric optics : Mathematical methods (general)
(080.3620) Geometric optics : Lens system design
(080.3630) Geometric optics : Lenses
ToC Category:
Original Manuscript: July 8, 2005
Revised Manuscript: October 24, 2005
Manuscript Accepted: October 27, 2005
Claudio G. Parazzoli, Benjamin E. C. Koltenbah, Robert B. Greegor, Tai A. Lam, and Minas H. Tanielian, "Eikonal equation for a general anisotropic or chiral medium: application to a negative-graded
index-of-refraction lens with an anisotropic material," J. Opt. Soc. Am. B 23, 439-450 (2006)
Sort: Year | Journal | Reset
1. V. G. Veselago, "The electrodynamics of substances with simultaneously negative values of epsilon and µ," Sov. Phys. Usp. 10, 509-514 (1968). [CrossRef]
2. R. A. Shelby, D. R. Smith, and S. Schultz, "Experimental verification of a negative index of refraction," Science 292, 77-79 (2001). [CrossRef] [PubMed]
3. C. G. Parazzoli, R. B. Greegor, K. Li, B. E. C. Koltenbah, and M. Tanielian, "Experimental verification and simulation of negative index of refraction using Snell's law," Phys. Rev. Lett. 90,
107401 (2003). [CrossRef] [PubMed]
4. D. Shurig and D. R. Smith, "Negative index lens aberrations," Phys. Rev. E 70, 065601(R) (2004). [CrossRef]
5. C. G. Parazzoli, R. B. Greegor, J. A. Nielsen, M. A. Thompson, K. Li, A. M. Vetter, M. H. Taniliean, and D. C. Vier, "Performance of a negative index of refraction lens," Appl. Phys. Lett. 84,
3232-3234 (2004). [CrossRef]
6. P. Vodo, P. V. Parimi, W. T. Lu, and S. Sridar, "Focusing by planoconcave lens using negative refraction," Appl. Phys. Lett. 86, 201108 (2005). [CrossRef]
7. D. R. Smith, J. J. Mock, A. F. Starr, and D. Schurig, "Gradient index metamaterials," Phys. Rev. E 71, 036609 (2005). [CrossRef]
8. R. B. Greegor, C. G. Parazzoli, J. A. Nielsen, M. A. Thompson, M. H. Tanielian, and D. R. Smith, "Simulation and testing of a graded negative index of refraction lens," Appl. Phys. Lett. 87,
091114 (2005). [CrossRef]
9. E. W. Marchand, Gradient Index Optics (Academic, 1978).
10. E. Langenbach, "Raytracing in gradient index optics," Proc. SPIE 1780, 486-490 (1993).
11. M. Born and E. Wolfe, Principles of Optics, 7th ed. (Cambridge U. Press, 1999).
12. V. A. De Lorenci, R. Klippert, and D. H. Teodoro, "Birefringence in nonlinear anisotropic dielectric media," Phys. Rev. D 70, 124035 (2004). [CrossRef]
13. M. Kline and I. W. Kay, Electromagnetic Theory and Geometrical Optics (Interscience, 1965).
14. D. R. Smith, P. Kolinko, and D. Schurig, "Negative refraction in indefinite media," J. Opt. Soc. Am. B 21, 1032-1043 (2004). [CrossRef]
15. See Ref. , Sec. 3.1.1.
16. P. R. Garabedian, Partial Differential Equations (Wiley, 1964).
17. D. Hanselman and B. Littlefield, Mastering MATLAB 6 (Prentice Hall, 2001). [PubMed]
18. Computer Simulation Technology of America.
19. R. Greegor, C. G. Parazzoli, J. A. Nielsen, M. A. Thompson, M. H. Tanielian, D. C. Vier, S. Schultz, D. R. Smith, and D. Schurig, "Microwave focusing and beam collimation using negative index of
refraction lenses," Special Issue on Metamaterials, IEE Proc. H. Microwaves, Antennas Propag., submitted for publication.
20. S. Wolfram, The Mathematica Book, 3rd ed. (Wolfram Media/Cambridge U. Press, 1996).
21. See, for example, M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (Dover, 1972), Sec. 3.8.3 and 3.12, Example 6.
22. See Ref. , Chap. 5 and 9.
23. See Ref. , Table 9.2.
24. See Ref. , Sec. 9.2.3, Eq. (20).
25. D. Schurig, Duke University, Durham, N. C. 27707 (personal communication, 2005).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article »
|
{"url":"http://www.opticsinfobase.org/josab/abstract.cfm?URI=josab-23-3-439","timestamp":"2014-04-17T23:08:03Z","content_type":null,"content_length":"143533","record_id":"<urn:uuid:b646bad6-3813-41aa-a737-03f82a2fe5bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math 187 HOT List
Here is a short list of interesting cryptography web links to learn more about cryptography. If you find other web pages that you like that you think should be added to this list please e-mail them
to ma187s@insci14.ucsd.edu. The number of cryptography related sites seems to double every year, this list is only a starting point that hopefully contains a couple of sites with more complete lists
of links. Another problem we found is that these links disappear every so often. If you notice some missing, please let us know. This list is not actively maintained since our course happens once a
year. We will try to backup some of the pages that are not in stable locations.
Cryptography FAQ
RSA Cryptography FAQ
PGP FAQ- mini overview
Reference pages
NOVA program- Decoding Nazi Secrets
Beginner's Guide to Cryptography
A-Z Cryptology! - Over 45 different fun codes, ciphers, and concealment message methodologies
Beginners Cryptography Page
Cryptology - An overview
Introduction to Cryptography
Cryptography: The study of encryption - A list of web links
Cryptolog - Excellent list of internet references
Why cryptography is harder than it looks - a short html article
Algorithms & programs
Cryptographic Algorithms
North American Cryptography Archives
Fenced DES
Ritter's Crypto bookshop - A nice bibliography of crytptography
Quantum computing
It is likely that quantum computing will be the next revolution in computer science and the effect on cryptography will be earth shattering due to Shor's algorithm. This is a new field and a lot of
ground needs to be broken before the first quantum computer can be built, but more and more people are begining to work on it. Watch the internet as quantum computing companies spring up all over the
Press release of MIT research on quantum computers - August 12 ,1999
Quantum Computers - A short summary written by Daniel Gottesman
Quantum Computing and Shor's Algorithm - An html article
Quantum Computing with Molecules - Scientific American Article
Stanford, Berkeley, MIT, IBM Quantum computing project - Somebody's got to do it
Dabacon's Quantum Computing Stage - A good start on internet references
Quantum Information at Los Alamos National Laboratory - Some tutorials on quantum stuff in general
IBM's Quantum Info Page - It lacks a lot of references, but talks about a Quantum teleportation project
Laboratory for Theoretical and Quantum Computing - Université de Montréal
Buisnesses & Associations
Bokler Software's Cryptographic Resources Page
Quadralay Corporation Cryptography Archive - very useful set of links
RSA's Home Page
Cryptogaphy Research, Inc.
SSH - Check out their Tech Corner for resources
International Association for Cryptologic Research
General- Cryptograms
Games for this class
Cryptogram Corner - Cryptograms, plus links to more
Cypherspace- A Javascript puzzler
E-text sources
Project Gutenberg Site- In need of some e-text?
People associated with Math 187 at UCSD
[Back| Home| Programs| Documentation| Internet| People]
|
{"url":"http://www.math.ucsd.edu/~crypto/internet.html","timestamp":"2014-04-20T11:19:10Z","content_type":null,"content_length":"7451","record_id":"<urn:uuid:944aed16-46bf-4545-b82c-1559c90a3fa9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roslindale Prealgebra Tutor
Find a Roslindale Prealgebra Tutor
...I am currently a research associate in materials physics at Harvard, have completed a postdoc in geophysics at MIT, and received my doctorate in physics / quantitative biology at Brandeis
University. I will travel throughout the area to meet in your home, library, or wherever is comfortable for ...
16 Subjects: including prealgebra, calculus, physics, geometry
I am a licensed teacher and an experienced tutor who has worked with high school students for many years. I help with math - Algebra, Geometry, Pre-Calculus, Statistics. I am comfortable with
Standard, Honors, and AP curricula.
8 Subjects: including prealgebra, statistics, algebra 1, geometry
...I can teach the basics of computer programming. I have experience in C++ as a programmer for 2 years and in my undergraduate major. I also have experience in Java as I have been teaching it
this past year to high school students.
19 Subjects: including prealgebra, calculus, physics, algebra 2
...I have always enjoyed helping others with schoolwork through demonstrating and explaining concepts to whomever needed it. I am passionate about using the knowledge I have gained to help others
either improve or excel in academics, particularly in Math. I will look to you for feedback after each...
15 Subjects: including prealgebra, chemistry, English, writing
...I am also very good at teaching writing. I can teach the basics of grammar, spelling, and punctuation for the lower levels (K-5), and essay writing, critical analysis, and critical essays of
the classics for upper level grades. Before I began a family I was in the actuarial field.
25 Subjects: including prealgebra, English, reading, calculus
|
{"url":"http://www.purplemath.com/Roslindale_prealgebra_tutors.php","timestamp":"2014-04-21T12:34:05Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:550cf23b-1645-4256-9a71-38baee16bd64>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electricity 101 - Ruralite Magazine
How it works and how to save money
For many people, that introductory lesson in electricity is where the education stops. Unless it is a job requirement, you may feel there is not much need to understand how electricity reaches your
home. You know that if you flip a switch, the lights come on. You plug in your cell phone, and it charges. If the lights don’t come on, you call the electric company.
While you may not need to fully understand the route electricity takes to get to your home, it makes sense to understand where power comes from, and how to get the most out of the service you
The Basics
Electricity is the flow of electrical power or charge. It is considered a secondary energy source because it comes from the conversion of other sources of energy known as primary sources.
These primary sources include coal, natural gas, nuclear, hydroelectric and oil. Some energy sources—such as sunlight, wind and water—are renewable and can be replenished. Others—such as oil, natural
gas and coal—are non-renewable and cannot be replenished.
Measuring Electricity
Electricity can be measured in three ways.
Volts (V): A unit of electric force that measures the pressure of electricity. House electricity is 120 volts. Flashlight batteries are 1.5 volts. Car batteries are 12 volts.
Watts (W): The measure of power that can be generated by an electric current. Most appliances and light bulbs are labeled with the wattages they use. Incandescent light bulbs are typically 60, 75 or
100 watts. Microwave ovens and hair dryers are 1,000 or 1,200 watts. A kilowatt (kW) is equivalent to 1,000 watts. A kilowatt-hour (kWh) is a measurement of energy consumption. It is the amount of
power used over time, and the basis for how electric bills are calculated.
Amperes (amp): The measure of how much electricity is moving through a conductor. Amperes equal watts divided by volts. A typical household electrical outlet is 15 amps.
Phantom Power
There are many small ways to save energy, such as turning off the lights when you leave the room or running only full loads in your washing machine. A more encompassing way to save energy is to
reduce phantom power.
Even when turned off, most electronics consume a small amount of electricity if they are still plugged in. Chargers for mobile devices consume electricity if they are plugged in, even when they are
not actively charging the device.
This wasted energy, called “phantom load,” accounts for as much as 10 percent of a home’s total electric use.
Save money by unplugging your electronics when you are finished using them. Using a power strip can help you unplug multiple devices at once. “Smart” power strips automatically cut off phantom loads.
Light Bulbs
You can’t talk about saving energy without talking about light bulbs. But deciding what is right for your home can be confusing.
Newer light bulbs, such as compact fluorescent lamps and light-emitting diodes, last longer and use less energy than traditional incandescent bulbs.
CFLs can be as much as 75 percent more efficient than basic incandescent bulbs. LEDs can last up to 25 times longer than a classic incandescent bulb, but are more expensive.
Light bulb packaging now includes a lighting facts label, which includes such information as brightness, estimated yearly cost, life of the bulb, the light’s appearance from warm to cool and the
As we switch to more-efficient bulbs, we need to change the way we shop for them. In the past, we have selected light bulbs based on their wattage. But wattage indicates how much power is used rather
than the brightness of the bulb. The term to learn is lumens.
Lumens measure brightness. For example, a traditional 60-watt incandescent bulb produces about 800 lumens of light. A CFL produces the same 800 lumens using less than 15 watts.
Despite the variety of choices, you can use lumens to compare the brightness of any bulb. Once you know the brightness you need, you can shop wisely and get the most for your money.
The Cost of Electricity
Use these formulas to calculate your energy use and projected costs.
Calculate Energy Consumption: Power x Time = Energy
For example, using a 100-watt bulb for 10 hours equals 1 kWh.
(100 watts x 10 hours = 1,000 watt-hours or 1 kilowatt-hour.)
Calculate Energy Costs:
Power (kW) x Time (hours of operation) x Price ($/kWh) = Cost of operation.
To find out how much it may cost to run a specific appliance, follow these five easy steps. You are billed per kWh, or for how much electricity you use in one hour. Examples are based on an average
cost of $0.144 per kWh.
1) Obtain the wattage (watts) from the appliance nameplate. Example: A quartz heater with a nameplate of 1,500 watts.
Note: If listed as kW, skip to step 3. If amps are specified, multiply amps x voltage to obtain watts.
2) Divide the number of watts by 1,000 to get kW.
Example: 1,500 W ÷ 1,000 = 1.5k W.
3) To find out how many kWh the appliance uses, multiply the kW times the number of hours* the appliance runs each day.
Example: The heater runs for 10 hours per day = 1.5k W x 10 hours = 15 kWh per day.
* If the appliance operates for less than one full hour, divide the number of minutes by 60. For example: a hair dryer is used 5 minutes each day, or 5 ÷ 60 = 0.083 hours per day. A 1,250-watt hair
dryer = 1.25kW x 0.083 hours per day = 0.1 kWh per day.
4) To calculate the daily operating cost, multiply the kWh of the appliance by the average cost per kWh.
Example: Quartz heater daily cost = 15 kWh x $0.144 = $2.16 per day.
5) To calculate the monthly operating cost, multiply the daily cost by the number of days the appliance is used during the month.
Example: If you run the 1,500-watt quartz heater 10 hours a day, 30 days a month = 15 kWh x $0.144 x 30 = $64.80.
|
{"url":"http://www.ruralite.org/electricity-101/","timestamp":"2014-04-17T01:26:08Z","content_type":null,"content_length":"26681","record_id":"<urn:uuid:1baf963c-d842-4e84-a03d-52f194c4a7c9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do I calculate sunrise/sunset times for non-Earth planets? [Archive] - Cartographers' Guild
07-15-2012, 01:53 AM
I am trying to create a world, and I don't like to do things halfway. So I am starting from scratch: new star, new planet. I plan to keep them similar to Earth, but not exactly, which means I can't
just use Earth data for sunrise/sunset and moonrise/moonset times, like I could find here: http://aa.usno.navy.mil/data/docs/RS_OneYear.php.
What I am hoping to get is some sort of algorithm that would allow me to calculate these times for any planet, hopefully using custom time scales.
Input variables would include:
velocity of revolution
rotational velocity
orbit as defined by Keplerian orbital elements (length of year being calculated from ellipse size and velocity of revolution.)
axial tilt
distance at aphelion and perihelion (Those are probably already defined by the Keplerian orbit. Is that correct?)
planet radius (Does this matter, or is rotational velocity the only relevant input for day length?)
Time unit length (As in, having a planet where the inhabitants' hour is not the same length of time as our hour. In other words, I would like to use custom time units, and define day length in terms
of them.)
probably more because I don't understand enough
I would also like to be able to calculate the points of sunrise/sunset and the arc that the sun would take through the sky.
Does anyone know of a program that would allow me to do this, and hopefully create tables such as the one in the link I posted? Or perhaps a planetary simulation program that would at least allow me
to account for everything except point 7? I'm willing to be flexible on that, because it's likely more trouble than it's worth. Or perhaps there is someone on this forum with enough knowledge in
astrophysics and programming to create such a program (unlikely, but I thought I'd ask)? Even a set of equations that would require manual solving would be a start.
Or am I mad to even consider attempting this level of detail for the sake of a fictional world?
Oh, and did I mention that it's in a binary star system, so I have an extra star whose positioning over time I will also need to track?
Anyway, hope my request is clear. Please ask for any clarification you need.
|
{"url":"http://www.cartographersguild.com/archive/index.php/t-19399.html","timestamp":"2014-04-16T14:31:50Z","content_type":null,"content_length":"22976","record_id":"<urn:uuid:f9c003d4-547b-43e2-a2af-e16d83aceec7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Population Dynamics - Part 2
Copyright © University of Cambridge. All rights reserved.
Discrete Modelling
We often use discrete mathematics to model a population when time is modelled in discrete steps. This fits well with annual censuses of wildlife populations.
Sometimes populations are themselves discrete, such as:
• Species with non-overlapping generations (eg. annual plants)
• Species with pulsed reproductions (eg. many wildlife species in seasonal environments)
Winter cress is a winter annual plant.
Geometric Growth
The population equation, $N_{t+1}=\lambda N_t$ , from before means that over discrete intervals of time,$t_0, t_1, t_2, ...$, the rate of change in population size is proportional to the size of the
We first solve this equation: $$\begin{align*} N_{t+1}&=\lambda N_t \\ &=\lambda \lambda N_{t-1} \\& =...\\ &= \lambda^{t+1} N_0 \\ \Rightarrow N_t &=\lambda^t N_0 \end{align*}$$ The population size
will depend on the value of $\lambda$
• If $\lambda> 1$ then exponential increase
• If $\lambda=1$ then stationary population
• If $\lambda< 1$ then exponential decrease
Question: If a population of owls increases by 40% in a year, what is the value of r and $\lambda$ ?
Given there were initially 10 owls, what will the population size be in 75 days? Can you plot this population growth?
Exponential Growth
Some populations may grow continuously, without pulsed births and deaths (eg. humans). In these cases, time is a continuous smooth curve, so we use differential equations to represent this continuous
Using our discrete model from above: $$\begin{align*} N_{t+\Delta t}&=\lambda^{\Delta t} N_t =(1+r)^{\Delta t}N(t)\approx (1+r\Delta t) N(t)\\ \Rightarrow \Delta N_t&\approx r \Delta t N_t \\ \\\
Rightarrow \lim_{\Delta t \to 0} \frac{\Delta N(t)}{\Delta t} &=\frac {\mathrm{d}N(t)}{\mathrm{d}t}=rN(t) \end{align*}$$ Question: Solve the equation, $\frac {\mathrm{d}N(t)}{\mathrm{d}t}=rN(t)$ ,
using standard integrals, showing that the solution is $N(t)=N_0e^{rt}$.
Different values of r determine the change in population size, as shown below.
Also note the connection between the discrete and continous solutions: $$\begin{align*} N_t =\lambda^t N_0 &\text{ and } N(t)=N_0 e^{rt} \\ \Rightarrow \lambda^t&=e^{rt} \\ \lambda&=e^r \\ \ln(\
lambda)&=r \end{align*}$$ Question: Using the discrete model above, how long does it take for this population to double in size? What about the continous case?
Limitations of the Models
Consider a population of insects which suddenly dies out right before the start of every time period, and whose children hatch right after. A discrete model would lead us to believe that there are no
insects during the entire period, so instead we should use a continuous model.
On the other hand, it is often impossible to continually monitor the population size, so we approximate using the discrete case.
Choosing which of discrete or continuous to use is an important decision in modelling populations.
Can you also think of any assumptions we have made with these models, and why they could be a problem? Consider the environment the population inhabits and differences between members of the
Click here to see the geometric model adapted to include environmental resistance.
Question: If $\lambda = 1.25$, by how much does a population of blue footed boobies increase per year?
The population N(t) of blue footed boobies is assumed to satisfy the logistic growth equation $\frac {\mathrm{d}N}{\mathrm{d}t}=\frac{1}{500} N(t) \big( 1-N(t)\big)$ . Given $N_0=200$, solve for N
(t). Repeat for $N_0=2000$. Discuss the long-term behaviour of the population in both cases.
|
{"url":"http://nrich.maths.org/7104/index?nomenu=1","timestamp":"2014-04-16T04:59:23Z","content_type":null,"content_length":"7582","record_id":"<urn:uuid:e244f56a-6900-429e-877b-352de4d84b72>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The minimum number of photons per second?
1. The problem statement, all variables and given/known data
An owl has good night vision because its eyes can detect a light intensity as small as 4.5 10-13 W/m2. What is the minimum number of photons per second that an owl eye can detect if its pupil has a
diameter of 7.5 mm and the light has a wavelength of 503 nm?
2. Relevant equations
I'm not really sure how to begin. Can you help me solve this step by step?
Photons per sec= Energy per sec/Energy per photon ??
3. The attempt at a solution
|
{"url":"http://www.physicsforums.com/showthread.php?p=2728439","timestamp":"2014-04-16T10:33:38Z","content_type":null,"content_length":"23448","record_id":"<urn:uuid:b3022bbd-dcc6-488e-8d76-b9968369fccb>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
|
an investment
September 6th 2010, 02:19 PM #1
Sep 2009
an investment
an investment yields an annual interest of $1500. If $500 more is invested and the rate is 2% less, the annual interest is $1300. What is the amount of investment and the rate of interest?
Sounds like a job for some simultaneous equations.
Given A is the initial amount invested, r is the rate then
Maybe use
(1+r)A= 1500+A ...(1)
(A+500)(1+(r-2))=1300+(A+500) ...(2)
Leaves you with some algebra.
I may be over complicating this one, it is early morning here!
Hello, aeroflix!
An investment yields an annual interest of $1500.
If $500 more is invested and the rate is 2% less, the annual interest is $1300.
What is the amount of investment and the rate of interest?
Let $x$ = amount invested.
Let $r$ = rate of interest.
We are told that $x$ dollars at $r$ percent yields $1500.
. . Equation: . $rx \:=\:1500$
We are told that $(x+500)$ dollars at $(r-0.02)$ percent yields $1300.
. . Equation: . $(x+500)(r-0.02) \:=\:1300$
Now solve the system of equations.
You will get a quadratic equation
and must discard one of the roots.
September 6th 2010, 02:40 PM #2
September 6th 2010, 02:45 PM #3
Super Member
May 2006
Lexington, MA (USA)
September 8th 2010, 12:20 AM #4
Sep 2009
|
{"url":"http://mathhelpforum.com/algebra/155390-investment.html","timestamp":"2014-04-17T19:51:22Z","content_type":null,"content_length":"40065","record_id":"<urn:uuid:3260f12b-f066-4180-b9dc-b3476ecaa53d>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
compounded interest
December 15th 2007, 08:38 AM
compounded interest
Suppose you have a choice of keeping $500 for five years in a savings account with a 2% interest rate, or in a five year certificate of deposit with and interest rate of 4.5%. Calculate how much
interest you would earn with each option over five years time with continuous compounding.
for the first part, would it be A(5)=500*e(.02*5/100)
that doesnt seem right to me, but im not sure what else to do.
December 15th 2007, 08:45 AM
Suppose you have a choice of keeping $500 for five years in a savings account with a 2% interest rate, or in a five year certificate of deposit with and interest rate of 4.5%. Calculate how much
interest you would earn with each option over five years time with continuous compounding.
for the first part, would it be A(5)=500*e(.02*5/100)
that doesnt seem right to me, but im not sure what else to do.
Check what you are doing and your arithmetic. The formula is wrong, and if
that is what you evaluated you did it wrong, nor is it the result of evaluating
the correct formula.
December 15th 2007, 08:49 AM
what formula should i be using?
December 15th 2007, 08:53 AM
|
{"url":"http://mathhelpforum.com/business-math/24909-compounded-interest-print.html","timestamp":"2014-04-20T21:15:06Z","content_type":null,"content_length":"5807","record_id":"<urn:uuid:0b07b561-fc69-48cf-8370-0942a5f18ad5>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
|
green theorem or stokes????
June 27th 2013, 05:28 PM #1
May 2013
green theorem or stokes????
Hi,I dont think this is the normal integral.
Please help with approach.thanks.i get confused btwn greens and stokes which is it?
∫(x²-2xy)dx+(x²y+3)dy where c is boundary of region defined by y^2=8x and x=2 traversed in counterclockwise direction.
Re: green theorem or stokes????
Hey n22.
Since this is a line integral, you need to parameterize the curve first and then use the results of vector calculus for line integrals:
Vector calculus identities - Wikipedia, the free encyclopedia
Re: green theorem or stokes????
Green's theorem is for plane areas while stokes is for general surface, which make green's theorem a special case of stokes'theorem. both relates the surface integral of functions to line
integral of the line that encompasses that surface.
June 27th 2013, 08:22 PM #2
MHF Contributor
Sep 2012
June 28th 2013, 03:31 AM #3
Junior Member
May 2013
|
{"url":"http://mathhelpforum.com/calculus/220195-green-theorem-stokes.html","timestamp":"2014-04-17T14:00:13Z","content_type":null,"content_length":"36218","record_id":"<urn:uuid:4d074fb2-4f8a-4966-a13e-5b14764e1557>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ben Lomond, CA Math Tutor
Find a Ben Lomond, CA Math Tutor
...I will teach all levels from middle school mathematics up to calculus. I have extensive experience teaching international baccalaureate mathematics. I'm flexible in my teaching style, and will
work with parents, schools and students to determine the format of tutoring most likely to bring them success.
11 Subjects: including algebra 1, algebra 2, calculus, chemistry
...Last I love to give an ending test to measure how much a student has improved. It also gives me additional missed questions to focus on. Struggling in math?
32 Subjects: including calculus, physics, statistics, ADD/ADHD
...I am patient, engaging and nonjudgmental. I believe that everyone has intelligence and am passionate about helping them find it. I look forward to working with you or your child, making what
was hard easy!Reading is fundamental to learning and succeeding in this world.
25 Subjects: including algebra 1, algebra 2, chemistry, reading
...Since high-level physics research requires in-depth knowledge of mathematics, I took 50 credit hours of math as an undergraduate student and minored in mathematics while pursuing my PhD
(equivalent to a Master’s degree in mathematics). When I was a ninth grader, I was a winner in high school math...
8 Subjects: including algebra 2, calculus, geometry, linear algebra
...From beginning to intermediate, I can help you learn everything you need to know to become a budding digital photographer or graphic designer. I was a proctor and gave orientations for the
GED, so I am very familiar with subject matter, requirements, and upcoming changes to the test. I have tut...
21 Subjects: including prealgebra, reading, English, writing
Related Ben Lomond, CA Tutors
Ben Lomond, CA Accounting Tutors
Ben Lomond, CA ACT Tutors
Ben Lomond, CA Algebra Tutors
Ben Lomond, CA Algebra 2 Tutors
Ben Lomond, CA Calculus Tutors
Ben Lomond, CA Geometry Tutors
Ben Lomond, CA Math Tutors
Ben Lomond, CA Prealgebra Tutors
Ben Lomond, CA Precalculus Tutors
Ben Lomond, CA SAT Tutors
Ben Lomond, CA SAT Math Tutors
Ben Lomond, CA Science Tutors
Ben Lomond, CA Statistics Tutors
Ben Lomond, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Ben_Lomond_CA_Math_tutors.php","timestamp":"2014-04-19T07:20:02Z","content_type":null,"content_length":"23811","record_id":"<urn:uuid:39ae4c44-b676-4c5c-85d3-36e0fa32c183>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
1. solve the following inequality. write your solution in interval notation. (x+3)/(x-3)<0 2. find all the zeros of the following polynomial. write the polynomial in factor form. f(x)=x^4+3x^2-40
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50a701e7e4b082f0b852e148","timestamp":"2014-04-16T08:05:03Z","content_type":null,"content_length":"35323","record_id":"<urn:uuid:850660b1-fc2a-4825-b670-fd746586d09f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Video from multiple graph exports
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Video from multiple graph exports
From Michael Hanson <mshanson@mac.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Video from multiple graph exports
Date Wed, 03 Dec 2008 21:06:23 -0500
I see. I had misinterpreted your earlier response to suggest that you had done something more fancy, but I suspect for some users these represent useful ideas to get started. I could imagine
extending this to examples such as integrating automatic PDF generation into an ado file, passing options to epstopdf via Stata, and/or engaging in batch processing of files from within Stata.
For my own curiosity, is the "start" command standard in the Windows equivalent of a shell? (I'm so used to working with Unix-based OSs, I'm not even sure if "shell" is the right word here.) Does it
simply launch the program associated with the file extension of the listed file -- in this case some pre-specified PDF reader?
On Dec 3, 2008, at 6:15 PM, Martin Weiss wrote:
Not too much magic there. What I meant was: I would not know how to tell Acrobat or other suggested PDF-makers to just turn my graph into a pdf file. Most of the other utilities are installed as
printers, so that would complicate matters for me...
sysuse auto, clear
sc mpg wei
cap erase mygr.eps
cap erase mygr.pdf
gr export mygr.eps
!epstopdf mygr.eps
!start mygr.pdf
----- Original Message ----- From: "Michael Hanson" <mshanson@mac.com>
To: <statalist@hsphsun2.harvard.edu>
Sent: Thursday, December 04, 2008 12:07 AM
Subject: Re: st: Video from multiple graph exports
Since epstopdf is available for the same three platforms as Stata (i.e., Mac, Unix, Windows), perhaps a few examples showing how you "talk to it from inside Stata" would be of general
interest. Indeed, without stepping on the feet of the Stata Journal editors, I think this would be a good topic for a Stata Tip in an upcoming issue of the SJ.
-- Mike
On Dec 3, 2008, at 5:58 PM, Martin Weiss wrote:
Well, epstopdf is free as well and I know how to talk to it from inside Stata. In contrast to ps2pdf, there are no white margins to take care of...
----- Original Message ----- From: "Paula Lackie" <plackie@carleton.edu>
To: <statalist@hsphsun2.harvard.edu>
Sent: Wednesday, December 03, 2008 11:53 PM
Subject: Re: st: Video from multiple graph exports
Any Mac can "print to PDF" and there are a number of options for getting a PC to print to PDF. One reliable free version is "PDFCreator"
----- Original Message -----
From: "Martin Weiss" <martin.weiss1@gmx.de>
To: statalist@hsphsun2.harvard.edu
Sent: Wednesday, December 3, 2008 3:20:01 PM GMT -06:00 US/ Canada Central
Subject: Re: st: Video from multiple graph exports
"Export the Stata graphs as PDF format. (A feature only available
in the Mac version, I believe.)"
In Windows, -gr export- as .eps and use !epstopdf from your MikTeX
distribution. Not much more effort than on the MAC...
----- Original Message ----- From: "Michael Hanson" <mshanson@mac.com>
To: <statalist@hsphsun2.harvard.edu>
Sent: Wednesday, December 03, 2008 10:13 PM
Subject: Re: st: Video from multiple graph exports
I have done something like this for several presentations -- although I typically prefer to use a remote to step through the "movie frames" rather than automate the transitions.
That way I can stop and comment on certain slides (the audience sees it as "freezing" the animation), or go
back to a specific slide to answer questions.
The caveat, given the details you have provided in your message: my process requires a Macintosh. Specifically, it requires use of Keynote,
Apple's presentation software available only for Mac OS X.
Briefly, in three steps (though I am happy to provide details if
1. Export the Stata graphs as PDF format. (A feature only available in
the Mac version, I believe.)
2. Place PDFs of graphs into individual slides in Keynote.
3. Add automatic timed transitions and effects as needed. You can set
transition times on a per-slide basis, as you inquired.
This whole process is very easy with Keynote, as it provides fine
controls for aligning the graphs and professional transitions between slides. Plus, since everything is done with PDF, you don't get those "jaggies" (i.e. pixelation) that often
afflict graphics in PowerPoint. Additionally, with Keynote you can export your presentation to QuickTime (.mov) or Flash (.swf) formats as a self- running, cross- platform file if
I suspect one could use LaTeX-based presentation tools (beamer? powerdot? prosper?) to accomplish the same thing, albeit with (much) more effort.
Hope this helps,
On Dec 3, 2008, at 2:49 PM, Dan Weitzenfeld wrote:
Hi Folks,
I'm considering making a movie using multiple Stata graphs, exported. E.g., for t=0,1,...n, graphing the data at each t, and then using a
slide-show program to stack the graphs in time order, creating a
"movie" illustrating how the data changes over time.
My questions:
1. Has anyone does this before, and if so, do you have
recommendations for the most flexible slide-show program?
Specifically, I'm wondering if there is a program that will allow for
variable intervals between slides (e.g., t=0, 1.5, 2, 2.2,....)
2. Is there a way to overlay a Stata graph on top of a .jpg file?
I've been doing this manually, using -spmap- to plot my
location-oriented data, exporting graphs as .emf/.wmf, ungrouping the
result in PowerPoint and aligning the .jpg overlay.
3. Am I trying to use Stata to do something it's not suited for? I know JMP can play movies from data, but I don't think the movies can
be exported, and, well, I'm partial to Stata.
Thanks in advance,
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-12/msg00188.html","timestamp":"2014-04-21T15:46:43Z","content_type":null,"content_length":"16517","record_id":"<urn:uuid:c96afd16-1453-48c7-ad23-1c640c9b892d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Casting Out Nines for 2nd Graders
Date: 07/05/2004 at 20:39:42
From: Donna
Subject: Casting Out Nines
Dear Dr. Math,
Can you please send me a simpler explanation of WHY the Casting Out
Nines method works than you have in your archives? I understand how
to do it, but I need to know why it works. I have to present this
method so that second graders can understand why it works.
Date: 07/05/2004 at 22:58:49
From: Doctor Peterson
Subject: Re: Casting Out Nines
Hi, Donna.
I've tried a number of times to find a good way to prove the method
at an elementary level, and found that any really clear explanation
requires some knowledge of modular arithmetic and algebra. Without
those ideas, there's just too much work needed to work around them.
But that's talking mostly about proofs. Second graders wouldn't
appreciate an actual proof anyway, so we can just look for a
plausibile explanation to show HOW it works -- what's going on
behind the scenes. That may be easier to handle, though we'll have
to keep it extremely simple.
The basic idea, leaving out all the details, is this: Adding the
digits of a number decreases it by a multiple of 9, so repeating the
process until you have a single digit leaves you with the remainder
after division by 9 (or with 9 if that remainder is 0). (In advanced
terms, the digit sum is congruent to the original number modulo 9).
When numbers are added or multiplied, the remainder of the result is
the same as the sum or product of the remainders of the given
numbers. (In advanced terms, the sum and product are well-defined in
modular arithmetic; that is, they preserve congruence.) So if the
remainders on both sides of an equation are not equal, then neither
are the values of both sides themselves.
For second graders, we'll want to work with a specific example rather
than with generalities, and with addition rather than multiplication.
Take this sum as our example:
24 + 37 = 51
Is this correct? Well, we look at the left side, the sum. We add the
digits of 24 and get 6; we add the digits of 37 and get 10, then add
again to get 1. Now we add 6 and 1 to get 7. On the right side, we
add the digits of 51 to get 6; since this is not equal to 7, our
answer is wrong. (The right answer is 61, which does give a digit
sum of 7.)
Now, what is happening when we do that?
Instead of adding 24 + 37, we're adding 6 + 1. Now, 6 is 18 less than
24, and 1 is 36 less than 37, so their sum is 18 + 36 less than the
real sum. Do you see that 18 and 36 are both multiples of 9? That
means that the sum we get, 7, is some number of 9's less than the real
sum. This will always happen. [Why? Because replacing 20+4 with 2+4
takes away 20 and adds 2, which is the same as taking away 10 and
adding 1 twice. But that means taking away a multiple of 9.]
When we do the same thing to the 51, the sum of the digits, 5+1=6, is
45 less than 51 itself; so again we get a number that is some number
of 9's less than the actual number.
But that means that the numbers we get on the two sides should
themselves be a multiple of 9 apart. In fact, since they are both
single-digit numbers, they should be equal (unless one is 0 and the
other is 9). Since they are not, the numbers can't be right.
I'm not at all sure this will work for the average second grader; even
though I avoided going into detail about place value and remainders, I
had to bring in multiples of 9. It's hard to avoid something like
that, since that is what the technique is all about under the hood:
multiples, remainders, modular arithmetic.
Please let me know if you find a nice way to express these ideas for
that age.
- Doctor Peterson, The Math Forum
Date: 07/06/2004 at 19:10:02
From: Donna
Subject: Thank you (Casting Out Nines)
Thank you very much for the speedy reply. My presentation is
tomorrow. Maybe the other educators in my group will have a
suggestion for explaining it to second graders. I'll let you know if
they do. Again, thank you!!!
|
{"url":"http://mathforum.org/library/drmath/view/65299.html","timestamp":"2014-04-16T16:51:51Z","content_type":null,"content_length":"8931","record_id":"<urn:uuid:a69fcf3d-bf82-4864-978e-eddfbfe9dfb6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wallington Prealgebra Tutor
Find a Wallington Prealgebra Tutor
...I have experience tutoring in both of these subjects as well as SAT Math. I tutored a Junior in HS for the SAT Math section, a freshman in HS in all subjects, a 6th and 8th grader in Math, as
well as students at the elementary age (K-6) in all subjects. I played softball at the varsity level fo...
28 Subjects: including prealgebra, English, reading, geometry
...My tutoring strategies are much the same. I help students by going over techniques, maybe introduce new ways to do certain things. For the science classes, I find that flash cards, and games
designed to drill information to be helpful.
8 Subjects: including prealgebra, biology, algebra 1, algebra 2
...I believe my extensive schooling can now allow me to share my successes and knowledge with future scholars. As for my tutoring subjects, I am very confident in my ability to tutor: biology,
math, INCLUDING PSAT and SAT Math, and chemistry. I have taken intro to upper level chemistry courses, general biology, and up to calculus II in math.
14 Subjects: including prealgebra, chemistry, SAT math, ACT Math
...Personally, I received a 780 on the Math portion of the SAT. The SAT is less of an exam than a game challenging you to leave your comfort zone and think outside the box, and I work to establish
this kind of mindset with each of my students. My scores on the individual sections of the GRE were 169/170 for Quantitative Reasoning, 161/170 for Verbal Reasoning, and 5.5/6 for Analytical
33 Subjects: including prealgebra, physics, calculus, GRE
...I worked clinically in a hospital hematology laboratory for a year, and then went on to graduate school at Stony Brook University. I received my master's degree in 2013 as a physician assistant
and have since been working at New York Presbyterian. Having been in the science/medical fields I use math often.
11 Subjects: including prealgebra, algebra 1, algebra 2, precalculus
|
{"url":"http://www.purplemath.com/wallington_nj_prealgebra_tutors.php","timestamp":"2014-04-16T07:53:43Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:dc954ef4-2628-4aca-a421-189722a476eb>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Insertion Sort Algorithm in Descending Order
12-07-2011, 01:11 AM
Insertion Sort Algorithm in Descending Order
Below I have listed my code for insertion sort algorithm that sorts in ascending order. I cannot for the life of me figure out how to turn it into descending order. I'm sure it is simple, but I
need help. Thanks in advance.
// -----------------------------------------------------------------
// Sorts the specified array of objects using the insertion
// sort algorithm.
// -----------------------------------------------------------------
public static void insertionSort(Comparable[] list) {
for (int index = 1; index < list.length; index++) {
Comparable key = list[index];
int position = index;
// Shift larger values to the left
while (position > 0 && key.compareTo(list[position - 1]) < 0) {
list[position] = list[position - 1];
list[position] = key;
12-07-2011, 02:32 AM
Re: Insertion Sort Algorithm in Descending Order
Take a piece of paper and write down how the indexes work for this sort. Work out the logic for how this sort works.
Then think about how to work the sort in the opposite order.
Where would the indexes start and where would the elements be moved to.
12-07-2011, 02:39 AM
Re: Insertion Sort Algorithm in Descending Order
This is part of an input program. It works as simply as typing in how many integers there will be, and then, typing in the integers. The code that I listed above successfully sorts them in
ascending order. I have been working on switching things around for 4 hours this afternoon to no avail. That is why I resorted to posting. I have hit a wall.
12-07-2011, 02:43 AM
Re: Insertion Sort Algorithm in Descending Order
Have you worked out how the posted code works?
Can you write the logic in pseudo code to describe the steps the code takes?
What is there about it that sorts in ascending order? Where does the index start as it moves through the array?
Where and why are the elements moved as the index changes.
When you get that down and understood, reverse the logic and order to do it the other way.
12-07-2011, 02:50 AM
Re: Insertion Sort Algorithm in Descending Order
I am not sure if I understand full how this code works. I tried looking at it logically and reversing what I thought I understood, and it didn't work. This code was just given to me without
explanation which is why I'm having such a difficult time understanding it. Can you point me in the right directions with a more direct answer please?
12-07-2011, 03:03 AM
Re: Insertion Sort Algorithm in Descending Order
I am not sure if I understand full how this code works.
That is the first then you should work on.
You can't rewrite code without understanding what it does.
Use a piece of paper and a pencil, create a list of words or numbers to sort and work out how the code does its thing.
12-07-2011, 03:38 AM
Re: Insertion Sort Algorithm in Descending Order
I know what the code does that is not what I meant. I understand that it inserts each number input into its correct place in order. I just cannot figure out how to reverse this code to descend
it. I have tried as best as I can, and that is why I came here for help. Not being rude, but I am not looking for more riddles or questions to accompany it.
12-07-2011, 04:00 AM
Re: Insertion Sort Algorithm in Descending Order
I'm trying to suggest ways for you to figure out how the existing code works.
Can you list the steps in the sort algorithm? Have you used a piece of paper and a pencil to work out the logic?
If you can't do that then we'll have to work through the code to show you the what it does.
When you understand that, then you should be able to use the same kind of logic to work out how to change the sort order.
12-07-2011, 04:28 AM
Re: Insertion Sort Algorithm in Descending Order
I do not know what you mean about using paper and pencil to work out the logic. I am not a java programmer, and I will never be around this material again in 2 more weeks. I am better at Java
programming that a lot of my classmates, but my teachers reads powerpoints; that's it. I have not been taught a thing and have had to teach myself all semester. This is the first thing I do not
understand of the semester. I just came here to ask for help. I'm not trying to come off as a jerk, but I am just tired of this class and mostly its teacher.
12-07-2011, 12:48 PM
Re: Insertion Sort Algorithm in Descending Order
I do not know what you mean about using paper and pencil to work out the logic.
You draw a list of objects to be sorted and set pointers to the elements as per what the code does.
You redraw the list as the code moves objects in the list. You redraw the pointer as the code changes the values of the index variables.
12-07-2011, 04:13 PM
Re: Insertion Sort Algorithm in Descending Order
Never mind, I'll go somewhere else for help.
12-07-2011, 04:17 PM
Re: Insertion Sort Algorithm in Descending Order
You need to show a little effort. We don't write code for students. They are expected to write their own code.
We will help anyone to get their code to work, but it will be the student doing the coding.
12-07-2011, 08:27 PM
Re: Insertion Sort Algorithm in Descending Order
Just imagine a Comparable that, when you feed it two numbers to compare, it 'thinks' that it has to compare the negative of those numbers, i.e. you feed it a and b and the Comparable 'thinks' it
needs to compare -a and -b.
kind regards,
|
{"url":"http://www.java-forums.org/advanced-java/52266-insertion-sort-algorithm-descending-order-print.html","timestamp":"2014-04-16T22:18:50Z","content_type":null,"content_length":"13460","record_id":"<urn:uuid:b044be14-feb3-48f8-966f-fd5a3bf831e2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pacific Institute for the Mathematical Sciences - PIMS
Scientific General Events
• The Alberta Topology Seminar (ATS) retreat will be held at the Goldeye resort outside Nordegg, AB.
• This meeting focuses on unsolved problems in low-dimensional homotopy theory
and combinatorial group theory. These include Whitehead's Asphericity
Conjecture, the Andrews-Curtis Conjecture, Wall's Domination Problem in
dimension two, the relation gap problem, and the Eilenberg-Ganea Conjecture.
This seminar is in succession to similar conferences held in Luttach (Italy),
the Pacific Northwest, and Russia.
The workshop format emphasizes detailed discussion of ideas in progress and
collaboration of the participants, from advanced graduate students to
early-career researchers to senior experts.
Update 15 May 2009: The National
Science Foundation, Division of Mathematical Sciences, has awarded Portland
State University a grant for participant support (partial subsidies towards
travel expenses), and the Pacific Institute for the Mathematical Sciences has
granted supplementary support for the same purpose.
• PIMS is co-sponsoring a Canada/Korea Special Session on Algebraic Geometry and Topology, June 6-8, at the CMS Meeting in St. John's, Newfoundland.
• The Langlands programme dominates the representation theory of linear algebraic groups over local and global fields. The theme of this workshop is the exploration of some of the geometric ideas
appearing in recent advances in the Langlands programme, with an emphasis on results which apply to both function fields and number fields.
For more information, please visit the official website at
• The goal of the conference is to promote cross-disciplinary research in statistical methods in engineering, science and technology. This is to be interpreted broadly to cover a wide range of
application areas including environment, information and manufacturing sciences. The conference is intended to stimulate interactions among statisticians, researchers in the application areas,
and industrial practitioners. It will provide a forum where participants can describe current research, identify important problems and areas of application, and formulate future research
• The general topic of the conference was the theory and application of
discrete structures and its goal was to highlight the most salient
trends in the field, which has close links to such diverse areas as
cryptography, computer science, large-scale networks and biology. The
conference brought together researchers from the various disciplines
with which discrete and algorithmic mathematics interact.
Particular areas of interest were the following: graphs and digraphs, hypergraphs,
matroids, ordered sets, designs, coding theory, enumeration,
combinatorics of words, discrete optimization, discrete and
computational geometry, lattice point enumeration, combinatorial
algorithms, computational complexity, and applications of discrete and
algorithmic mathematics, including (but not limited to) web graphs,
computational biology, telecommunication networks, and information
• The goal of this workshop is to highlight emerging new topics in
spatial probability, and related areas where probabilistic ideas play
an important role. The emphasis is on types of questions and approaches
that are not largely based on existing techniques. We have invited
researchers with a variety of backgrounds, to help cross-fertilization
between areas. Some topics that the organizers have in mind are below,
although speakers have the freedom to choose what they find suitable.
Combinatorial optimization problems with a stochastic component are a source of
challenging open questions. Examples are the traveling salesman problem
on a random set of points, other optimal path problems on random data,
and allocation problems. Heuristics from statistical physics often
suggest conjectures, open to investigation. Interacting spatial
processes has been an active area since the 1970s, with interesting
connections to other fields of mathematics such as random permutations,
and PDEs. Such processes with non-local interactions have not been
studied extensively. Interesting examples of deterministic processes
have been found, where randomness plays a role in the analysis, such as
crystal growth or sandpiles. These types of connections between
deterministic and random are likely to be the source of intriguing
questions in the future.
|
{"url":"http://www.pims.math.ca/scientific/general-event/random-walks-random-environments?page=12","timestamp":"2014-04-16T10:13:41Z","content_type":null,"content_length":"30082","record_id":"<urn:uuid:71570170-2780-4091-8075-fec4350cf8b8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Induction proofs
October 14th 2009, 07:16 PM
Induction proofs
1. Give a sentence P(n) depending on a natural number n, such that P(1),P(2),...,P(99) are true but P(100) is false. Make the sentence as simple as possible.
2. Let <a> be a sequence satisfying a1=a2=1 and a_n = 1/2(a_n-1 + 2/(a_n-2)) for n greater than equal 2. Prove that 1 <= a_n <= 2 for all n in the natural numbers
October 14th 2009, 08:32 PM
1. Give a sentence P(n) depending on a natural number n, such that P(1),P(2),...,P(99) are true but P(100) is false. Make the sentence as simple as possible.
2. Let <a> be a sequence satisfying a1=a2=1 and a_n = 1/2(a_n-1 + 2/(a_n-2)) for n greater than equal 2. Prove that 1 <= a_n <= 2 for all n in the natural numbers
If you gave a little more thought to this stuff I think you could come up with pretty nice answers
1) what about n < 100 ? Is this a simple enough sentence?
2) is that a_n = (1/2)(a_(n-1) + 2/(a_(n-2))? Well, show or n =1 and then assume truth for n and show for n+1.
|
{"url":"http://mathhelpforum.com/advanced-algebra/108130-induction-proofs-print.html","timestamp":"2014-04-18T22:54:48Z","content_type":null,"content_length":"4626","record_id":"<urn:uuid:95d27a9d-9686-458c-bba6-275bdef7f03d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
T.: Vogt’s theorem on categories of homotopy coherent diagrams
Results 1 - 10 of 16
, 1996
"... this paper we try to lay some of the foundations of such a theory of categories `up to homotopy' or more exactly `up to coherent homotopies'. The method we use is based on earlier work on: ..."
Cited by 22 (6 self)
Add to MetaCart
this paper we try to lay some of the foundations of such a theory of categories `up to homotopy' or more exactly `up to coherent homotopies'. The method we use is based on earlier work on:
"... ..."
, 1993
"... ... L are simplicial sets, then there is a strong deformation retraction of the fundamental crossed complex of the cartesian product K \Theta L onto the tensor product of the fundamental crossed
complexes of K and L. This satisfies various side-conditions and associativity/interchange laws, as for t ..."
Cited by 15 (2 self)
Add to MetaCart
... L are simplicial sets, then there is a strong deformation retraction of the fundamental crossed complex of the cartesian product K \Theta L onto the tensor product of the fundamental crossed
complexes of K and L. This satisfies various side-conditions and associativity/interchange laws, as for the chain complex version. Given simplicial sets K 0 ; : : : ; K r , we discuss the r-cube of
homotopies induced on (K 0 \Theta : : : \Theta K r ) and show these form a coherent system. We introduce a definition of a double crossed complex, and of the associated total (or codiagonal) crossed
complex. We introduce a definition of homotopy colimits of diagrams of crossed complexes. We show that the homotopy colimit of crossed complexes can be expressed as the
"... Abstract. We extend the W-construction of Boardman and Vogt to operads of an arbitrary monoidal model category with suitable interval, and show that it provides a cofibrant resolution for
well-pointed Σ-cofibrant operads. The standard simplicial resolution of Godement as well as the cobar-bar chain ..."
Cited by 11 (9 self)
Add to MetaCart
Abstract. We extend the W-construction of Boardman and Vogt to operads of an arbitrary monoidal model category with suitable interval, and show that it provides a cofibrant resolution for
well-pointed Σ-cofibrant operads. The standard simplicial resolution of Godement as well as the cobar-bar chain resolution are shown to be particular instances of this generalised W-construction.
- IN CATEGORIES IN ALGEBRA, GEOMETRY AND MATHEMATICAL , 2006
"... We show that complete Segal spaces and Segal categories are Quillen equivalent to quasi-categories. ..."
- Indag. Math. (N.S , 1997
"... Abstract. The results of a previous paper on the equivariant homotopy theory of crossed complexes are generalised from the case of a discrete group to general topological groups. The principal
new ingredient necessary for this is an analysis of homotopy coherence theory for crossed complexes, using ..."
Cited by 10 (7 self)
Add to MetaCart
Abstract. The results of a previous paper on the equivariant homotopy theory of crossed complexes are generalised from the case of a discrete group to general topological groups. The principal new
ingredient necessary for this is an analysis of homotopy coherence theory for crossed complexes, using detailed results on the appropriate Eilenberg–Zilber theory, and of its relation to simplicial
homotopy coherence. Again, our results give information not just on the homotopy classification of certain equivariant maps, but also on the weak equivariant homotopy type of the corresponding
equivariant function spaces. Mathematics Subject Classifications (2001): 55P91, 55U10, 18G55. Key words: equivariant homotopy theory, classifying space, function space, crossed complex.
"... Abstract. We provide a general definition of higher homotopy operations, encompassing most known cases, including higher Massey and Whitehead products, and long Toda brackets. These operations
are defined in terms of the W-construction of Boardman and Vogt, applied to the appropriate diagram categor ..."
Cited by 6 (3 self)
Add to MetaCart
Abstract. We provide a general definition of higher homotopy operations, encompassing most known cases, including higher Massey and Whitehead products, and long Toda brackets. These operations are
defined in terms of the W-construction of Boardman and Vogt, applied to the appropriate diagram category; we also show how some classical families of polyhedra (including simplices, cubes,
associahedra, and permutahedra) arise in this way. 1.
, 1996
"... This is an unfinished explanation of the notion of “flexible sheaf”, that is a homotopical notion of sheaf of topological spaces (homotopy types) over a site. See “Homotopy over the complex
numbers and generalized de Rham cohomology ” (submitted to proceedings of the Taniguchi conference on vector b ..."
Cited by 3 (0 self)
Add to MetaCart
This is an unfinished explanation of the notion of “flexible sheaf”, that is a homotopical notion of sheaf of topological spaces (homotopy types) over a site. See “Homotopy over the complex numbers
and generalized de Rham cohomology ” (submitted to proceedings of the Taniguchi conference on vector bundles, preprint of Toulouse 3, and also available at my homepage 2) for a more detailed
introduction, and also for a further development of the ideas presented here. The present paper was finished in December 1993 while I was visiting MIT. Since writing this, I have realized that the
theory sketched here is essentially equivalent to Jardine-Illusie’s theory of presheaves of topological spaces (although they talk about presheaves of simplicial sets which is the same thing). This
is the point of view adopted in “Homotopy over the complex numbers and generalized de Rham cohomology”. 1 Added in August 1996: I am finally sending this to “Duke eprints ” because it now seems that
there may be several useful features of the treatment given here. First of all, the direct construction of the homotopy-sheafification (by doing a certain operation n+2 times) seems to be useful, for
example I need
, 2006
"... Abstract. In this paper we give a summary of the comparisons between different definitions of so-called (∞,1)-categories, which are considered to be models for ∞-categories whose n-morphisms are
all invertible for n> 1. They are also, from the viewpoint of homotopy theory, models for the homotopy th ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. In this paper we give a summary of the comparisons between different definitions of so-called (∞,1)-categories, which are considered to be models for ∞-categories whose n-morphisms are all
invertible for n> 1. They are also, from the viewpoint of homotopy theory, models for the homotopy theory of homotopy theories. The four different structures, all of which are equivalent, are
simplicial categories, Segal categories, complete Segal spaces, and quasi-categories. 1.
"... Abstract. We explain how higher homotopy operations, defined topologically, may be identified under mild assumptions with (the last of) the Dwyer-Kan-Smith cohomological obstructions to
rectifying homotopy-commutative diagrams. ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. We explain how higher homotopy operations, defined topologically, may be identified under mild assumptions with (the last of) the Dwyer-Kan-Smith cohomological obstructions to rectifying
homotopy-commutative diagrams.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1114611","timestamp":"2014-04-18T01:02:48Z","content_type":null,"content_length":"33199","record_id":"<urn:uuid:8af5e0b3-64b3-4a46-8722-13b8828391ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Sparse Coding Method for Specification Mining and Error Localization
Wenchao Li and Sanjit A. Seshia
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2011-163
December 25, 2011
Formal specifications play a central role in the design, verification, and debugging of systems. We consider the problem of mining specifications from simulation or execution traces of reactive
systems with a special focus on digital circuits. We propose a novel sparse coding method that can extract specifications in the form of a set of basis subtraces. For a set of finite subtraces each
of length p, we introduce the sparse Boolean basis problem as the decomposition of each subtrace into a Boolean combination of only a small number of basis subtraces of the same dimension. The
contributions of this paper are (1) we formally define the sparse Boolean basis problem and propose a graph-based algorithm to solve it; (2) we demonstrate that we can mine useful specifications
using our sparse coding method; (3) we show that the computed bases can be used to do simultaneous error localization and error explanation in a setting that is especially applicable to post-silicon
BibTeX citation:
Author = {Li, Wenchao and Seshia, Sanjit A.},
Title = {A Sparse Coding Method for Specification Mining and Error Localization},
Institution = {EECS Department, University of California, Berkeley},
Year = {2011},
Month = {Dec},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-163.html},
Number = {UCB/EECS-2011-163},
Abstract = {Formal specifications play a central role in the design, verification, and debugging of systems. We consider the problem of mining specifications from simulation or execution traces of reactive systems with a special focus on digital circuits. We propose a novel sparse coding method that can extract specifications in the form of a set of basis subtraces. For a set of finite subtraces each of length p, we introduce the sparse Boolean basis problem as the decomposition of each subtrace into a Boolean combination of only a small number of basis subtraces of the same dimension. The contributions of this paper are (1) we formally define the sparse Boolean basis problem and propose a graph-based algorithm to solve it; (2) we demonstrate that we can mine useful specifications using our sparse coding method; (3) we show that the computed bases can be used to do simultaneous error localization and error explanation in a setting that is especially applicable to post-silicon debugging.}
EndNote citation:
%0 Report
%A Li, Wenchao
%A Seshia, Sanjit A.
%T A Sparse Coding Method for Specification Mining and Error Localization
%I EECS Department, University of California, Berkeley
%D 2011
%8 December 25
%@ UCB/EECS-2011-163
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-163.html
%F Li:EECS-2011-163
|
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-163.html","timestamp":"2014-04-17T06:44:04Z","content_type":null,"content_length":"6689","record_id":"<urn:uuid:f82339ad-368e-4512-a6b3-8789f7e75555>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Classical Model
This page describes the Classical Model.
The Production Function and the Demand for Labor
The Production Function
In the classical production function, output Y is taken to be a function of capital K and labor N. (The notation for labor suggests the number of hours or the number of workers.) In the short run the
capital stock is taken to be fixed at K so that
Y = f(K,N).
Demand for Labor
The marginal product of labor is dY/dN = MPN, which should be a decreasing function of Y. A profit-maximizing firm should hire additional workers if P·MPN > W. At the margin P·MPN = W or,
equivalently, MPN = W/P. The MPN curve thus is the demand for labor.
The Supply of Labor
The supply of labor is an increasing function of the real wage rate.
Equilibrium in the supply and demand for labor as functions of the real wage rate determine the real wage rate and the quantity of labor hired N. The quantity of labor hired then determines, via the
production function, the level of output Y.
The graphic to the right also shows the effects of a negative technology shock. Output goes down for two reasons. First, the labor demand curve shifts to the left, lowering the equilibrium amount of
labor hired. Second, output for that amount of labor is lower because the production function has shifted downward.
Aggregate Supply and Demand
Equilibrium in aggregate supply and aggregate demand determines the price level P.
Aggregate Supply
Given that the level of output Y is already determined, the aggregate supply curve is vertical.
Aggregate Demand
The classical aggregate demand is based on M = k P Y, where k is a constant because the velocity of money (Veocity of Money, Wikipedia) is fixed.
Supply and Demand for Loanable Funds
Adding a supply and demand for loanable funds produces an equilibrium interest rate. This completes the Classical Model.
Note that the interest rate does not appear in any other part of the model. There is no mechanism by which changes in the interest rate (or investment) lead to changes in output.
The EconModel presentation for the Classical Model has the following features.
Building Blocks
The EconModel presentation explains the following curves:
Production Function
Labor Demand
Labor Supply
Aggregate Supply
Aggregate Demand
Loanable Funds Supply and Demand
The EconModel presentation analyzes the effects of changes in:
Technology/Productivity Shock
Taxes on Labor Income
Money Supply
Government Spending/Taxes
|
{"url":"http://econmacro.com/classical/classical_model.htm","timestamp":"2014-04-19T11:56:15Z","content_type":null,"content_length":"7168","record_id":"<urn:uuid:b59c33f9-ad9d-4cf1-b241-c8a7300d48bf>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Temperature determining device and process - Felice, Ralph A.
The present invention relates to a radiation pyrometer useful for the measurement of the temperature of a radiating body. More particularly, the present invention relates to a radiation pyrometer
that enhances the resolution and repeatability of the measured temperature of the radiating body. Additionally, the present invention relates to the technique utilized to enhance the resolution and
repeatability of the measured temperature.
Radiation pyrometers are known and commercially available. Typically, pyrometers are used to generate a measured temperature of a radiating body. The term "target" is used to indicate the radiating
body evaluated for temperature determination, and the term "measured temperature" is used to indicate the value generated by a pyrometer or a pyrometric technique. The measured temperature may, or
may not, be the actual temperature of the target.
Pyrometers are particularly useful for measuring target temperatures when the target is positioned in a remote location, or when the temperature or environment near the target is too hostile or
severe to permit temperature measurement by other, more conventional, means or when the act of measuring in a contact manner may itself perturb the target temperature. The terms "measuring" and
"measure" are use to include all aspects of a pyrometric technique including, but not limited to, energy collection, correlation, data manipulation, the report of the measured temperature, and the
Current pyrometers are one of two types: brightness or ratio devices. Brightness and ratio pyrometers both utilize a solution of a form of the Planck Radiation Equation to calculate the target's
measured temperature. The Planck Radiation Equation for spectral radiation emitted from an ideal blackbody is ##EQU1## where L[]λ =radiance in energy per unit area per unit time per steradian per
unit wavelength interval,
h=Planck's constant,
c=the speed of light,
λ=the wavelength of the radiation,
k[B] =Boltzmann's constant, and
T=the absolute temperature.
For non-blackbodies, ##EQU2## where H[]λ =the radiation emitted, and ε=emissivity. In the brightness method of pyrometry, H[]λ and ε are measured at a known wavelength, λ, and, therefore, T can be
Brightness devices rely upon capturing a known fraction of the radiation from a source in a particular solid angle. Brightness pyrometers known in the prior art are dependent upon knowing the
emissivity of the target, as required by Equation 2, supra. Emissivity is the ratio of the radiation emitted by the target to the radiation emitted by a perfect blackbody radiator at the same
temperature. Typically, emissivity is unknown or estimated to a low degree of accuracy. Additionally, the emissivity is often a function of the target temperature, wavelength of radiation examined,
and history of the target. This limits the utility of brightness pyrometry severely.
In practice, it is left to the user of a brightness pyrometer to estimate target emissivity, usually based upon an analysis of the target's composition. The user must then determine if the target's
thermal and environmental history have not appreciably altered the target emissivity. The wavelength or group of contiguous wavelengths of radiation examined are determined by the instrument used,
and no selection is possible. It is then left to the user to decide whether or not the indicated target temperature is correct.
Brightness pyrometers are also susceptible to effects of the environment. The gases given off by the target or otherwise present in the atmosphere can selectively absorb radiation at various
wavelengths, thus altering the energy reaching the pyrometer and hence the measured temperature. Again, current instruments give no guidance or assistance to the user in surmounting this obstacle.
Ratio pyrometers depend upon graybody behavior. A graybody is an energy radiator which has a blackbody energy distribution reduced by a constant, known as the emissivity, throughout the wavelength
interval examined. Ratio pyrometers detect the radiation intensity at two known wavelengths and, utilizing Planck's Equation, calculate a temperature that correlates to the radiation intensity at the
two specified wavelengths.
One form of the Planck Radiation Equation useful for ratio pyrometry is expressed as ##EQU3## where T=absolute temperature; λ[i] =specific wavelength chosen;
C'=second radiation constant=hc/k[B] ;
R=ratio of radiation intensity at λ[1], to that at λ[2] ; and
K[i] =instrument response factor at each wavelength chosen.
Here the low-temperature, short-wavelength approximation has been made; i.e., e^hc/λk[B]^T -1! has been replaced with e^hc/λk[B]^T !.
Tradeoffs must be made in the design of ratio pyrometers, particularly in the wavelengths selected for inspection. Planck's Equation yields higher precision when the selected wavelengths are further
apart. However, broadly spaced wavelengths permit extreme errors of indicated temperatures for materials that do not exhibit true graybody behavior. In practice, the two distinct wavelengths are
typically chosen close together to minimize target emissivity variations, and the resulting diminution of accuracy accepted as a limitation of the pyrometric device.
Ratio devices are also affected by gaseous absorptions from the workpiece or environment. If a selective absorption occurs for either of the two wavelengths fixed by the instrument, the measured
temperature will be incorrect.
Both brightness and ratio devices are therefore critically dependent on target emissivity and atmospheric absorptions in the region under study.
There is another, more subtle error to which both brightness and ratio devices are prone. If the measuring device has a significant bandwidth at the wavelengths utilized, a simple emissivity
correction will not suffice for a target with spectral variation of emissivity. The emissivity correction is treated as a variable gain for both classes of devices (brightness and ratio), and is
therefore a linear correction. If the bandwidth is large the contribution from neighboring wavelengths of different emissivity will render the resulting radiation intensity variation with temperature
non-linear, since the Planck function is non-linear. This implies that there is no single emissivity correction for certain targets if the bandwidth is large. Furthermore, if any element in the
optical path has a spectral transmission dependence, the same error applies; no single gain factor can correct for such an optical element (e.g., a gaseous, absorbing atmosphere, a glass window or
lens, a mirror, etc.).
Experimenters have investigated multi-wavelength pyrometry for some time. G. A. Hornbeck (Temperature: Its Measurement and Control in Science and Industry, 3 (2), Reinhold, N.Y., 1962) described a
three-wavelength device that could measure temperatures independent of target emissivity if the emissivity variation was linear over the wavelengths examined. The works of Cashdollar and Hertzberg
(Temperature: Its Measurement and Control in Science and Industry, 5 453-463, American Institute of Physics, New York, 1982; U.S. Pat. No. 4,142,417) describe temperature measurement of particulate
matter and gas in coal dust explosions using six-wavelength and three-wavelength devices utilizing a least squares fit to Planck's Radiation Equation under the assumption that the particles are
essentially graybodies and that the dust cloud is optically thick.
Gardner et al. (High Temperature-High Pressures, 13, 459-466, 1981) consolidated the contents of a series of papers on the subject. Gardner extends the concept of Hornbeck as well as the work of Svet
(High Temperature-High Pressures, 11, 117-118, 1979), which indicated that emissivity could be modeled as linear over a range of wavelengths for a number of materials. Also of interest is a previous
publication by Gardner (High Temperature-High Pressures, 12, 699-705, 1980), which discusses coordinate spectral pairs of measured intensity and the associated wavelength. Differences between all
possible pair combinations are calculated, and the target emissivity estimated. Use of the emissivity with measured intensities permits calculation of the target temperature. The work of Andreic
(Applied Optics, Vol. 27, No. 19, 4073-4075, 1988) calculated the mean color temperature from many spectral pairs and determined that detector noise of only 1% would produce intolerable effects on
measurement accuracy. The references of Hornbeck, Cashdollar, Hertzberg, Gardner, Svet, and Andreic, discussed above, are incorporated herein by reference.
In contrast, the present invention measures the radiation intensity at numerous wavelengths of extremely narrow bandwidth to generate a large number of coordinated data pairs of primary data points,
fits the primary data points to a mathematical function, generates a statistically significant number of processed data points from the mathematical function, calculates an individual two-wavelength
temperature for several pairs of processed data points, inspects the results for internal consistency, and numerically averages the appropriate ensemble of individual two-wavelength temperatures to
generate the measured temperature. A data point is defined as a wavelength and its associated (spectral)intensity such that if each were substituted into Equation 1 a unique temperature would result.
A processed data point is a data point as described above except that the spectral intensity is generated by the invention's mathematical function. A pair of processed data points, hereafter known as
a generating pair, is required to generate a temperature by the use of Equation 3, the formula for ratio pyrometry.
Nothing in the prior art envisions generating a non-Planckian mathematical function to fit primary data points, the calculation of multiple processed data points, and the numerical averaging of the
multiple processed data points to generate a measured temperature of extreme accuracy and precision with an associated tolerance. In contrast to the limited capabilities of previous techniques, the
present invention has demonstrated an accuracy of measured temperature to ±5° C. at 2500° C., or ±0.15%, with a reproducibility of ±0.015%.
It also yields a tolerance--a measure of accuracy for the indicated temperature--which has never been offered before. It is an extremely useful feature, in that its result is that the user
immediately knows to what degree the measurement just made is to be relied upon. This is in stark contrast with prior practice. The accuracy of pyrometers is typically specified by their
manufacturers. This specification means that when the target is a blackbody (or possibly a graybody) and the environment does not interfere, the instrument will return a measurement of the specified
But measurements of real interest occur with targets and environments of unknown characteristics. The current invention detects whether the target or the environment are not well behaved. In the case
of the target this can mean exhibiting other than graybody behavior; in the case of the environment this might result from other than gray or neutral density absorption. In spite of such
deficiencies, the present invention extracts the correct temperature. The tolerance reported with the temperature indicates how successful that extraction was.
The present invention also has a unique advantage with respect to immunity from noise. As has been previously described, one reason to choose the wavelengths close together for ratio temperature
measurement is to eliminate the variation of emissivity as a contributing factor to the measurement error. The rationale is that if the wavelengths are close together the change in emissivity is
likely to be small. However, choosing the wavelengths close together maximizes the effect of noise. The magnitude of the noise generally remains constant throughout the spectrum. Choosing the
wavelengths close together insures that the intensity will not differ much between the two wavelengths, thus making the noise contribution a larger fraction of the measured signal.
The invention overcomes this problem by using the weight of the entire spectrum collected to fix each processed-intensity data point. Thus processed data points can be chosen arbitrarily close
together without magnifying the noise contribution. Observation and modeling show that the contribution of noise is actually less than that expected from evaluating the expression for error for the
extremes of wavelength measured. The error associated with any two wavelength/intensity pairs can be calculated using differential calculus if the error is small: ##EQU4## where dR=error in the
ratio, and R=ratio of intensities at two wavelengths.
Here the term dR/R can be replaced with the infinitesimal, ΔR/R, where ΔR is the error in the ratio, and similarly, dT/T can be replaced with ΔT/T where ΔT is the error in temperature. The equation
thus becomes: ##EQU5##
Equation 5 can be used to calculate the maximum expected error, which can be compared to the error actually observed. The observed error of the invention has uniformly been smaller than the
calculated value. Equation 5 further points out that the accuracy observed to date is not the limit of the accuracy that can be expected. The invention is calibrated according to a source of radiant
intensity, instead of a standard source of temperature. Therefore, if the total error in radiant intensity, ΔR/R, is reduced to 1%, the expected error at 2500° C. is ±0.10%.
If the target exhibits graybody behavior in any spectral region, it is also possible for the present invention to quantify the target emissivity in all regions. That is, the spectral emissivity for
the entire wavelength range of the data can be quantified once the temperature is known. Once quantified, changes in emissivity can identify changes in the target as a function of various external
effects (time, temperature, chemistry, etc.), as well as identify changes in the target environment, such as off-gassing, reactions, or material decomposition.
In addition, the choice of a source of radiance as the calibration standard extends the useful operating range of the present invention well above currently available temperature calibration
standards. Current pyrometers are calibrated by exposing their optical inputs to blackbody sources at the temperature desired and in some fashion (electrical or mechanical) forcing the output of the
pyrometer to agree with the blackbody temperature. The limit for such a direct temperature calibration is 3000° C., the highest temperature a blackbody source can currently attain reproducibly. The
invention described herein, by way of contrast, need only be calibrated by a source of radiant intensity (that is, a device whose emitted radiation is known as a function of wavelength, such as a
standard lamp) to yield accurate temperatures. There is no need to expose the invention to the range of desired temperatures for it to be capable of measuring that range, a feature not possible using
the prior art.
The present invention is a method to measure the temperature of a radiating body, and a device which utilizes the method.
The measurement of temperature is a problem in many process industries: aluminum, iron and steel, ceramics, cement, glass, and composites are a few examples. Non-contact, and therefore
non-perturbing, techniques of radiation pyrometry would be preferred but for the weakness that, as currently practiced, they require knowledge of the target's emissivity. This parameter is defined as
the ratio of the radiation emitted by the sample to that of a blackbody (ideal) radiator at the same temperature.
Unfortunately emissivity is a function of the target's composition, morphology, temperature, and mechanical and thermal histories, and of the wavelength at which the measurement is made. For some
materials, it changes while the temperature measurement is being made. Prior to the present invention, this central difficulty has proven so intractable that the growth of radiation pyrometry has
been stunted.
The effect of this difficulty is to preclude trustworthy temperature determination without allowance for emissivity within the measurement. The historically recommended method of accomplishing this
is to encase the experiment in a blackbody cavity, thereby allowing the radiation to come to thermal equilibrium. Clearly this is not a practical solution.
The commercially available technique of ratio, or two-color, pyrometry attempts another approach: canceling the emissivity by dividing two measurements of the radiation emitted and calculating the
temperature from this ratio. This works in principle but there is still no guarantee that the emissivity is constant at the wavelengths chosen. This concern is the basis for the instrument maker's
dilemma: whether to opt for emissivity cancellation or precision. Emissivity cancellation and precision are mutually exclusive in a ratio instrument, and the choice is signaled by the distance
between wavelengths. The closer the wavelengths the more likely the emissivities are to cancel; the farther apart the larger the magnitude of the resultant signal, and thus the greater the precision.
The present invention, which is suitable for measuring the temperature of any radiating body that is above ambient temperature, quantifies radiation intensity at multiple wavelengths, generates a
mathematical function to fit the primary data points, calculates multiple processed data points using the mathematical function, utilizes multiple pairs of the processed data points to calculate
individual two-wavelength temperature estimates, inspects the results for internal consistency, and numerically averages the estimates to generate a measured temperature of great accuracy and a
tolerance, which is a quantification of that accuracy. The invention also permits evaluation of the quality of the emission spectra being measured, and identifies whether the target exhibits true
graybody behavior and, if it does not, which portions of the spectra will generate erroneous measured temperatures.
The present invention's ability to quantify radiation intensities at multiple wavelengths with a single sensor minimizes temperature measurement errors due to variations between sensors. Removing
this source of intrinsic error permits statistical manipulation of the collected data to enhance the accuracy and reproducibility of the temperature measurement technique. Fitting the primary data
points to a mathematical function accommodates target deviations from true graybody behavior, as well as further minimizing the effects of thermal, detector, and instrument noise.
The present invention provides a process for measuring temperature, comprising quantifying the radiation intensity emitted by a radiating body at no less than 4 distinct wavelengths; generating a
mathematical function which correlates the radiation intensities to the corresponding wavelength at which the radiation intensity was quantified; and generating a temperature value utilizing Equation
3 and no less than two processed data points generated utilizing the mathematical function. The invention may also be practiced using three or more processed data points generated utilizing the
mathematical function. The invention also encompasses the use of only quantified radiation intensity which exhibits emission spectra consistent with known thermal radiation effects for generation of
the mathematical function. Data may be said to be consistent when the processed data points are computed at wavelengths where the fractional residuals of the quantified radiation intensity exhibit an
RMS value substantially equal to zero or where the quantified radiation intensity exhibits magnitudes of fractional residuals no less than -0.1 and no more than 0.1, preferably no less than -0.05 and
no more than 0.05, most preferably no less than -0.02 and no more than 0.02. The invention may also be used to determine the emissivity of the radiating body, as well as the absorption of the
intervening environment between the radiating body and the device utilized to quantify the radiation intensity of the body. Additionally, the chemical species present in the environment between the
radiating body and quantifying device may be identified and measured.
The invention also includes averaging the individual temperature values calculated utilizing Equation 3 and no less than three processed data points, and the determination of the tolerance of the
resulting temperature value by calculating the statistical variation of the temperature values calculated utilizing Equation 3 and no less than three generating pairs. One pertinent statistical
variation is the determination of the standard deviation of the average of the individual temperature values calculated.
The invention also encompasses a device, comprising an optical input system, a wavelength dispersion device, a radiation transducer, a means for generating a mathematical function to correlate the
radiation transducer output to the corresponding wavelengths of incident radiation; and a means for generating a temperature value utilizing Equation 3 and no less than two processed data points
generated utilizing the mathematical function, as well as all the other capabilities described herein.
The present invention thus provides a process and apparatus for temperature determination which exhibits improved accuracy, noise immunity, great adaptability to varied temperature measurement
situations, and unlimited high temperature response. In addition, the tolerance of the measured temperature is reported, temperature measurements are made independent of knowledge of the target
emissivity, and all corrections are made digitally (in a mathematical expression, leaving the hardware completely versatile). These features provide a method and device which are effective in
non-ideal, i.e., absorbing or reflecting, environments.
Other advantages will be set forth in the description which follows and will, in part, be obvious from the description, or may be learned by practice of the invention. The advantages of the invention
may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
FIG. 1 A conceptual schematic of the invention
FIG. 2 A graph of the data and of the mathematical function (an intermediate output of the invention) for 2000° C.
FIG. 3 Fractional residuals for the mathematical function of FIG. 2, with systematic variations less than 0.02
FIG. 4 An example of a graph of noisy data with a graph of the mathematical function generated by the invention superimposed
FIG. 5 A graph of the fractional residuals for a random noise test illustrating fractional residuals with an RMS value of zero or substantially zero
FIG. 6 Data collected by the invention and corrected for instrument response
FIG. 7 The data of FIG. 6 with 10% random noise added
FIG. 8 Spectral data collected by the invention in an off-gassing environment, corrected for instrument response
FIG. 9 Spectral data collected immediately after FIG. 8 but with the environmental off-gas purged away
FIG. 10 Absorption spectra of chemical species present in the target environment
The invention relates to a non-perturbing method for the measurement of elevated temperatures, and an apparatus to utilize the method.
The method of the instant invention includes the measurement of thermal radiation at multiple wavelengths, representing the measurements of thermal radiation by an analytical function, determining
the useful range of wavelengths used for thermal radiation measurement, and testing calculated temperatures based upon multiple pairs of measured thermal radiation for consensus. Additional steps of
calibrating the apparatus for system optical response, and for displaying the calculated consensus temperature, or activating a device based upon the calculated consensus temperature, are also
encompassed by the invention.
The apparatus of the invention is any device, or collection of devices, which is capable of separating thermal radiation into its spectral components, transducing the spectral components at three or
more wavelengths, generating an analytical function to represent the transduced radiation, determining the range of the analytical function where the transduced radiation is within a specified
tolerance, and calculating a consensus temperature based upon two or more points on the generated analytical function.
Reference will now be made to the preferred embodiment of the apparatus, an example of which is illustrated schematically in FIG. 1. As illustrated in the figure, the apparatus of the invention
comprises an optical input, an optical transducer, and computation means.
Several embodiments of the optical input system have been utilized. A preferred embodiment utilized a commercially available camera body, in this case a Nikon F3. Thus, any appropriate compatible
lens can be used. Specifically, a Tamron SP 28-135 mm F/4 zoom lens was utilized.
The back of the camera was modified to accept a fiber optic connector that held the end of a fiber optic cable at a location corresponding to both the center of the focusing reticule and the film
plane. Depicted as the Optical Transmission Line in FIG. 1, the fiber optic cable is used to couple the optical input system with the optical transducer.
The target could thus be observed through the camera viewfinder and the appropriate portion of the target brought into focus in the traditional manner. The camera shutter was then locked open to
transfer the incoming radiation to the pyrometer. The fiber optic cable was PCS 1000, a plastic-clad single-strand fiber with a 1-mm fused-silica core, manufactured by Quartz and Silice and available
from Quartz Products, P.O. Box 1347, 688 Somerset St., Plainfield, N.J. 07060. As is clear to one skilled in the art, numerous methods and devices capable of directing thermal radiation to the
transducer are encompassed by the invention, including but not limited to lenses, mirrors, prisms, graded-index fiber optics, holographic and replicated optical elements, electrical and magnetic
equivalents of lenses and mirrors, direct radiation, and the like.
The second end of the fiber optic cable terminated in a flat-field spectrograph which dispersed the light into its spectral components. The spectrograph used in the preferred embodiment, model CP-200
manufactured by Instruments SA (of 6 Olsen Ave., Edison, N.J. 08820-2419), was fitted with a concave holographic grating of either 75 or 200 lines/mm which provided dispersion of 0.9 or 0.6 nm,
respectively, when coupled with a model 1462 detector manufactured by EG&G Princeton Applied Research, P.O. Box 2565, Princeton, N.J. 08543-2565. The model 1462 detector is a linear diode array with
1024 elements on 25 μm spacing. An typical order-sorting blocking filter limits the spectrum to wavelengths longer than 400 or 500 nm.
The flat field spectrograph and linear diode array comprise the radiation transducer of the preferred embodiment. The present invention encompasses any means for transducing the spectral components
of the thermal radiation into a signal which may be used to generate an analytical function to represent the radiation. The transduced signal could be pneumatic, hydraulic, optical, acoustic or
gravimetric, but is more typically electrical. Other acceptable transducers include, but are not limited to, linear diode arrays, charge coupled devices, charge injection devices, infrared focal
plane arrays, multiple photocell arrays, and single element detectors equipped with multiple wavelength filters, absorbers, or optical systems capable of separating the spectral radiation.
In the preferred embodiment, the transducer generates an analog electrical signal, which is converted to an equivalent digital signal by a PARC Model 146 OMA A/D converter.
The digitized signal thus resulting quantifies the thermal radiation intensity at 1024 discrete wavelengths (collected simultaneously through a common optical system) and is stored numerically in a
computer file for post-processing.
Correction (intensity calibration) of the digitized data so that the discrete spectral intensities have the appropriate relative magnitude requires a system response curve. This is generated
separately by collecting data using a standard lamp as the target. The resulting system response curve provides correction through a matrix multiplication of subsequent measurements, and need not be
repeated unless apparatus components are reconfigured.
This calibration of the system was effected using a standard of spectral irradiance, such as an Eppley Laboratories 100 watt Quartz Iodine lamp. From Equation 5, ##EQU6## it can be seen that, for
typical values of temperature and wavelength, the error in temperature is significantly smaller than the error in the irradiance calibration. For example, if a 1% irradiance calibration were utilized
to calibrate a system at wavelengths 550 and 900 nanometers the resulting error in temperature at 1000 K. would be 0.1%, or 1 degree.
The corrected digitized data are then represented analytically by fitting these data to a mathematical function. It has been found that several non-Planckian mathematical expressions can represent
thermal radiation well: exponential and logarithmic functions, and polynomials of second, third, fourth, and higher orders. In the case of the quadratic and higher order polynomials the method of
orthogonal polynomials can be used. FIG. 2 shows a corrected data set and the fit of that set on the same axes.
If every combination of two wavelength intensities were used to calculate the target temperature, more than 500,000 calculations of temperature would be performed. While this can be easily done using
currently available microcomputers, it is neither necessary nor desirable. Better results are obtained when an analytical function is used to represent the data, and subsequent calculations use the
analytical form.
As described in Equation 1, above, a general statement of The Planck Radiation Equation for spectral radiation emitted from an ideal blackbody is ##EQU7## Defining the radiation constants C and C' by
the expressions C=2hc^2, and C'=hc/k[B],
Equation 1 can be manipulated to read ##EQU8## where the usual short wavelength assumption has been made. The temperature can then be calculated using the expression ##EQU9## where the ratio of
spectral intensities, ^L λ[1]^/L λ[2] is represented as R. This solution is the basis of all ratio, or two-color, pyrometry.
Differentiation of this expression to evaluate the error in the calculated temperature (dT/T) yields Equation 4, ##EQU10## The error in the calculated temperature is thus a product of three terms.
The first term, T/C', is fixed by the target temperature and the radiation constant. The third term, the uncertainty in the ratio of spectral intensities dR/R, is a function of the specific equipment
used to measure target spectral intensity. Inspection of Equation 4 indicates that the uncertainty in temperature, dT/T, is directly proportional to the second term, (λ[2] ×λ[1])/(λ[1] -λ[2]) which
is known as the effective wavelength.
Rearranging the expression for effective wavelength in Equation 4 leads to ##EQU11## where λ[2] <λ[1]. Inspection of this expression of the effective wavelength term indicates that the expression is
minimized where λ[2] is as small as possible, and λ[1] is as large as possible.
Use of Expert System Software
The use of specialized software, known generally as "expert system software" is applicable to the present invention. The expert system software performs, among other functions, the following:
Collects data
Corrects data for background and for instrument, environment, and target (if known) optical response
Discards obviously non-thermal data
Represents data by an analytical function
Determines the useful spectral range of the data
Tests the data for consensus temperature
a) Uses the consensus range to report the temperature and its tolerance
b) Reports that there is no consensus.
Thus, the invention provides a measured temperature and quantifies the accuracy of the result obtained by a statistical evaluation of the resultant suite of calculated temperatures.
The invention also identifies those situations when the process and apparatus of the invention are unsuccessful. This typically means that some environmental parameter is perturbing the data. In this
event, suitable optics can be utilized, due to the extreme flexibility of the apparatus, to selectively filter, remove or compensate for the perturbing effect. Additionally, portions of the emission
spectra that exhibit behavior inconsistent with known thermal radiation effects can be excised from the evaluated data set, and erroneous measurements based upon inconsistent segments of the
evaluated spectra can be avoided.
FIG. 6 depicts a collection of raw emissivity data points, and clearly shows absorption bands at 590 nm, 670 nm and at 770 nm. These excursions are non-thermal, systematic errors. Although the
present invention minimizes the effect of such excursions, excising the non-thermal data or selecting intensity values from portions of the data not affected by the non-thermal error can enhance the
quality of the temperature determination and increase the accuracy of the measurement.
The invention may also maintain a database of previous temperature measurements for a specific target. Subsequent temperature measurements of the same or similar targets may be compared to the
software's database values to provide an internal check of the data. Emissivity/wavelength relationships, in particular, may be thus critically evaluated.
Except for the collection of raw data points, generating a mathematical function to fit the data points, the calculation of individual two-wavelength calculated temperatures, the numerical averaging
of the individual two-wavelength calculated temperatures to generate a measured temperature, and the discarding of values not meeting the statistical criteria chosen, the specifics related to
measuring target temperatures are not, however, critical to the present invention.
EXAMPLE 1
A series of temperature measurements were made using two commercially available NIST-traceable blackbodies as targets. The high temperature source was Model BWS156A (Electro Optical Industries Inc.,
P.O. Box 3770, Santa Barbara, Calif. 93130), covering the range from 1000° C. to 3000° C. The low temperature source was Model 463/101C (Infrared Industries, 12151 Research Parkway, Orlando, Fla.
32826), covering the range from 100° C. to 1000° C. Blackbody setpoints from 600° C. to 3000° C. were evaluated using the invention. Table 1 is an example of such an evaluation. In general, two
measurements were made at each setpoint; these show the exceptional reproducibility of the invention.
The sequence of operation of the invention began with the collection of raw data. The optical input portion of the apparatus was positioned to permit the radiation emitted from the target to be
directed onto the sensor, and the spectral emissions were quantified at multiple wavelengths.
The first computational step was that the background was subtracted from the raw data. It had been collected in the same manner as the raw data, but without the target's radiation being presented to
the optical input. The background is typically electronic in nature (e.g., dark current) but may have a physical component: either reflections or emissions.
The next step was the correction of the data for instrument factors: i.e., transmittivity/reflectivity of every optical element in the collection and transmission path and adjustment for the various
responsivities of the individual detector elements. The corrected data was then fitted to a numerical expression, such as a polynomial of high-enough order (quadratic or higher), to adequately
represent the data. A cubic expression was determined to be adequate. An example of the fit of the numerical expression to raw data points from Example 1, an evaluation at 2000° C., is depicted in
FIG. 2.
The residuals (data values of intensity subtracted from corresponding values from fitted curve) are helpful in quantifying the accuracy of an evaluation. The fractional residuals (residual divided by
corresponding data value) from the 2000° C. fit selected above are depicted in FIG. 3. Inspection of FIG. 3 indicates that fractional residuals with a systematic error less than 0.02 may be found
between 500 nm and 800 nm. This boundary of ±0.02 has been found to be a useful criterion as to whether or not the data is well represented by the analytical function where systematic variations from
zero are seen in the fractional residuals. Where the fractional residuals show variations of a random nature, i.e., their RMS value is zero, there appears to be no upper limit to their magnitude for
good results to be obtained. Therefore, the portion of the data between 500 and 800 nm was selected as the useful range of the evaluation.
Another measure of the quality of the analytical representation of the data is the coefficient of determination. Coefficients of determination such as that shown in Table 6, greater than 0.99, are
often observed. While this indicates that the data are well-represented by the analytical function, the reverse is not true. For example, the coefficient of determination for Table 7 is 0.910.
The numerical expression that had been fitted to the data must then solved for 6-50 values of intensity of radiation for a series of wavelengths chosen incrementally. The increment is usually 25 or
50 nanometers, and the range over which they are chosen is determined by the temperature of the object. These are the pairs from which the temperatures are calculated. The number of individual
temperature values, N, is j items taken 2 at a time, [j] C[2] or ##EQU12## For this example, j=6, and 15 intensity pairs were used to generate 15 individual temperature values.
These values were then inspected for consensus; i.e., to see whether or not they yielded the same temperature. Since the entire spectrum is utilized in a systematic way, it is possible to determine
from this inspection which areas of the collected spectrum yield values which are in general agreement with each other. In this way absorptions and emissions from the optical environment as well as
non-graybody areas of the target spectrum can be eliminated, and the previous steps repeated until an acceptable consensus temperature is determined, or it is determined that the apparatus, as
configured, is not capable of generating a consensus temperature within the acceptable error tolerance.
The consensus temperature is judged worthy of reporting as the object temperature if a significant portion of the spectrum yields a consensus value which, when averaged, displays a standard deviation
within an acceptable tolerance range, typically of on the order of 0.25% of the absolute temperature. A significant body of experience using standards of known temperature as the objects to be
measured indicates that the standard deviation of the consensus temperature can be considered as the tolerance to which the temperature is known.
EXAMPLE 2-4
Multiple temperature evaluations were made in the manner described in Example 1 of blackbody targets at temperatures between 850° C. and 2500° C. The results are reported in Tables 2-4.
EXAMPLE 5
Evaluation of the error correcting capability of the invention was accomplished by intentionally injecting random error (noise) into both generated (artificial) and real data sets, but otherwise
practicing the algorithm of the invention as described above.
Tables of spectral intensities at various wavelengths for various temperatures were generated using Planck's law for a number of temperatures. These data then had varying amounts of error inserted
over their spectral ranges using a random number generator. Specifically, error of ±10% was added from 450-495 nm, ±5% for 496-517 nm, and ±2% for 518-800 nm. FIG. 4 depicts the resulting intensity/
wavelength curve for 2400 Kelvins and the fitted curve for the same region. FIG. 5 depicts the fractional residual values resulting from a cubic curve fit to the artificially noisy raw data. The
residual evaluation for this example clearly shows the noise added. The results of this and other artificial random noise tests are tabulated in Table 5. Inspection of this table shows the invention
returns a value closer to the temperature used to generate the uncorrupted spectra than that returned by simple multi-value averaging; the average error of the invention is less than half that of
simple averaging methods.
To extend the noise evaluation to real data, error was injected to real data sets selected randomly. FIG. 6 depicts the selected raw data corrected for instrument response. A total of 21 calculations
of temperature were made using points extracted from the fitted curve at values from 625 to 925 nm, in 50 nm increments (j=7; N=21). The reported temperature, shown as "Prediction Results" and a
tabulation of the 21 pairs is included as Table 6. An average temperature of 3160.0 Kelvins was generated, with a tolerance of ±10.3 Kelvins.
Random error was then added to the data of FIG. 6; a random number generator added a maximum error of ±10% to each value of intensity. FIG. 7 depicts the data with the error added. A cubic expression
was then fit to the corrupted data, and the same 21 pairs of intensities as in the original data were evaluated. As shown in Table 7, the present invention reported a temperature of 3172.7 ±23.2
Kelvins. The indicated temperature has changed by 12.2 Kelvins, and the measurement tolerance has increased 12.9 Kelvins. The difference in the temperature calculation has changed less than 0.4%
(12.2/3160) while the data has been corrupted by 10%. Moreover, the increase in the measurement tolerance is seen to match almost exactly the change in the reported temperature due to the injected
noise (do we need the quotes?). This shows that the tolerance identifies to the user the degree of error in the reported value.
Comparative Example 5(a)
The corrupted data of Example 5, i.e., the data shown in FIG. 7, were evaluated without fitting the data to a mathematical expression. The data point closest to the selected wavelength values
(624.4006 nm for "625") were chosen for the temperature calculation.
Table 8 shows that the temperature calculated in the manner of the prior art would change 162 Kelvins, to 3322.0 Kelvins, for a noise-induced error of greater than 5%. The measurement tolerance also
increased dramatically, to ±439 Kelvins, indicating that the temperature is no longer well known.
EXAMPLE 6
Example 6 illustrates both the ability of the invention to determine the temperature despite interference by absorbing gas and its ability to accurately determine temperatures much greater than 3000°
C. FIG. 8 shows spectral data collected from a target in an off-gassing environment with a minimal clearing flow of purge gas. Table 9 shows the temperature calculation performed by the invention for
this data. FIG. 9 shows a data collection immediately after that of FIG. 8 with all parameters held constant except for the purge, which had been increased by a factor of six. The absorbing gas had
been mostly cleared away and the only absorptions left are the narrow ones at 589 and 767 nanometers. Table 10 is the temperature calculation for these data. As can be seen, the temperatures
indicated by both calculations, 3526.7 Kelvins/3253.7° C. for Table 9 and 3519.4 Kelvins/3246.4° C. for Table 10, agree very well showing that the invention operates successfully in the presence of
absorbing gas. These also show that the invention is capable of functioning as described well above 3000° C./3273 Kelvins.
EXAMPLE 7
Example 7 illustrates the ability of the invention to provide identification of absorbing chemical species in the environment. A data ensemble such as that represented by the graph of FIG. 8 is the
starting point. The invention's output temperature is calculated as has been described. This value of temperature is then substituted into Equation 1 to generate a corresponding Planckian intensity
for every wavelength of the data set. The generated intensity is then normalized to the collected data at a point where no non-thermal effects are present (in this case at 800 nanometers). The
difference between these two sets of spectral intensities is then calculated, as in FIG. 10, and is the absorption spectrum of the chemical species present. These can be identified using standard
tables of chemical spectra. The net effect is that two unknowns, the temperature of the target and the chemical species of the intervening environment, have been quantified by one measurement.
Thus, it should be apparent to those skilled in the art that the subject invention accomplishes the objects set forth above. It is to be understood that the subject invention is not to be limited by
the examples set forth herein. These have been provided merely to demonstrate operability, and the selection of specific components and operating methodologies, if any, can be determined from the
total specification disclosure provided, without departing from the spirit of the invention herein disclosed and described, the scope of the invention including modifications and variations that fall
within the scope of the attached claims.
TABLE 1
All values in Degrees C. Temperature Setpoint Indicated Difference Tolerance
1600 1603.3 3.3 8.17
1603.3 3.3
1700 1700.3 .3 8.38
1700.4 .4 8.37
1800 1798.2 -1.8 7.92
1798.4 -1.6 7.92
1900 1897.8 -2.2 7.14
1897.8 -2.2 7.14
2000 2001.5 1.5 7.98
2001.5 1.5 8.07
2100 2106.1 6.1 6.45
2106.1 6.1 6.45
2200 2198.1 -1.9 6.96
2198.2 -1.8 6.90
TABLE 2
All temperatures in Deg C. Temperature Setpoint Indicated Difference Tolerance
850 855.3 5.3 15.21
851.4 1.4 14.73
852.6 2.6 14.76
857.9 7.9 15.54
848.2 -1.8 15.05
850.6 0.6 15.02
900 899.0 -1.0 6.71
898.3 -1.7 6.90
900.7 .7 6.36
898.2 -1.8 6.85
898.3 -1.7 7.43
1000 998.3 -1.7 2.90
1002.6 2.6 2.92
1000.5 .5 3.44
1002.4 2.4 2.86
1001.8 1.8 2.95
TABLE 3
All temperatures in Deg C. Temperature Setpoint Indicated Difference Tolerance
1300 1302.9 2.9 6.54
1400 1397.6 -2.4 10.2
1500 1501.4 1.4 11.8
1650 1649.9 -0.1 10.4
TABLE 4
All temperatures in Deg C. Temperature Setpoint Indicated Difference Tolerance
1600 1601.6 1.6 5.79
1800 1797.2 -2.8 6.29
1797.2 -2.8 6.24
2000 1999.0 -1.0 5.05
1999.0 -1.0 5.05
2200 2195.6 -4.4 3.80
2195.7 -4.3 3.80
2300 2296.7 -3.3 1.21
2299.5* -0.5 5.96
2500 2494.9 -5.1 2.66
2498.0* -2.0 5.78
*Difference in repeatability is due to change in apertures between measurements.
TABLE 5
Random Noise Tests Generating Invention Difference Average Difference Temperature Temperature (Col B- Temperature (Col D- Column A Column B Col A) Column D Col A)
2250 2250.1 0.1 2254.3 4.3
2400 2398.5 -1.5 2397.9 -2.1
2500 2502.5 2.5 2504.6 4.6
2600 2601.5 1.5 2604.5 4.5
2700 2705.3 5.3 2708.3 8.3
TABLE 6
Prediction Results Temp = 3160.0 Tol = 10.3 N = 21 r^2 = .99063 675 725 775 825 875 925
Data File: f3213m2.dat
TABLE 7
Prediction Results Temp = 3172.7 Tol = 23.2 N = 21. r^2 = .91024 675 725 775 825 875 925
Data File: F3213M2R.TXT
TABLE 8
Prediction Results Temp = 3322.0 Tol = 439. N = 21. 675 725 775 825 875 925
Data File: F3213M2R.TXT
TABLE 9
Prediction Results Temp = 3526.7 Tol = 45.3 N = 28. r^2 = .98044 575 625 675 725 775 825 875
Data File:f3 220m2.und
TABLE 10
Prediction Results Temp = 3519.4 Tol = 20.9 N = 21. r^2 = .97263 600 650 700 750 800 850
DataFile: f3221m2.und
|
{"url":"http://www.freepatentsonline.com/5772323.html","timestamp":"2014-04-19T07:03:52Z","content_type":null,"content_length":"94410","record_id":"<urn:uuid:f32c22cf-2365-404d-95e5-96795f51300b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Green, CA SAT Math Tutor
Find a Green, CA SAT Math Tutor
...I worked for self-help and business guru Tony Robbins for 2 years and graduated from his 'Mastery University' one year training program. In addition, I worked for world class $500/hr business
coach Doc Barham for about a year. Finally, I have successfully started and run three small businesses:...
47 Subjects: including SAT math, reading, English, writing
...I like making teaching fun and explain in a way that the individual can understand. I worked in a math tutoring center where I tutored kids in first to twelfth grade math and have also had
experience with at home private tutoring. I am very reliable, enthusiastic about what I do, I take things very seriously, I work well unsupervised, I am self-motivated and am very patient.
6 Subjects: including SAT math, geometry, algebra 1, elementary (k-6th)
...I have taught this subject in the last 13+ years more times than I can remember, and I know which areas students struggle the most. I will teach Algebra 1 completely so this will become a
foundation so the student can succeed in Algebra 2, which builds upon the previous Algebra 1 knowledge. Als...
38 Subjects: including SAT math, reading, English, writing
...Since organization and logic skills are key to any essay's argument, this is an obvious fit to my interests. Warning: I will mark your papers up in red ink, my comments are not few or far
between. As an Undergraduate at UCLA, I worked for the Program in Computing Dept. in the computer lab where...
18 Subjects: including SAT math, English, writing, calculus
...I have studied microbiology extensively as part of my two years at medical school. I am familiar with gram negative vs gram postive bacteria (ex. strep vs. e coli) and I am also familiar with
categorizing bacteria according to shape (ex. cocci - strep, trepenoma pallidum - spirochete). I am also...
43 Subjects: including SAT math, English, writing, reading
Related Green, CA Tutors
Green, CA Accounting Tutors
Green, CA ACT Tutors
Green, CA Algebra Tutors
Green, CA Algebra 2 Tutors
Green, CA Calculus Tutors
Green, CA Geometry Tutors
Green, CA Math Tutors
Green, CA Prealgebra Tutors
Green, CA Precalculus Tutors
Green, CA SAT Tutors
Green, CA SAT Math Tutors
Green, CA Science Tutors
Green, CA Statistics Tutors
Green, CA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Broadway Manchester, CA SAT math Tutors
Cimarron, CA SAT math Tutors
Dockweiler, CA SAT math Tutors
Dowtown Carrier Annex, CA SAT math Tutors
Firestone Park, CA SAT math Tutors
Foy, CA SAT math Tutors
La Tijera, CA SAT math Tutors
Lafayette Square, LA SAT math Tutors
Miracle Mile, CA SAT math Tutors
Pico Heights, CA SAT math Tutors
Rimpau, CA SAT math Tutors
Wagner, CA SAT math Tutors
Westvern, CA SAT math Tutors
Wilcox, CA SAT math Tutors
Wilshire Park, LA SAT math Tutors
|
{"url":"http://www.purplemath.com/Green_CA_SAT_Math_tutors.php","timestamp":"2014-04-16T10:34:15Z","content_type":null,"content_length":"24187","record_id":"<urn:uuid:33496554-3249-4597-a545-07ddb225cfbb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting Rectanglar equation to polar equation
December 14th 2012, 05:35 PM #1
Junior Member
May 2012
Converting Rectanglar equation to polar equation
Dear Every one,
I need some help converting rectanglur implicit equation to polar equation:
9*(x^2+y^2)=(x^2+y^2-2y)^2. and I know the fact of r=\sqrt(x^2+y^2) and tan \theta=y/x, where x is not equal to 0.
Thank you for your help,
Re: Converting Rectanglar equation to polar equation
Hello, Cbarker1!
So you know that: . $x^2 + y^2 \:=\:r^2$
Do you also know this? . $\begin{Bmatrix}x &=& r\cos\theta \\ y &=& r\sin\theta\end{Bmatrix}$
$\text{Convert to polar form: }\: 9(x^2+y^2)\:=\:(x^2+y^2-2y)^2$
What's stopping you?
$9\underbrace{(x^2+y^2)}_{r^2} \;=\;(\underbrace{x^2+y^2}_{r^2} - 2\underbrace{y}_{r\sin\theta})^2$
. . . . . $9r^2 \;=\;(r^2 - 2r\sin\theta)^2$
December 14th 2012, 05:57 PM #2
Super Member
May 2006
Lexington, MA (USA)
|
{"url":"http://mathhelpforum.com/pre-calculus/209849-converting-rectanglar-equation-polar-equation.html","timestamp":"2014-04-18T17:45:43Z","content_type":null,"content_length":"34992","record_id":"<urn:uuid:28f4591e-22c4-4dcf-94ff-2bfe364b867b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: [SI-LIST] : Differential TDR "Measureme
RE: [SI-LIST] : Differential TDR "Measurements"
From: Farrokh Mottahedin (Farrokh.Mottahedin@quantum.com)
Date: Tue Apr 25 2000 - 14:48:42 PDT
Approach 2 only works if there is no coupling between the differential
signals. This would be the case if the pair members are sufficiently far
apart that the impedance to ground dominates each signal. For example,
design a board with trace geometries such that each trace to board impedance
is 50 ohms, and the traces are separated far apart. Then a differential
signal travelling down these traces sees 100 ohms signal to signal. Now,
because the traces are distant from each other, there is no common mode
noise rejection. Therefore, this is not an electrically desirable solution.
If the traces are close enough to cancel common mode noise, then they
interact, and the coupling must be measured. Perhaps the fab vendor is
using known geometries of trace spacing and size to estimate the
interaction, but then this is not a measurement.
Approach 1 works best on a differential test trace pair that are uniform and
sufficiently long for reflections to settle. Usually 10 cm is sufficiently
long enough to allow a 100 ps risetime TDR signal to settle. ( If tr = 100
ps, bw = .35 / 100 ps = 3.5 GHz, lambda = c / f = 8.5 cm). When
differential impedance is measured on actual routed traces, the stubs and
vias and even sharp bends affect the measurements. The pcb vendor will
fabricate the board so that the test trace impedance is correct, but actual
signal traces may be different. This happened to me with a fab vendor a few
years ago. They measured the test coupon, and I measured the signals.
SCSI SPI-3 standard uses Approach 1, measured either with a TDR or with a
network analyzer.
Farrokh Mottahedin
Quantum Corp.
500 McCarthy Blvd.
Milpitas, CA 95035
-----Original Message-----
From: Zabinski, Patrick J. [mailto:zabinski.patrick@mayo.edu]
Sent: Monday, April 24, 2000 5:45 PM
To: si-list@silab.eng.sun.com
Subject: [SI-LIST] : Differential TDR "Measurements"
We're working more and more with differential signals,
and subsequently dealing with more differential printed
circuit boards (PCBs). Over the past few years, we've
had difficulty with several PCB vendors
trying to obtain a controlled impedance 100 ohm
differential pair.
The problem generally boils down to "who's measurement
do we believe"? We measure one impedance, while the
PCB vendor measures another.
We've done some digging, and there appears to be two
approaches to measuring differential impedance, and I'd
like to hear what folks have to say about them.
Approach 1: inject two signals of opposite polarity,
one into the true and one into the complement. The
complement signal is substracted from the true, and
you read the impedance just like a single-ended
Approach 2: Inject one signal into the true trace and
record its signal. Then, inject a signal into the complement
trace and record its signal. Then, with the magic of
mathematics, compile these two different captured signals
into an effective differential measurement.
The equipment we have in-house uses Approach 1, while
nearly every board vendor we work with uses Approach 2.
Can anyone shed some light into the accuracies, sensitivities,
etc. of these two approaches? Are there cases where one
approach is better/worse than the other?
Pat Zabinski ph: 507-284-5936
Mayo Foundation fx: 507-284-9171
200 First Street SW zabinski.patrick@mayo.edu
Rochester, MN 55905 www.mayo.edu/sppdg/sppdg_home_page.html
**** To unsubscribe from si-list or si-list-digest: send e-mail to
majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE
si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
si-list archives are accessible at http://www.qsl.net/wb6tpu
**** To unsubscribe from si-list or si-list-digest: send e-mail to
majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE
si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
si-list archives are accessible at http://www.qsl.net/wb6tpu
This archive was generated by hypermail 2b29 : Wed Nov 22 2000 - 10:50:06 PST
|
{"url":"http://www.qsl.net/wb6tpu/si-list5/0047.html","timestamp":"2014-04-18T20:45:26Z","content_type":null,"content_length":"9388","record_id":"<urn:uuid:c0987a25-67c4-48be-af60-9a3a003d313f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|