49-XX Calculus of variations and optimal control; optimization [See also 34H05, 34K35, 65Kxx, 90Cxx, 93-XX]
49Qxx Manifolds [See also 58Exx]
49Q05 Minimal surfaces [See also 53A10, 58E12]
49Q10 Optimization of shapes other than minimal surfaces [See also 90C90]
49Q12 Sensitivity analysis
49Q15 Geometric measure and integration theory, integral and normal currents [See also 28A75, 32C30, 58A25, 58C35]
49Q20 Variational problems in a geometric measure-theoretic setting
49Q99 None of the above, but in this section
Post a reply
Topic review (newest first)
2005-12-15 11:55:51
Right again.
2005-12-15 11:07:11
3? Out of 3? Correct?
Lmao, that's crazy, I wouldn't have done it without you guys.
Let's suppose the last question was:
f(x) = 3e^x - ½ ln (x - 2) as opposed to f(x) = 3e^x - ½ ln x - 2
Would it then be..
(3e^x)' = 3e^x
(½ ln (x - 2))' = is that just ½ · 1/(x - 2)? Or, simplified, 1/(2x - 4)?
∴ ƒ'(x) = 3e^x - 1/(2x-4). Not quite so sure on that one, to be honest.
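A quick way to check both readings, if you have Python with sympy handy:

    import sympy as sp

    x = sp.symbols("x")
    f1 = 3*sp.exp(x) - sp.Rational(1, 2)*sp.log(x - 2)  # the ln(x - 2) reading
    f2 = 3*sp.exp(x) - sp.Rational(1, 2)*sp.log(x) - 2  # the ln x - 2 reading
    print(sp.diff(f1, x))  # 3*exp(x) - 1/(2*(x - 2)), i.e. 3e^x - 1/(2x - 4)
    print(sp.diff(f2, x))  # 3*exp(x) - 1/(2*x)

Both match the answers worked out above.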
2005-12-15 10:13:10
3/3, well done. They are all correct.
2005-12-15 09:48:43
Ok, and I'm back again.
ƒ(x) = 3 sin ² x + sec 2x
Ok, I'm not sure if this is correct, but I made that out to be:
(3 sin ² x)' = using chain rule, 3 [ 2 (sin x) (cos x) ] = 6 sin x cos x
(sec 2x)' = 2 sec (2x) tan (2x)
∴ ƒ'(x) = 6 sin x cos x + 2 sec (2x) tan (2x)
Please correct me if I'm wrong
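If you want a mechanical double-check (Python with sympy, just as a sanity check):

    import sympy as sp

    x = sp.symbols("x")
    print(sp.diff(3*sp.sin(x)**2 + sp.sec(2*x), x))
    # 6*sin(x)*cos(x) + 2*tan(2*x)*sec(2*x)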
Another question.
[x + ln(2x)] ³
I'm very unsure of whether this is correct, but I ended up with:
[x + ln(2x)]' = (1 + 1/x)
using the chain rule I ended up with:
ƒ'(x) = 3 [x + ln(2x)] ² (1 + 1/x)
Ok, looking back on that, it looks very wrong. So corrections would never go amiss.
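It isn't wrong, actually; sympy agrees (another quick check, assuming Python with sympy):

    import sympy as sp

    x = sp.symbols("x", positive=True)
    print(sp.diff((x + sp.log(2*x))**3, x))
    # 3*(x + log(2*x))**2*(1 + 1/x), up to how sympy orders the factors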
Final One:
f(x) = 3e^x - ½ ln x - 2
Hmm... let's begin
(3e^x)' = 3e^x, right?
(½ ln x)' = ½ · 1/x = 1/(2x), right?
(2)' = 0
∴ ƒ'(x) = 3e^x - 1/(2x).
Hmmm.. please let me know how wrong I am on these, lol. Only way to improve, eh?
Thanks again guys, as I've said before, really appreciate what you guys do, I'm surprised you don't charge!
2005-12-15 09:06:59
You don't have to use 't', that's just how we were taught. t is just a function of x, chosen to make the expression easier to differentiate.
2005-12-15 09:06:51
Oh guys I've just noticed, I haven't made myself too clear; I've written a x sin(x) when I meant more along the lines of (a)times[sin(x)]. How would that be differentiated?
Edit: Don't worry, I got it
2005-12-15 09:01:43
Wow, this is incredible. I like this community, such drive to offer a helping hand
Thanks guys!
2005-12-15 05:59:38
Indeed I be. Yarr.
ryos wrote:
(sin²(x))′ = sinxcosx + cosxsinx = 2sinxcosx (there's probably an identity to simplify that further, but I can never remember those darn things).
2 sinxcosx ≡ sin 2x
2005-12-15 05:15:03
Arrrgh, mathsy be a quick one.
2005-12-15 05:13:47
The chain rule is most convenient for this one. (If you haven't learned it, the chain rule is a rule for differentiating composed functions. If ƒ(x) = sinx and g(x) = ax, then sin(ax) = ƒ(g(x)). The
chain rule goes like this: (ƒ(g(x)))′ = ƒ′(g(x)) * g′(x).)
So. (sinx)′ = cosx, and (ax)′ = a. Plug these into the chain rule, and out pops a*cos(ax).
The same principle applies to the rest of the functions in that list. Another example is (ln(ax))′ = a/ax = 1/x.
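Here is the same calculation done symbolically (a short sketch in Python with sympy; the positive-symbol assumptions just keep the output tidy):

    import sympy as sp

    x, a = sp.symbols("x a", positive=True)
    print(sp.diff(sp.sin(a*x), x))  # a*cos(a*x)
    print(sp.diff(sp.log(a*x), x))  # 1/x, since a/(a*x) cancels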
Rewrite this one as (sinx)(sinx). You can then use the product rule ((ƒ(x)g(x))′ = ƒ(x)g′(x) + ƒ′(x)g(x)).
(sin²(x))′ = sinxcosx + cosxsinx = 2sinxcosx (there's probably an identity to simplify that further, but I can never remember those darn things).
Again, the same idea applies to the rest of the list.
Product rule again. ax*cosx + a*sinx.
Or did you mean to say a*sin(x)? That's just a*cos(x). Remember that (aƒ(x))′ = a(ƒ′(x))
a x sin(bx + c)
This one's an application of chain and product rules both.
sin(bx + c)′ = b*cos(bx + c)
(ax*sin(bx+c))′ = abx*cos(bx+c) + a*sin(bx+c)
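A symbolic check of those two formulas (Python with sympy):

    import sympy as sp

    x, a, b, c = sp.symbols("x a b c")
    print(sp.diff(sp.sin(b*x + c), x))      # b*cos(b*x + c)
    print(sp.diff(a*x*sp.sin(b*x + c), x))  # a*b*x*cos(b*x + c) + a*sin(b*x + c)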
I hope that helps you. If you were looking more for explanations of why the various rules works, there are some good proofs in my calculus book...
2005-12-15 04:59:52
Each of the sections use the same method, so I'll show you how to differentiate the first of each one.
f(x) = sin (ax)
For this one, you need to use the chain rule: dy/dx = dy/dt * dt/dx.
Substitute t = ax.
dy/dt = cos t, dt/dx = a, ∴ dy/dx = a cos t = a cos (ax)
You'll find that that result is the same for most of them, but not all, so be careful.
For example, ln (ax) differentiated is still 1/x.
I'll come back to sin² x, because it's trickier than the rest.
a sin x, by contrast, is easy. Just take out the constant a and differentiate as normal, before putting the constant back on.
d(a sin x)/dx = a * d(sin x)/dx = a cos x.
a sin (bx + c) is just a combination of the above two things, and you should get a result of ab cos (bx + c).
As I said, sin² x is a bit trickier. For things like this, you need the product rule.
d(uv)/dx, where u and v are functions of x, = u*dv/dx + v*du/dx.
In this case, u = v = sin x.
d(sin² x)/dx = 2*sinx*d(sinx)/dx = 2sin x cos x.
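The same two results, checked symbolically (Python with sympy):

    import sympy as sp

    x, a, b, c = sp.symbols("x a b c")
    print(sp.diff(a*sp.sin(b*x + c), x))  # a*b*cos(b*x + c)
    print(sp.diff(sp.sin(x)**2, x))       # 2*sin(x)*cos(x)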
I hope I helped.
2005-12-15 03:31:21
I've had a good look at all the existing topics based around this rather annoying subject, but nothing seems to satisfyingly relate to what I'm questioning. I don't quite understand how special
functions react to differentiation; well, I suppose I do for basic differentiation. I'm just really having problems with questions involving the differentiation of multiple functions that are
multiplied or squared.
There's a heck of a lot of questions I'm having problems with, and I'd much rather learn how to solve them as opposed to watching someone else work through it for me. So, I was thinking, would it be
possible if someone could fill out a list of how the functions are differentiated, maybe if I were to provide a structure? At least, I know this:
f(x) f'(x)
sin(x) cos(x)
cos(x) -sin(x)
tan(x) sec²(x)
sec(x) sec(x)tan(x)
cosec(x) -cosec(x)cot(x)
cot(x) -cosec²(x)
ln(x) 1/x
e^x e^x
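For what it's worth, the whole table can be verified in one go with sympy (cosec is called csc there, and sympy may print equivalent forms, e.g. tan(x)**2 + 1 instead of sec²(x)):

    import sympy as sp

    x = sp.symbols("x")
    for f in (sp.sin(x), sp.cos(x), sp.tan(x), sp.sec(x),
              sp.csc(x), sp.cot(x), sp.log(x), sp.exp(x)):
        print(f, "->", sp.diff(f, x))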
I know, it's not much at all, but I'm just quite unsure of the following:
f(x) f'(x)
a x sin(x)
a x cos(x)
a x tan(x)
a x sec(x)
a x cosec(x)
a x cot(x)
a x ln(x)
a x e^(x)
and finally, just as an example:
f(x) f'(x)
a x sin(bx + c)
a x cos(bx + c)
a x tan(bx + c)
a x sec(bx + c)
a x cosec(bx + c)
a x cot(bx + c)
a x ln(bx + c)
a x e^(bx + c)
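If it helps, the whole requested column follows one pattern, (a·f(bx + c))' = a·b·f'(bx + c), which a short sympy loop will print out (taking "a x f(...)" to mean a times f, as clarified earlier in the thread):

    import sympy as sp

    x, a, b, c = sp.symbols("x a b c")
    for f in (sp.sin, sp.cos, sp.tan, sp.sec, sp.csc, sp.cot, sp.log, sp.exp):
        print(f.__name__, "->", sp.diff(a*f(b*x + c), x))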
I'd be eternally grateful if someone could show me how the above are differentiated. I understand it's an extensive list, maybe excessive, so it's perfectly understandable if you don't have the time for me, but I'm just eager to get to grips with this, as to cease resorting to this community with similar questions. Thanks so much for your time, I really do appreciate it. David.
Deriving some inverse trig functions and some integrals
February 3rd 2010, 01:40 PM #1
Hi all!
I am having some trouble with a few problems I've been working on.
Find the derivatives of the function
$y = x^3\arctan(1-x) + \arcsin(2x)$
Step one was recognizing the product rule for $x^3\arctan(1-x)$
so I have $3x^2\arctan(1-x) + x^3\left(-\frac{1}{\sqrt{1-(1-x)^2}}\right) + \frac{2}{\sqrt{1-(2x)^2}}$
but I am not sure that it is right
Problem number two: Find the derivative of
$y = \tan(\arccos(x)) + \operatorname{arccsc}(3x^2)$
this one threw a monkey wrench at me right away, since the argument to arccos is only x and not a function of x. Even if it were more than just x, I'm still scratching my head on how to start this one
Problem three:
$\int \frac{4}{8 + 9x^2 - 6x}\,dx$
I figured to used the identity that $\int du / (a^2 + u^2) = (1 / a)arctan(u/a) + C$
So I then went to complete the square to give me what I wanted
I got $[(3x)^2 + 2(3x) + 1^2 ] -1$
so then my denominator turned into 7 + (3x+1)^2 which I figured was wrong. A little confused on this one.
Same for this last problem
$\int 5/(\sqrt{4-x^2+2x})$
any help would be appreciated!!!
1) d/dx arctan(1-x) = -1/(1 + (1-x)^2) = -1/(x^2 - 2x + 2)
2) tan(arccos(x)) = sqrt(1-x^2)/x; draw a right triangle with an angle whose cosine is x
d[sqrt(1-x^2)/x]/dx = -1/sqrt(1-x^2) - sqrt(1-x^2)/x^2
= -1/[x^2 sqrt(1-x^2)]
Even without this, d(tan(u))/dx = sec^2(u) du/dx
d(tan(arccos(x)))/dx = sec^2(arccos(x)) [-1/sqrt(1-x^2)]
and since sec(arccos(x)) = 1/x, this equals -1/[x^2 sqrt(1-x^2)]
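Both computations can be confirmed with sympy (up to equivalent forms):

    import sympy as sp

    x = sp.symbols("x")
    print(sp.diff(sp.atan(1 - x), x))                   # -1/((1 - x)**2 + 1)
    print(sp.simplify(sp.diff(sp.tan(sp.acos(x)), x)))  # -1/(x**2*sqrt(1 - x**2))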
for 3) 9x^2 - 6x + 8 = (3x-1)^2 + 7; let u = 3x-1 and note a = sqrt(7)
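And the integrals, checked the same way (sympy returns an equivalent form of (4/(3*sqrt(7)))*arctan((3x - 1)/sqrt(7)) for the first; the last one in the original post works out similarly, since 4 - x^2 + 2x = 5 - (x - 1)^2):

    import sympy as sp

    x = sp.symbols("x")
    print(sp.integrate(4/(9*x**2 - 6*x + 8), x))
    print(sp.integrate(5/sp.sqrt(4 - x**2 + 2*x), x))  # 5*asin(sqrt(5)*(x - 1)/5)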
February 3rd 2010, 02:25 PM #2
Results 1 - 10 of 18
- Journal of Artificial Intelligence Research , 2001
"... Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the
problems associated with policy degradation in value-function methods. In this paper we introduce � � , a si ..."
Cited by 153 (5 self)
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0, 1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.
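To make the shape of such an estimator concrete, here is a minimal, illustrative Python sketch of an eligibility-trace policy-gradient estimator in this spirit; the environment, the logistic policy, and all names are invented for the example and are not the paper's own pseudocode. The trace decay beta plays the bias-variance role described above, and only two parameter-sized vectors (z and Delta) are stored:

    import numpy as np

    def gradient_estimate(env_step, obs, theta, beta=0.9, T=100000, seed=0):
        rng = np.random.default_rng(seed)
        z = np.zeros_like(theta)      # eligibility trace
        Delta = np.zeros_like(theta)  # running gradient estimate
        for t in range(1, T + 1):
            p = 1.0 / (1.0 + np.exp(-theta @ obs))  # P(a=1 | obs), logistic policy
            a = 1 if rng.random() < p else 0
            z = beta * z + (a - p) * obs            # (a - p)*obs = grad log pi(a|obs)
            obs, r = env_step(obs, a, rng)          # next observation and reward
            Delta += (r * z - Delta) / t            # running average of r_t * z_t
        return Delta

    def toy_env(obs, a, rng):
        # toy world: reward 1 when the action matches the sign of the observation
        r = 1.0 if (a == 1) == (obs[0] > 0) else 0.0
        return np.array([rng.standard_normal()]), r

    print(gradient_estimate(toy_env, np.array([1.0]), np.zeros(1)))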
- National University , 1999
"... In [2] we introduced ¢¡¤£¦¥¨§¦¡, an algorithm for computing arbitrarily accurate approximations to the performance gradient of parameterized partially observable Markov decision processes (
¡©£¦¥¨§¦ ¡ s). The algorithm’s chief advantages are that it requires only a single sample path of the underly ..."
Cited by 63 (3 self)
In [2] we introduced GPOMDP, an algorithm for computing arbitrarily accurate approximations to the performance gradient of parameterized partially observable Markov decision processes (POMDPs). The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter β ∈ [0, 1) which has a natural interpretation in terms of bias-variance trade-off, and it requires no knowledge of the underlying state. In addition, the algorithm can be applied to infinite state, control and observation spaces.
- In Symposium on Principles of Programming Languages , 1999
"... ) Vineet Gupta Radha Jagadeesan Prakash Panangaden y vgupta@mail.arc.nasa.gov radha@cs.luc.edu prakash@cs.mcgill.ca Caelum Research Corporation Dept. of Math. and Computer Sciences School of
Computer Science NASA Ames Research Center Loyola University--Lake Shore Campus McGill University Moffe ..."
Cited by 29 (1 self)
This paper describes a stochastic concurrent constraint language for the description and programming of concurrent probabilistic systems. The language can be viewed both as a calculus for describing and reasoning about stochastic processes and as an executable language for simulating stochastic processes. In this language programs encode probability distributions over (potentially infinite) sets of objects. We
stochastic concurrent constraint language for the description and programming of concurrent probabilistic systems. The language can be viewed both as a calculus for describing and reasoning about
stochastic processes and as an executable language for simulating stochastic processes. In this language programs encode probability distributions over (potentially infinite) sets of objects. We
illustrate the subtleties that arise from the interaction of constraints, random choice and recursion. We describe operational semantics of these programs (programs are run by sampling random
choices), deno...
, 2009
"... In this article we look at the modern theory of moving meshes as part of an r-adaptive strategy for solving partial differential equations with evolving internal structure. We firstly examine
the possible geometries of a moving mesh in both one and higher dimensions, and the discretization of partia ..."
Cited by 14 (2 self)
In this article we look at the modern theory of moving meshes as part of an r-adaptive strategy for solving partial differential equations with evolving internal structure. We firstly examine the
possible geometries of a moving mesh in both one and higher dimensions, and the discretization of partial differential equation on such meshes. In particular, we consider such issues as mesh
regularity, equidistribution, variational methods, and the error in interpolating a function or truncation error on such a mesh. We show that, guided by these, we can design effective moving mesh
strategies. We then look in more detail as to how these strategies are implemented. Firstly we look at position-based methods and the use of moving mesh partial differential equation (MMPDE),
variational and optimal transport methods. This is followed by an analysis of velocity-based methods such as the geometric conservation law (GCL) methods. Finally we look at a number of examples
where the use of a moving mesh method is effective in applications. These include scale-invariant problems, blow-up problems, problems with moving fronts and problems in meteorology. We conclude
that, whilst r-adaptive methods are still in a relatively new stage of development, with many outstanding questions remaining, they have enormous potential for development, and for many problems they
represent an optimal form of adaptivity.
- International Journal of Intelligent Systems , 1999
"... In many practical problems, we must optimize a set function, i.e., find a set A for which f(A) ! max, where f is a function defined on the class of sets. Such problems appear in design, in image
processing, in game theory, etc. Most optimization problems can be solved (or at least simplified) by usi ..."
Cited by 12 (8 self)
In many practical problems, we must optimize a set function, i.e., find a set A for which f(A) ! max, where f is a function defined on the class of sets. Such problems appear in design, in image
processing, in game theory, etc. Most optimization problems can be solved (or at least simplified) by using the fact that small deviations from an optimal solution can only decrease the value of the
objective function; as a result, some derivative must be equal to 0. This approach has been successfully used, e.g., for set functions in which the desired set A is a shape, i.e., a smooth (or
piece-wise smooth) surface. In some real-life problems, in particular, in the territorial division problem, the existing methods are not directly applicable. For such problems, we design a new simple
differential formalism for optimizing set functions. 1 Introduction: Optimization of Set Functions is a Practically Important but Difficult Problem Optimization is important. In most application
problems, we h...
- Inv. Math
"... Abstract. We prove that stable ergodicity is C r open and dense among conservative partially hyperbolic diffeomorphisms with one-dimensional center bundle, for all r ∈ [2, ∞]. The proof follows
Pugh-Shub program [21]: among conservative partially hyperbolic diffeomorphisms with one-dimensional cente ..."
Cited by 8 (3 self)
Abstract. We prove that stable ergodicity is C^r open and dense among conservative partially hyperbolic diffeomorphisms with one-dimensional center bundle, for all r ∈ [2, ∞]. The proof follows the Pugh-Shub program [21]: among conservative partially hyperbolic diffeomorphisms with one-dimensional center bundle, accessibility is C^r open and dense, and essential accessibility implies ergodicity.
"... Meyer, Kent and Clifton (MKC) claim to have nullified the Bell-Kochen-Specker (Bell-KS) theorem. It is true that they invalidate KS’s account of the theorem’s physical implications. However,
they do not invalidate Bell’s point, that quantum mechanics is inconsistent with the classical assumption, th ..."
Cited by 6 (0 self)
Meyer, Kent and Clifton (MKC) claim to have nullified the Bell-Kochen-Specker (Bell-KS) theorem. It is true that they invalidate KS’s account of the theorem’s physical implications. However, they do
not invalidate Bell’s point, that quantum mechanics is inconsistent with the classical assumption, that a measurement tells us about a property previously possessed by the system. This failure of
classical ideas about measurement is, perhaps, the single most important implication of quantum mechanics. In a conventional colouring there are some remaining patches of white. MKC fill in these
patches, but only at the price of introducing patches where the colouring becomes “pathologically” discontinuous. The discontinuities mean that the colours in these patches are empirically
unknowable. We prove a general theorem which shows that their extent is at least as great as the patches of white in a conventional approach. The theorem applies, not only to the MKC colourings, but
also to any other such attempt to circumvent the Bell-KS theorem (Pitowsky’s colourings, for example). We go on to discuss the implications. MKC do not nullify the Bell-KS theorem. They do, however,
show that we did not, hitherto, properly understand the theorem. For that reason their results (and Pitowsky's earlier results) are of major importance.
, 2006
"... Abstract. Some of the guiding problems in partially hyperbolic systems are the following: (1) Examples, (2) Properties of invariant foliations, (3) Accessibility, (4) Ergodicity, (5) Lyapunov
exponents, (6) Integrability of central foliations, (7) Transitivity and (8) Classification. Here we will su ..."
Cited by 5 (2 self)
Abstract. Some of the guiding problems in partially hyperbolic systems are the following: (1) Examples, (2) Properties of invariant foliations, (3) Accessibility, (4) Ergodicity, (5) Lyapunov
exponents, (6) Integrability of central foliations, (7) Transitivity and (8) Classification. Here we will survey the state of the art on these subjects, and propose related problems.
"... The problem considered is inference in a simple errors-in-variables model where consistent estimation is impossible without introducing additional exact prior information. The probabilistic
prior information required for Bayesian analysis is found to be surprisingly light: despite the model's lack o ..."
Cited by 1 (1 self)
The problem considered is inference in a simple errors-in-variables model where consistent estimation is impossible without introducing additional exact prior information. The probabilistic prior
information required for Bayesian analysis is found to be surprisingly light: despite the model's lack of identification a proper posterior is guaranteed for any bounded prior density, including
those representing improper priors. This result is illustrated with the improper uniform prior, which implies marginal posterior densities obtainable by integrating the likelihood function;
surprisingly, the posterior mode for the regression slope is the usual least squares estimate. KEYwoRDs: Errors-in-variables, Bayesian inference, identification, improper priors, proper posteriors,
finitely additive probabilities, coherence. 1.
, 1997
"... In many practical problems, we must optimize a set function, i.e., find a set a for which f(a) ! max, where f is a function defined on the class of sets. Such problems appear in design, in image
processing, in game theory, etc. In particular, if we use statistical methods (i.e., random sets) to solv ..."
In many practical problems, we must optimize a set function, i.e., find a set a for which f(a) ! max, where f is a function defined on the class of sets. Such problems appear in design, in image
processing, in game theory, etc. In particular, if we use statistical methods (i.e., random sets) to solve the corresponding set-related practical problems, then, at the end, we get a probability
distribution on the class of all sets; to arrive at a single answer, we can, e.g., apply the Maximum Likelihood Method, i.e., optimize the likelihood set function L(a). Most optimization problems can
be solved (or at least simplified) by using the fact that small deviations from an optimal solution can only decrease the value of the objective function; as a result, some derivative must be equal
to 0. This approach has been successfully used, e.g., for set functions in which the desired set a is a shape, i.e., a smooth (or piecewise smooth) surface. In some real-life problems, however, there
are no rea...
Happy Pi Day!
Happy Pi Day, everyone! March 14 (3.14) at 1:59 (you get the idea) is the peak of Pi Day, a celebration of the Greek letter which represents the irrational number by which the diameter of a circle is
multiplied in order to obtain the circumference ... but you guys knew that, right?
The sixteenth letter of the Greek alphabet, π, was first used for the familiar value 3.1415… in the publication "Synopsis Palmariorum Matheseos," authored by William Jones in 1706, though the fact
that "the ratio of the circumference to the diameter of a circle is the same for all circles, and that it is slightly more than 3, was known to ancient Egyptian, Babylonian, Indian and Greek
geometers." 2006 was the 300th Anniversary of the introduction of the mathematical symbol pi. (Here's more on the history of Pi.)
Be prepared for 3.14.2009 by ordering a mental_floss "Simple as 3.141592..." shirt (men's or women's). Keep reading for more pi facts.
necklaces, I wrote a poem, and others competed to see who could memorize the most digits. In this video, savant Daniel Tammet discusses with David Letterman how he recited 22,514 digits of pi from
memory on Pi Day 2004.
From our Amazing Fact Generator: "In 1897, Indiana tried to pass a bill stating that pi is equal to 3.2 as opposed to its truly infinite value, but it never became law due to an intervention by a
Purdue University professor."
Unfortunately, even in 2008, some people are confused about pi. Check out this picture below taken by a student at Georgia Southern (sent to me via my high school math teacher), where someone thought
Pi Day was March 13, and that the digits were 3.13 ...
When I was asking for Weekend Links awhile back, Brian from San Francisco sent in this gem regarding pi that I saved until now:
"A work I am continually impressed by is "Poe, E.: Near a Raven," a constrained writing experiment that encodes 740 digits of Pi in a poem evoking Poe's 'The Raven.'" It's pretty cool, and helps
illustrate that the concept of pi is all around us!
So today, try and have more fun with pi! Find out if your birthday is in the first 1254543 digits. And if you think you know all there is to know about 3.14, try your hand at this quiz. For those who
are huge fans of pi, you can now smell irrational, too.
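(If you'd rather compute than click: assuming Python with the mpmath library, a few lines will generate the digits and search them for a date.)

    from mpmath import mp

    mp.dps = 100000                    # work with 100,000 decimal digits
    digits = mp.nstr(mp.pi, mp.dps).replace(".", "")
    print(digits[:10])                 # 3141592653
    print(digits.find("0314"))         # where a date like 03/14 first shows up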
How many digits of pi do you know? This song may help. Does anyone else celebrate Pi Day? What are some of your activities or memories from it?
March 14, 2008 - 9:59am
The independent choice logic and beyond
Results 1 - 10 of 18
"... As exact inference for first-order probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference
algorithms for such models. This paper discusses directed first-order models that require an aggregation operator ..."
Cited by 13 (0 self)
As exact inference for first-order probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference algorithms
for such models. This paper discusses directed first-order models that require an aggregation operator when a parent random variable is parameterized by logical variables that are not present in a
child random variable. We introduce a new data structure, aggregation parfactors, to describe aggregation in directed first-order models. We show how to extend Milch et al.’s C-FOVE algorithm to
perform lifted inference in the presence of aggregation parfactors. We also show that there are cases where the polynomial time complexity (in the domain size of logical variables) of the C-FOVE
algorithm can be reduced to logarithmic time complexity using aggregation parfactors.
"... Log-linear description logics are a family of probabilistic logics integrating various concepts and methods from the areas of knowledge representation and reasoning and statistical relational
AI. We define the syntax and semantics of log-linear description logics, describe a convenient representatio ..."
Cited by 8 (6 self)
Log-linear description logics are a family of probabilistic logics integrating various concepts and methods from the areas of knowledge representation and reasoning and statistical relational AI. We
define the syntax and semantics of log-linear description logics, describe a convenient representation as sets of first-order formulas, and discuss computational and algorithmic aspects of
probabilistic queries in the language. The paper concludes with an experimental evaluation of an implementation of a log-linear DL reasoner.
- MACH LEARN , 2011
"... Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography this paper recalls the development of the
subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characteri ..."
Cited by 7 (6 self)
Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography this paper recalls the development of the subject
from its infancy through childhood and teenage years. We show how in each phase ILP has been characterised by an attempt to extend theory and implementations in tandem with the development of novel
and challenging real-world applications. Lastly, by projection we suggest directions for research which will help the subject coming of age.
"... Abstract. In this paper we introduce a fixpoint semantics for quantitative logic programming, which is able to both combine and correlate evidence from different sources of information. Based on
this semantics, we develop efficient algorithms that can answer queries for non-ground programs with the ..."
Cited by 2 (2 self)
Abstract. In this paper we introduce a fixpoint semantics for quantitative logic programming, which is able to both combine and correlate evidence from different sources of information. Based on this
semantics, we develop efficient algorithms that can answer queries for non-ground programs with the help of an SLD-like procedure. We also analyze the computational complexity of the algorithms and
illustrate their uses.
- In Intl. Conf. on Logic Programming and Nonmonotonic Reasoning (LPNMR , 2009
"... Abstract. Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, which was designed to be able to combine and
correlate evidence obtained from non-independent information sources. BLP has non-monotonic semantics bas ..."
Cited by 2 (2 self)
Abstract. Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, which was designed to be able to combine and
correlate evidence obtained from non-independent information sources. BLP has non-monotonic semantics based on the concepts of belief combination functions and is inspired by Dempster-Shafer theory
of evidence. Most importantly, unlike the previous efforts to integrate uncertainty and logic programming, BLP can correlate structural information contained in rules and provides more accurate
certainty estimates. The results are illustrated via simple, yet realistic examples of rule-based Web service integration.
, 2010
"... ProbLog is a recently introduced probabilistic extension of the logic programming language Prolog, in which facts can be annotated with the probability that they hold. The advantage of this
probabilistic language is that it naturally expresses a generative process over interpretations using a declar ..."
ProbLog is a recently introduced probabilistic extension of the logic programming language Prolog, in which facts can be annotated with the probability that they hold. The advantage of this
probabilistic language is that it naturally expresses a generative process over interpretations using a declarative model. Interpretations are relational descriptions or possible worlds. In this
paper, a novel parameter estimation algorithm CoPrEM for learning ProbLog programs from partial interpretations is introduced. The algorithm is essentially a Soft-EM algorithm that computes binary
decision diagrams for each interpretation allowing for a dynamic programming approach to be implemented. The CoPrEM algorithm has been experimentally evaluated on a number of data sets, which justify
the approach and show its effectiveness.
, 2008
"... This manual describes AILog, (formerly CILog), a simple representation and reasoning system based on the books Artificial Intelligence: foundations of computational agents [Poole and Mackworth,
2009] and Computational Intelligence: A Logical Approach Poole, Mackworth, and Goebel [1998], and the Inde ..."
This manual describes AILog, (formerly CILog), a simple representation and reasoning system based on the books Artificial Intelligence: foundations of computational agents [Poole and Mackworth, 2009]
and Computational Intelligence: A Logical Approach Poole, Mackworth, and Goebel [1998], and the Independent Choice Logic [Poole, 2008] for the probabilistic reasoning. AILog provides: • a definite
clause representation and reasoning system • a simple tell-ask user interface, where the user can tell the system facts and ask questions of the system • explanation facilities to explain how a goal
was proved, why an answer couldn’t be found, why a question was asked, why an error-producing goal was called, and why the depth-bound was reached • knowledge-level debugging tools, that let the user
debug incorrect answers, missing answers, why the system asks a question, system errors, and possible infinite loops • depth-bounded search, that can be used to investigate potential infinite loops
and used to build an iterative-deepening search procedure
"... We describe a general method of transforming arbitrary programming languages into probabilistic programming languages with straightforward MCMC inference engines. Random choices in the program
are “named ” with information about their position in an execution trace; these names are used in conjuncti ..."
We describe a general method of transforming arbitrary programming languages into probabilistic programming languages with straightforward MCMC inference engines. Random choices in the program are
“named ” with information about their position in an execution trace; these names are used in conjunction with a database holding values of random variables to implement MCMC inference in the space
of execution traces. We encode naming information using lightweight source-to-source compilers. Our method enables us to reuse existing infrastructure (compilers, profilers, etc.) with minimal
additional code, implying fast models with low development overhead. We illustrate the technique on two languages, one functional and one imperative: Bher, a compiled version of the Church language
which eliminates interpretive overhead of the original MIT-Church implementation, and Stochastic Matlab, a new open-source language.
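As a rough illustration of the idea (not the paper's implementation), here is a minimal Python sketch: each random choice is keyed by a name, a database maps names to values, and Metropolis-Hastings re-runs the program after perturbing one named choice. The model and all identifiers are invented for the example:

    import math, random

    def normal_logpdf(v, mu, sigma):
        return -0.5*((v - mu)/sigma)**2 - math.log(sigma*math.sqrt(2*math.pi))

    class Trace:
        def __init__(self, db=None):
            self.db = dict(db or {})  # name -> value, the "database"
            self.logp = 0.0
        def normal(self, name, mu, sigma):
            if name not in self.db:               # fresh choice: sample it
                self.db[name] = random.gauss(mu, sigma)
            self.logp += normal_logpdf(self.db[name], mu, sigma)
            return self.db[name]

    def model(tr, data):
        mu = tr.normal("mu", 0.0, 10.0)           # named random choice
        for y in data:                            # likelihood of the observations
            tr.logp += normal_logpdf(y, mu, 1.0)

    def mh(data, steps=5000):
        tr = Trace(); model(tr, data)
        for _ in range(steps):
            name = random.choice(list(tr.db))     # pick one named choice
            db = dict(tr.db)
            db[name] += random.gauss(0, 0.5)      # symmetric proposal
            prop = Trace(db); model(prop, data)   # re-run the program
            if math.log(random.random()) < prop.logp - tr.logp:
                tr = prop                         # accept
        return tr.db["mu"]

    print(mh([2.9, 3.1, 3.2, 2.8]))               # samples land near 3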
"... The complexity of probabilistic models, especially those involving recursion, has far exceeded the representational capacity of graphical models. Functional programming languages with
probabilistic choice operators have recently been proposed as universal representations for statistical modeling (e. ..."
The complexity of probabilistic models, especially those involving recursion, has far exceeded the representational capacity of graphical models. Functional programming languages with probabilistic
choice operators have recently been proposed as universal representations for statistical modeling (e.g., IBAL [Pfe01], λ ◦ [PPT08], Church [GMR + 08]). The conditional independence structure of a
probabilistic program is not, in general, representable by a graphical model. Rather, it is dynamic and is given by the random control and data flow of the program. These functional probabilistic
languages are allied with imperative probabilistic languages (e.g., Infer.NET) and a similar tradition of augmenting logical representations with probabilistic quantifiers (e.g., BLOG [MMR+ 05],
, 2009
"... Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, which was designed to be able to combine and correlate
evidence obtained from non-independent information sources. BLP has non-monotonic semantics based on the ..."
Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, which was designed to be able to combine and correlate
evidence obtained from non-independent information sources. BLP has non-monotonic semantics based on the concepts of belief combination functions and Dempster-Shafer theory of evidence. Most
importantly, unlike the previous efforts to integrate uncertainty and logic programming, BLP can correlate structural information contained in rules and provides more accurate certainty estimates.
Declarative semantics is provided as well as query evaluation algorithms. Also, BLP is extended to programs with cycles and to correlated base facts. The results are illustrated via simple, yet realistic examples of rule-based Web service integration.
ENGR 2001 - Engineering Computing Applctns
An introductory, software-oriented, engineering computing course using an interactive, high-performance, scientific and engineering software package which integrates computation and visualization in
a programming environment to solve engineering application problems. Topics will include embedded mathematical functions, complex numbers, matrix manipulation, plotting, user defined script and
function files, matrix algebra, numerical techniques and graphical user interfaces.
on Asymptotic Geometric Analysis
Talk Titles and Abstracts
Random matrices with independent log-concave rows
Radoslaw Adamczak
University of Warsaw / Fields Institute
Coauthors: O. Guedon, A. Litvak, A. Pajor, N. Tomczak-Jaegermann
I will discuss several properties of random matrices with independent log-concave rows, obtained during the last several years, including estimates on their norms, a solution to the
Kannan-Lovasz-Simonovits problem and estimates on their smallest singular value. If time permits I will also briefly mention some limiting results for their spectral distributions.
Volume of Lp-zonotopes and best best constants in Brascamp-Lieb inequalities
David Alonso
Universidad de Zaragoza, Fields Institute
Given some unit vectors a_1, ..., a_m ∈ R^n that span all of R^n and some positive numbers q_1, ..., q_m, we consider for every p ≥ 1 the convex body
K_p := { x ∈ R^n : ∑_{i=1}^m |⟨x, a_i/q_i⟩|^p ≤ 1 }.
We will give some upper bounds for the volume of K_p and some lower bounds for the volume of its polar, depending on some parameters, which improve the ones obtained using the Brascamp-Lieb inequality. We will also see how the best choice of these parameters is related to the transformation which takes K_p to a special position which, for instance, when p = ∞, is John's position.
Gaussian and almost Gaussian formulas for volumes and the number of integer points in polytopes
Alexander Barvinok
University of Michigan
Coauthors: John A. Hartigan (Yale)
We present a family of computationally efficient formulas for volumes and the number of integer points in polytopes represented as the intersection of the non-negative orthant and an affine subspace.
Although the formulas are not always applicable, they are asymptotically exact in a wide variety of situations. In particular, we obtain asymptotic formulas for the number of non-negative integer
matrices with prescribed row and column sums and for the volumes of the respective transportation polytopes. The intuition for the formulas is provided by the maximum entropy principle, the Local
Central Limit Theorem and its ramifications.
Invariant distributions in integral geometry
Gautier Berck
When the group, or the isotropy subgroup, is non-compact, classical Crofton type formulae may fail to exist because of the appearence of divergent integrals. The aim of the talk is to show that in
some situations this problem may be circumvented replacing the invariant measure by an invariant distribution. The procedure will be illustrated by basic examples and applications in convex geometry.
On the location of roots of Steiner polynomials
Maria A. Hernandez Cifre
Departamento de Matematicas, Universidad de Murcia
Coauthors: Martin Henk
For two convex bodies K, E of the n-dimensional euclidean space and a non-negative real number x, the volume of K+xE is a polynomial of degree n in x, whose coefficients are, up to a constant,
important measures associated to both sets, the relative quermassintegrals. This polynomial is called the (relative) Steiner polynomial of K (with respect to E). If we consider the Steiner polynomial
as a formal polynomial in a complex variable, we are interested in studying geometric properties of its roots: their location in the complex plane, size, relation with other geometric magnitudes (in-
and circumradius) and characterization of (families of) convex bodies by means of properties of the roots. In this talk I will show the known results on this topic, which had its starting point in a
problem posed by Teissier in 1982, in the context of Algebraic Geometry.
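To see the polynomial concretely in the plane: for the unit square K and the Euclidean unit disk E in R^2, vol(K + xE) = 1 + 4x + πx^2 (area, then perimeter, then π). A self-contained Monte Carlo check in Python, with all names invented for the illustration:

    import random
    from math import hypot, pi

    def dist_to_unit_square(x, y):
        dx = max(x - 1, 0, -x)        # horizontal distance to [0, 1]
        dy = max(y - 1, 0, -y)        # vertical distance to [0, 1]
        return hypot(dx, dy)

    def area_K_plus_rB(r, samples=200000, seed=1):
        rng = random.Random(seed)
        lo, hi = -r, 1 + r            # bounding box of K + rB
        hits = sum(dist_to_unit_square(rng.uniform(lo, hi), rng.uniform(lo, hi)) <= r
                   for _ in range(samples))
        return hits / samples * (hi - lo)**2

    for r in (0.5, 1.0):
        print(r, area_K_plus_rB(r), 1 + 4*r + pi*r*r)  # estimate vs. Steiner polynomial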
Integral functionals verifying a Brunn-Minkowski type inequality
Andrea Colesanti
Università di Firenze
Coauthors: Daniel Hug, Eugenia Saorin-Gomez
We consider a class of integral functionals defined in the family of convex bodies. The value of the functional on a convex body is given by the integral of a fixed continuous function defined on the
unit sphere, with respect to the area measure of the convex body. We assume that a functional of this form verifies an inequality of Brunn-Minkowski type. We prove that if in addition the functional
is symmetric, then it must be a mixed volume. The same result holds if the function defining the functional has some regularity property.
Small ball probability estimates, ψ_2-behavior and the hyperplane conjecture
Nikos Dafnis
University of Athens
We introduce a method which leads to upper bounds for the isotropic constant. We prove that a positive answer to the hyperplane conjecture is equivalent to some very strong small ball probability estimates for the Euclidean norm on isotropic convex bodies. As a consequence of our method, we obtain an alternative proof of the result of J. Bourgain that every ψ_2-body has bounded isotropic constant, with a slightly better estimate.
The Poisson summation formula uniquely characterizes the Fourier Transform
Dmitry Faifman
Ph.D. student, Tel Aviv University
We show that, under some regularity assumptions, the Poisson summation formula uniquely defines the Fourier Transform of a function. Then we show how a family of unitary operators on L^2[0, ∞) can be constructed which exhibit Poisson-like summation formulas. As a by-product of this construction, peculiar unitary operators given by series arise.
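The formula in question can be tested numerically on the Gaussian, where Poisson summation gives the Jacobi theta transformation Σ e^{-π n² t} = t^{-1/2} Σ e^{-π n²/t} (a small Python check, names invented):

    from math import exp, pi, sqrt

    def theta(t, N=50):  # truncated sum over n = -N..N of exp(-pi n^2 t)
        return sum(exp(-pi * n * n * t) for n in range(-N, N + 1))

    for t in (0.5, 1.0, 2.0):
        print(theta(t), theta(1.0/t) / sqrt(t))  # the two sides agree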
Concentration of measure phenomenon and eigenvalues of Laplacian
Kei Funano
Kumamoto University
Coauthors: Takashi Shioya (Tohoku University)
In this talk, we discuss the relation between the concentration of measure phenomenon and the behavior of eigenvalues of the Laplacian on a closed Riemannian manifold. M. Gromov and V. D. Milman first studied the case of the first non-trivial eigenvalue of the Laplacian. Under a non-negative Ricci curvature assumption, we study the case of the k-th eigenvalues of the Laplacian for any k.
Embedding from l_p^n into l_r^N for 0 < r < p < 2.
Olivier GUEDON
Université Paris-Est Marne-La-Vallée
Coauthors: Omer FRIEDLAND
We will present a Kashin type result for embedding l_p^n into l_1^N for 1 < p < 2 and N arbitrarily close to n. We will show that it is possible to define random embeddings such that the conclusion holds with overwhelming probability. The result can also be extended to embedding from l_p^n to l_r^N with 0 < r ≤ 1. One of the main tools that we develop is a new type of multi-dimensional Esseen inequality for studying small ball probabilities.
Minkowski valuations intertwining the special linear group
Christoph Haberl
Vienna University of Technology
A classification of SL(n) co- or contravariant Minkowski valuations will be presented. Here, a Minkowski valuation is a mapping from convex bodies to convex bodies which satisfies the
inclusion-exclusion principle. Thereby, we obtain new characterizations of the projection and centroid body operator. Our result shows that the additional assumption of homogeneity in previous
classifications is not necessary.
The average Frobenius number
Martin Henk
University of Magdeburg
Coauthors: Iskander Aliev, Aicke Hinrichs
Given a primitive positive integer vector a ∈ Z^n_{>0}, the largest integer that cannot be represented as a non-negative integer combination of the coefficients of a is called the Frobenius number of
a. In a series of papers V.I. Arnold initiated the research to study the average size of Frobenius numbers, and in a recent paper, Bourgain and Sinai showed that the probability of a "large"
Frobenius number is "comparable small". Based on an approach using methods from Geometry of Numbers we can strengthen this result in such a way that we can estimate the average size of Frobenius
numbers. Together with a discrete version of a reverse arithmetic-geometric-mean inequality by Gluskin and Milman, this allows us to show that for large instances the order of magnitude of the
expected Frobenius number is (up to a constant depending only on the dimension) given by its lower bound, which, in particular, strengthens a recent result of Marklof on the asymptotic distribution
of Frobenius numbers. Furthermore, we discuss generalizations to the case of more than one input vector a.
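For small instances the Frobenius number itself is easy to compute; one standard way (a sketch in Python, assuming the entries are positive with gcd 1) runs a shortest-path computation over residues modulo the smallest entry:

    import heapq

    def frobenius(a):
        # dist[r] = smallest representable integer congruent to r mod min(a)
        m = min(a)
        dist = [float("inf")] * m
        dist[0] = 0
        pq = [(0, 0)]
        while pq:
            d, r = heapq.heappop(pq)
            if d > dist[r]:
                continue
            for x in a:
                nd, nr = d + x, (r + x) % m
                if nd < dist[nr]:
                    dist[nr] = nd
                    heapq.heappush(pq, (nd, nr))
        return max(dist) - m          # largest non-representable integer

    print(frobenius([3, 5]))      # 7, matching 3*5 - 3 - 5
    print(frobenius([6, 9, 20]))  # 43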
Volume and mixed volume inequalities in stochastic geometry
Daniel Hug
Karlsruhe Institute of Technology
Coauthors: Karoly Böröczky, Rolf Schneider, ...
Stochastic geometry deals with random structures such as random closed sets, random processes of flats or random tessellations. A useful method for analyzing such structures is to associate a
deterministic convex set (sometimes a zonoid) with it. Thus strong results from convex geometric analysis become available. As appetizers, we give two examples:
Let Z0 denote the zero cell of a stationary Poisson hyperplane tessellation. We are interested in sharp bounds for the expected number of vertices of Z0. Such bounds are provided by the
Blaschke-Santaló inequality and by the Mahler inequality for zonoids. Equality cases in these bounds characterize special direction distributions of the given hyperplane tessellation. Recently, these
bounds have been improved by corresponding stability estimates, first in the geometric and then in the probabilistic framework. (joint work with Károly Böröczky)
As a second, new example, let X denote a stationary Poisson hyperplane process with fixed intensity γ in R^n. From X we pass to the intersection process X(k) of order k, which is a stationary process of (n-k)-flats in R^n. It is well known that the intersection density γ(k)(X), i.e. the intensity of X(k), is maximal if and only if X is isotropic. Here we introduce a measure for the strength of intersections from an affine-invariant point of view. The problem of determining its minimal value leads to a novel geometric inequality for mixed volumes of zonoids with isotropic generating measures. The solution of a related problem involving joint intersections from a process of lines and an independent process of hyperplanes is partly based on Keith Ball's reverse isoperimetric inequality together with the equality conditions due to Franck Barthe. (joint work with Rolf Schneider)
On the extremal distance between two convex bodies
C Hugo Jimenez
University of Seville
Coauthors: Márton Naszódi Eötvös University, Budapest
We consider d(K, L), a modified version of the Banach-Mazur distance of convex bodies in R^n proposed by Grünbaum. Gordon, Litvak, Meyer and Pajor in 2004 showed that for any two convex bodies d(K, L) ≤ n; moreover, if K is a simplex and L = -L, then d(K, L) = n. The following question arises naturally: is equality only attained when one of the sets is a simplex? Leichtweiss, and later Palmon, proved that if d(K, B_2^n) = n, where B_2^n is the Euclidean ball, then K is the simplex. We prove the affirmative answer to the question in the case when one of the bodies is strictly convex or smooth, thus
obtaining a generalization of the result of Leichtweiss and Palmon.
If you can hide behind it, can you hide inside it?
Dan Klain
University of Massachusetts Lowell
Suppose that K and L are compact convex subsets of Euclidean space, and suppose that, for every direction u, the orthogonal projection (that is, the shadow) of L onto the subspace u⊥ normal to u
contains a translate of the corresponding projection of the body K. Does this imply that the original body L contains a translate of K? Can we even conclude that K has smaller volume than L?
In other words, suppose K can "hide behind" L from any point of view (and without rotating). Does this imply that K can "hide inside" the body L? Or, if not, do we know, at least, that K has smaller volume?
Although these questions have easily demonstrated negative answers in dimension 2 (since the projections are 1-dimensional, and convex 1-dimensional sets have very little structure), the (possibly
surprising) answer to these questions continues to be No in Euclidean space of any finite dimension.
In this talk I will give concrete constructions for convex sets K and L in n-dimensional Euclidean space such that each (n-1)-dimensional shadow of L contains a translate of the corresponding shadow
of K, while at the same time K has strictly greater volume than L. This construction turns out to be sufficiently straightforward that a talented person could conceivably mold 3-dimensional examples
out of modeling clay.
The talk will then address a number of related questions, such as: under what additional conditions on K or L does shadow covering imply actual covering? What bounds can be placed on the volume ratio
of K and L if the shadows of L cover those of K?
The chain rule as a functional equation
Hermann Koenig
University of Kiel, Germany
Coauthors: S. Artstein-Avidan, V. Milman
Let T be an operator from C^1(R) into C(R) satisfying the chain rule functional equation T(f ∘ g) = ((T f) ∘ g) · (T g). We show that in a non-degenerate case any solution of this equation has the form (T f)(x) = H(f(x))/H(x) · |f'(x)|^p · sgn(f'(x)), where H is continuous and p > 0, and where the last factor sgn(f'(x)) may be missing; then also p = 0 is possible. An "initial condition" like T(2·Id) = 2 will imply that T f = f' holds. We also consider T operating on smoother functions C^k(R) or C^∞(R), and n-dimensional generalizations of the chain rule equation.
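The stated solution formula is easy to test numerically: with any positive continuous H and illustrative choices of f and g (all picked here just for the check), T(f ∘ g) and ((T f) ∘ g) · (T g) coincide:

    from math import copysign, cos, sin

    H = lambda x: 2 + cos(x)      # any positive continuous H
    p = 1.5

    def T(f, df):                 # (T f)(x) = H(f(x))/H(x) * |f'(x)|^p * sgn(f'(x))
        return lambda x: H(f(x))/H(x) * abs(df(x))**p * copysign(1, df(x))

    f, df = lambda x: x**3 + x, lambda x: 3*x**2 + 1
    g, dg = lambda x: sin(x) + 2*x, lambda x: cos(x) + 2
    fog, dfog = lambda x: f(g(x)), lambda x: df(g(x)) * dg(x)  # ordinary chain rule

    for x in (-1.0, 0.3, 2.0):
        print(T(fog, dfog)(x), T(f, df)(g(x)) * T(g, dg)(x))   # equal up to rounding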
Moments of unconditional logarithmically concave vectors.
Rafal Latala
University of Warsaw
Let X = (X_1, X_2, ..., X_n) be a random vector with an unconditional logarithmically concave distribution. We will discuss several results and open problems related to moments of linear combinations of the coordinates X_i.
Rational Ehrhart quasi-polynomials
Eva Linke
Otto-von-Guericke-Universität Magdeburg
Ehrhart's famous theorem states that the number of integral points in a rational polytope is a quasi-polynomial in the integral dilation factor. We present the generalization to rational dilation
factors. The number of integral points can still be written as a rational quasi-polynomial. Furthermore, the coefficients of this rational quasi-polynomial are piecewise polynomial functions and
related to each other by derivation.
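A tiny example of the quasi-polynomial behavior, for the rational triangle P = conv{(0,0), (1/2,0), (0,1/2)} and integral dilations (a brute-force count in Python):

    def L(t):  # lattice points in t*P: integers x, y >= 0 with 2(x + y) <= t
        return sum(1 for x in range(t + 1) for y in range(t + 1) if 2*(x + y) <= t)

    for t in range(9):
        print(t, L(t))
    # even t: t**2/8 + 3*t/4 + 1;  odd t: t**2/8 + t/2 + 3/8
    # a quasi-polynomial of period 2, as in Ehrhart's theorem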
On the Euclidean metric entropy.
A. Litvak
University of Alberta
Coauthors: V. Milman, A. Pajor, N. Tomczak-Jaegermann
We discuss some properties of entropy and covering numbers. In particular we show extension and lifting properties. We provide applications as well.
Another observation about operator compressions
Elizabeth Meckes
Case Western Reserve University
Coauthors: Mark Meckes
Let T be a self-adjoint operator on a finite-dimensional Hilbert space. It is shown that the distribution of the eigenvalues of a compression of T to a subspace of a given dimension is almost the
same for almost all subspaces. This is a coordinate-free analogue of a recent result of Chatterjee and Ledoux on principal submatrices. The proof is based on measure concentration and entropy
techniques, and the result improves on the result of Chatterjee and Ledoux in various ways. This is joint work with Mark Meckes.
Functional inequalities related to Mahler's conjecture
Mathieu Meyer
Université de Paris Est Marne-la-Vallée
We develop topics related to the title.
Properties of isoperimetric, spectral-gap and log-Sobolev inequalities via concentration
Emanuel Milman
University of Toronto
Various properties of isoperimetric, Sobolev-type and concentration inequalities are studied on a Riemannian manifold equipped with a measure, whose generalized Ricci curvature is (possibly
negatively) bounded from below.
First, stability of these inequalities with respect to perturbation of the measure is obtained. The extent of the perturbation is measured using several different distances between perturbed and
original measure, such as a one-sided L^∞ bound on the ratio between their densities, Total-Variation, Wasserstein distances, and relative entropy (Kullback-Leibler divergence). In particular, an
extension of the Holley-Stroock perturbation lemma for the log-Sobolev inequality is obtained, and the dependence on the perturbation parameter is improved from linear to logarithmic.
Next, in the compact setting, an optimal (up to numeric constants) isoperimetric inequality is obtained as a function of the curvature lower bound and diameter upper bound. In particular, the best
known log-Sobolev inequality is obtained in this setting.
Time permitting, we will also mention the equivalence of Transport-Entropy inequalities with different cost functions and some of their applications.
The main tool used is a previous precise result on the equivalence between concentration and isoperimetric inequalities in the described setting.
Hormander's proof of the Bourgain-Milman theorem
Fedor Nazarov
University of Wisconsin at Madison
A long standing Mahler's conjecture asserts that the product of the volumes of a symmetric convex body in R^n and its polar body is never less than P_n = 4^n/n!. Bourgain and Milman proved the lower bound c^n P_n with some small positive constant c. Later, Kuperberg showed that one can take c = π/4. We shall use Hormander's ideas to give a fairly simple complex-analytic proof of the Bourgain-Milman theorem.
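To make the quantities concrete (a small numeric illustration in Python): the cube/cross-polytope pair gives exactly P_n, the Euclidean ball gives a strictly larger product, and Kuperberg's theorem guarantees at least (π/4)^n P_n for every symmetric body:

    from math import factorial, gamma, pi

    def vol_ball(n):                   # volume of the unit Euclidean ball in R^n
        return pi**(n/2) / gamma(n/2 + 1)

    for n in range(2, 7):
        P_n = 4**n / factorial(n)      # vol(B_inf^n) * vol(B_1^n)
        print(n, P_n, (pi/4)**n * P_n, vol_ball(n)**2)  # cube, Kuperberg bound, ball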
Feige's inequality
Krzysztof Oleszkiewicz
Institute of Mathematics: University of Warsaw & Polish Academy of Sciences
Let S denote a sum of (finitely many) independent non-negative random variables with means not exceeding 1. A remarkable result of Uriel Feige (SIAM Journal on Computing, 2006) states that for every
0<t<1 the inequality P(S<ES+t) > Ct holds true, where C is some universal positive constant (i.e. it does not depend on t, distributions and number of the random variables).
A short and simple proof will be presented, to a large extent along the lines of He, Zhang and Zhang (Mathematics of Operation Research, 2010), as well as some generalizations of the result.
Properties of metric spaces which are not coarsely embeddable into a Hilbert space
Mikhail Ostrovskii
St. Johns University, Queens, NY
The talk is devoted to expansion properties of locally finite metric spaces which do not embed coarsely into a Hilbert space. The obtained characterization can be used, for example, to derive the
fact that infinite locally finite graphs excluding a minor embed coarsely into a Hilbert space.
On the existence of subgaussian directions for log-concave measures
Grigoris Paouris
Texas A&M University
Coauthors: A. Giannopoulos and P. Valettas
We show that if μ is a centered log-concave probability measure on R^n then c_1/√n ≤ |Ψ_2(μ)|^{1/n} ≤ c_2 √(log n)/√n, where Ψ_2(μ) is the ψ_2-body of μ, and c_1, c_2 > 0 are absolute constants. It follows that μ has "almost subgaussian" directions: there exists θ ∈ S^{n-1} such that μ({ x ∈ R^n : |⟨x, θ⟩| ≥ c t E|⟨·, θ⟩| }) ≤ e^{-t^2/log(t+1)} for all 1 ≤ t ≤ √(n log n), where c > 0 is an absolute constant.
On the reconstruction of inscribable sets in discrete tomography
Carla Peri
Università Cattolica - Piacenza
Coauthors: Paolo Dulio
In the usual continuous model for tomography one attempts to reconstruct a function from a knowledge of its line integrals. All the reconstruction methods used in computerized tomography require a
large number of projection images to obtain results of acceptable quality.
The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials, from a "small" number of projections. For instance, it can be applied to the reconstruction of nanocrystals at atomic resolution, where it is assumed that the crystal contains only a few types of atoms, and that the atoms lie on a regular grid, modeled by the integer lattice. The high energies required to produce the discrete X-rays of a crystal mean that just a small number of X-rays can be taken before the crystal is damaged, so that the conventional techniques of computerized tomography fail.
In general, this reconstruction task is an ill-posed inverse problem. In fact, for general data there need not exist a solution; even if the data are consistent, the solution need not be uniquely determined; and "small" changes in the data can lead to unique but disjoint solutions. Thus, one has to use a priori information, such as convexity or connectedness, about the sets that have to be reconstructed in order to satisfy existence, uniqueness and stability requirements.
By now there are many uniqueness results available for different classes of finite lattice sets, but just a few stability results. In the present paper we introduce some new classes of lattice sets, and investigate the problem of their reconstruction by means of their X-rays taken in the directions belonging to a given finite set D. The geometric structure of such sets enables us to prove results concerning additivity and uniqueness. When D is the set of coordinate directions, we give a sharp stability estimate which depends only on the data error, in contrast to all the known results, which also involve the sizes of the sets. Some of these results hold true in any dimension.
On the volume of random convex sets
Peter Pivovarov
Fields Institute
Coauthors: Grigoris Paouris
Let K ⊂ R^n be a convex body of volume one. Let X_1, ..., X_N be independent random vectors distributed uniformly in K and let K_N be their (symmetric) convex hull. A result of Groemer states that the expected volume of K_N is smallest when K is the Euclidean ball of volume one. A similar result, due to Bourgain, Meyer, Milman and Pajor, holds for the volume of random zonotopes Z_N = Σ_{i=1}^N [−X_i, X_i]. If T: R^N → R^n is the (random) linear operator defined by T e_i = X_i, for i = 1, ..., N, then K_N is the image of the unit ball in ℓ_1^N, while Z_N is the image of the unit ball in ℓ_∞^N. What happens when T is applied to other sets? I will discuss a unified approach to various inequalities involving the volume of random convex sets for which the Euclidean ball is the minimizer.
Poisson-Voronoi approximation
Matthias Reitzner
Univ. Osnabrueck
Coauthors: Matthias Schulte
Let X be a Poisson point process and K a convex set. For a point x in X denote by v(x) the Voronoi cell with respect to X, and by v_X(K) the union of all Voronoi cells with center in K. We call v_X(K) the Poisson-Voronoi approximation of K.
For K a compact convex set, the volume difference V_d(v_X(K)) − V_d(K) and the volume of the symmetric difference V_d(v_X(K) △ K) are investigated. Estimates for the variance and central limit theorems are obtained using the chaotic decomposition of these functions in multiple Wiener-Itô integrals. (Work in progress jointly with Matthias Schulte)
Spectral properties of random conjunction matrices
by Mark Rudelson
University of Missouri
Coauthors: Shiva Kasiviswanathan, Adam Smith, and Jon Ullman
We consider a problem in random matrix theory, which arises from computer science. The standard way to release the statistical summary of the information contained in a large data base is to publish
its contingency table, which contains percentages of records having several given common entries. However, if the contingency table is released exactly, one can reconstruct the individual entries by
solving a system of equations. The standard way to protect the privacy of individual records is to add a random noise to the contingency table. Determining the minimal necessary amount of such noise
leads to the problem of estimating the smallest singular value of a special random matrix with dependent entries, which is generated from a random matrix with i.i.d. entries taking values 0 and 1.
Translation Invariant Valuations
Franz Schuster
Vienna University of Technology
Coauthors: Semyon Alesker and Andreas Bernig
As a generalization of the notion of measure, valuations on convex bodies have long played a central role in geometry. The starting point for many important new results in valuation theory is
Hadwiger's remarkable characterization of the continuous rigid motion invariant real valued valuations as linear combinations of the intrinsic volumes. Among many applications, this result allows an
effortless proof of the famous Principal Kinematic Formula from integral geometry.
In the first part of this talk, the decomposition of the space of continuous translation invariant valuations into a sum of SO(n) irreducible subspaces is presented. It will be explained how this
result can be reformulated in terms of a Hadwiger type theorem for translation invariant and SO(n) equivariant valuations with values in an arbitrary (finite dimensional) SO(n) module. From this
perspective the classical theorem of Hadwiger becomes the special case when the SO(n) module is the trivial 1-dimensional one.
A striking recent development in valuation theory explores the connections between isoperimetric inequalities and convex body valued valuations. To be more specific, many powerful geometric
inequalities involve fundamental operators on convex bodies which are valuations, e.g. projection and intersection body maps. In many instances the proofs of these inequalities are based on the
symmetry of certain bivaluations associated with convex body valued valuations.
In the second part, the decomposition of the space of translation invariant valuations into irreducible SO(n) modules is used to study the symmetry of O(n) invariant bivaluations and to establish new
Brunn-Minkowski type inequalities for convex body valued valuations.
Affine Differential Invariants for Convex Bodies
Alina Stancu
Concordia University, Montreal
Coauthors: n/a
In what concerns the affine component of Felix Klein's Erlangen program, the first results were due to Blaschke's school. They were of a differential-geometric nature and required at least C^4 regularity of closed convex hypersurfaces with, often, positive Gauss curvature everywhere. While this is unsatisfactory for the general study of convex bodies, certain objects, the most famous one being the affine surface area, appeared in affine differential geometry but were later extended to arbitrary convex bodies with surprising applications. We want to motivate a certain direction of research which seeks new affine differential invariants and their applications in view of possible extensions to arbitrary convex bodies. The talk will not require any prerequisites.
Estimation of covariance matrices
Roman Vershynin
University of Michigan
Estimation of the covariance matrix of a p-dimensional probability distribution is a basic problem in statistics. A classical estimator is the sample covariance matrix, obtained from a sample of n independent points. The more classical and well-studied regime is where n > p. We conjecture that n = O(p) suffices to accurately estimate the covariance matrix of an arbitrary distribution with finite 4-th moments. We discuss some recent progress on this problem, which has a connection to the "Lévy flight", a heavy-tailed Brownian motion that exhibits sporadic huge jumps (similar to a predator's path when looking for prey). The other regime, n < p, has recently become quite popular in statistics and its various applications (e.g. genomics) because of limited sampling capacities compared with huge dimensions. We will discuss the problem and recent progress in this regime as well.
GL(n) intertwining Minkowski valuations
Thomas Wannerer
Vienna University of Technology
Coauthors: Franz E. Schuster
Recently, M. Ludwig characterized continuous Minkowski valuations which intertwine the general linear group under the assumption that the valuations either are defined on the set of convex bodies
containing the origin or are invariant under translations. We extend these results without any restrictions on the domain or invariance under translations. Part of this work is joint with Franz Schuster.
Valuations and local functionals
Wolfgang Weil
Karlsruhe Institute of Technology
For the classical motion invariant valuations on convex bodies, the intrinsic volumes, local variants exist, namely measure-valued additive and motion covariant functionals. These are the curvature
measures. In view of applications in integral geometry, we study translation invariant functionals on convex bodies or convex polytopes which admit such a local version. We clarify the connection
between these local functionals and valuations and also discuss whether valuations on convex polytopes which have certain continuity properties allow an extension to all convex bodies.
Volume Integral Means of Holomorphic Mappings
Jie Xiao
Memorial University
Coauthors: Kehe Zhu
In this talk we discuss some geometric aspects of several complex variables via considering the integral means of holomorphic functions in the unit complex ball with respect to weighted volume
measures, including Sobolev-type embedding and isoperimetric inequalities associated to holomorphic maps, log-convexity of the integral means, and weighted Ricci curvatures.
Towards an Orlicz Brunn-Minkowski theory
Deane Yang
Polytechnic Institute of NYU
Coauthors: Erwin Lutwak and Gaoyong Zhang
Recent extensions of the classical and L_p Brunn-Minkowski theory to an Orlicz Brunn-Minkowski theory are discussed.
The geometry of p-convex intersection bodies.
Vladyslav Yaskin
University of Alberta
Coauthors: J.Kim and A.Zvavitch
Busemann's theorem states that the intersection body of an origin-symmetric convex body is also convex. We provide a version of Busemann's theorem for p-convex bodies. We show that the intersection
body of a p-convex body is q-convex for certain q. Furthermore, we discuss the sharpness of the previous result by constructing an appropriate example. Finally, we extend these theorems to some
general measure spaces with log-concave and s-concave measures.
Shadow boundaries and the Fourier transform
Maryna Yaskina
University of Alberta
We investigate the Fourier transform of homogeneous functions on $\mathbb R^n$ which are not necessarily even. These techniques are applied to the study of nonsymmetric convex bodies, in particular
to the question of reconstructing convex bodies from the information about their shadow boundaries.
On the homothety conjecture
Deping Ye
The Fields Institute
Coauthors: E. Werner
Let K be a convex body in R^n and K_δ its floating body. The homothety conjecture asks: "Does K_δ = cK imply that K is an ellipsoid?" Here c is a constant depending on δ only. We prove that the homothety conjecture holds true in the class of the convex bodies B_p^n, 1 ≤ p ≤ ∞, the unit balls of ℓ_p^n; namely, we show that (B_p^n)_δ = c B_p^n if and only if p = 2. We also show that the homothety conjecture is true for a general convex body K if δ is small enough.
Some geometric properties of Intersection Body Operator.
Artem Zvavitch
Kent State University
Coauthors: Fedor Nazarov, Dmitry Ryabogin
The notion of an intersection body of a star body was introduced by E. Lutwak: K is called the intersection body of L if the radial function of K in every direction is equal to the (d-1)-dimensional
volume of the central hyperplane section of L perpendicular to this direction.
The notion turned out to be quite interesting and useful in convex geometry and geometric tomography. It is easy to see that the intersection body of a ball is again a ball. E. Lutwak asked if there is any other star-shaped body that satisfies this property. We will present a solution to a local version of this problem: if a convex body K is close to the unit ball and the intersection body of K is equal to K, then K is the unit ball.
{"url":"http://www.fields.utoronto.ca/programs/scientific/10-11/asymptotic/analysis/abstracts.html","timestamp":"2014-04-17T04:09:21Z","content_type":null,"content_length":"54819","record_id":"<urn:uuid:c54bf957-7a81-4faf-9255-f3ad291e1c7a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
One-Dimensional Stable Distributions
Translations of Mathematical Monographs, Volume 65
This is the first book specifically devoted to a systematic exposition of the essential facts known about the properties of stable distributions. In addition to its main focus on the analytic properties of stable laws, the book also includes examples of the occurrence of stable distributions in applied problems and a chapter on the problem of statistical estimation of the parameters determining stable laws. A valuable feature of the book is the author's use of several formally different ways of expressing characteristic functions corresponding to these laws.
This book is written for researchers in the area of probability theory and its applications, for engineers, and for graduate students in these areas.
• Examples of stable laws in applications
• Analytic properties of the distributions in the family \(\mathfrak S\)
• Special properties of laws in the class \(\mathfrak W\)
• Estimators of the parameters of stable distributions
1986; 284 pp; softcover
Reprint/Revision History: fourth printing
List Price: US$98
Member Price:
Order Code: MMONO | {"url":"http://ams.org/bookstore?fn=20&arg1=mmonoseries&ikey=MMONO-65","timestamp":"2014-04-19T05:49:12Z","content_type":null,"content_length":"14759","record_id":"<urn:uuid:028a96b3-2cdd-4032-8d65-54204f680aef>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
quadratic equations
August 18th 2006, 01:01 AM #1
Aug 2006
A train covers a distance of 300 km at a constant speed. If the speed of the train were increased by 5 km/hr, the journey would take 2 hours less. Find the original speed of the train.
A train covers a distance of 300 km at a constant speed. If the speed of the train were increased by 5 km/hr, the journey would take 2 hours less. Find the original speed of the train.
If the original speed is $v \mbox{ km/hr}$, then the time to travel $300 \mbox{ km}$ is:
$t_1=\frac{300}{v}\ \mbox{hr}$,
similarly at the higher speed it would take:
$t_2=\frac{300}{v+5}\ \mbox{hr}$.
But we are told $t_1=t_2+2 \mbox{ hr}$, so:
$<br /> \frac{300}{v}=\frac{300}{v+5}+2<br />$
which we can rearrange to:
$<br /> 2 v^2 + 10 v - 1500=0<br />$
Which has two solutions, and you need the positive one.
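For completeness, filling in the final step: dividing by 2 and factoring gives
$v^2 + 5v - 750 = (v + 30)(v - 25) = 0$
so the positive root is $v = 25 \mbox{ km/hr}$. Check: $300/25 = 12$ hours at the original speed versus $300/30 = 10$ hours at the higher speed, which is indeed 2 hours less.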
(Reply #2, posted August 18th 2006, 01:12 AM by a forum Grand Panjandrum, member since Nov 2005.) | {"url":"http://mathhelpforum.com/algebra/5009-quadratic-equations.html","timestamp":"2014-04-21T09:20:03Z","content_type":null,"content_length":"34389","record_id":"<urn:uuid:f801130a-bebf-4bac-bc94-1ea78d6200af>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Math.max in JavaScript
JavaScript is a fine language in many ways — a fact that many people have noticed by now. But one of its biggest problems is the design of its standard library. There’s a lot of low-hanging fruit to
complain about; this note covers just Math.max.
Math.max in JavaScript does essentially what you'd expect: it calculates the maximum of several numeric values. It also does something very sensible if you give it no values at all: it returns -Infinity.
If you need to use Math.max, you will be in one of two situations: you might have several numeric variables, or you might have an arbitrarily-long array of numeric values.
In some languages, like Perl 5, the distinction isn’t meaningful:
use List::Util 'max';
my $max_of_array = max(@array);
my $max_of_variables = max($x, $y, $z);
# For that matter, you can use any combination:
my $max_of_all_sorts = max($x, @array, $y, @array2, $z);
But in JavaScript, that approach isn’t possible; the API has to pick one form or the other as the “native” approach, and force the other to be implemented in terms of it. So when designing a standard
library for a JavaScript-like language, which approach should you pick?
The obvious way of tackling that question is to look at the consequences of each decision. Suppose the array form were considered the “native” approach. Then you’d have code like this:
var max_of_array = Math.max(array);
var max_of_variables = Math.max([x, y, z]);
You could also consider making max a method on an array object, instead of a pseudo-static method called on its namespace object:
var max_of_array = array.max();
var max_of_variables = [x, y, z].max();
That seems particularly concise and obvious, but even using the spurious Math namespace, it’s easy to call max on either an array or a list of named variables. And of course, that’s because building
an array from a list of named variables is trivial in terms of both cognitive and syntactic weight.
The other API approach — making the multiple-variables form the “native” one — yields straightforward code when that’s what you’re trying to do:
var max_of_variables = Math.max(x, y, z);
But if you have an array of values, the best you can do is this monstrosity:
var max_of_array = Math.max.apply(Math, array);
Not only do you have to explicitly invoke apply on Math.max, you also have to redundantly say again that max belongs to Math. By any reasonable standard, this approach yields a worse API than the
other one.
So, why does JavaScript use this worse API?
It’s obviously way too late to change JavaScript’s standard library now. And I’m aware that the difference here is of limited impact — It’s not absolutely terrible that JavaScript programmers have to
jump through this hoop to find the maximum of an array of numbers. But that doesn’t justify use of a suboptimal API design in something as widely used as a language’s standard library. The argument’s
somewhat trite, but if an improved API could save one second for each of a million people, that’d be a big win.
I’ll also point out that, since JavaScript is designed for use on the web, excessively-verbose APIs hurt the end user as well as the programmer — the user effectively has to download the source code.
Depressingly enough, though, the situation as presented above isn’t actually as bad as it gets. The first browser to have JavaScript was Netscape Navigator 2.0. (Unsurprisingly, given that the
language was developed in-house at Netscape.) Array literals first made an appearance in JavaScript 1.2, which came with Navigator 4.0. In the absence of array literals, using an array-oriented max
API for a list of variables becomes rather more awkward:
// Convenient form, as above:
var max = Math.max([x, y, z]);
// Or without array literals, ugh:
var max = Math.max(new Array(x, y, z));
Worse even than that, though, is the fact that before ECMAScript edition 3, the Math.max function didn’t calculate the maximum of an arbitrary number of arguments. Instead, it took exactly two
arguments, and returned the greater. That is, JavaScript’s mid-1990s API designers not only failed to find the better of the two obvious choices, they even failed to find the worse one. D’oh. | {"url":"http://aaroncrane.co.uk/2008/11/javascript_max_api/","timestamp":"2014-04-20T20:55:47Z","content_type":null,"content_length":"9531","record_id":"<urn:uuid:07d21a28-e2a8-473d-bfa4-4b1dfb61e7a5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Working with Sequences
Date: 11/14/98 at 09:16:12
From: Kathy Kwan
Subject: Sequences
I have tried to do the following questions twice, but I still couldn't
figure them out. Would you please show me how to do them?
1) Give the next two terms of the sequence:
1, 1, 2, 4, 3, 9, 4, ...
2) Write down the first four terms and the seventh term of the
sequence for which the nth term is given.
a) 2n - 1
b) n - 4
Date: 11/19/98 at 21:20:46
From: Doctor Anderson
Subject: Re: Sequences
Hi, Kathy.
These can be tricky, especially the first one, where you have to find
a pattern. You can think about it for hours, trying a million
different patterns, and never get the right one.
I don't know about you, but my first instinct on question (1) is to
figure out what to add to or subtract from each term to get the next
term. This is often a good method, and we can sometimes find the answer
easily by writing out this sequence:
1-1, 2-1, 4-2, 3-4, 9-3, 4-9, ...
which is the same as:
0, 1, 2, -1, 6, -5,...
So to get the second term, add 0 to the first, to get the third, add 1
to the second, to get the fourth, add 2 to the third, to get the
fifth, add (-1) to the fourth, etc. Well, I don't see a nice pattern,
do you? So although this often works, it doesn't seem to here.
Let's think of another way to look at it. I can't really help you here
without telling you the right way to go. I can't give you the actual
answer, but here is the right direction on this one. Pair up terms
that are next to each other. Do it like this: (1,1), (2,4), (3,9),
and you can't make the next pair yet, call it (4,x). Look at these,
especially (2,4) and (3,9), for a while. What kind of relation does the
second number have with the first, in each pair? This should help you
find x, once you see the pattern. Now, that gives you the next term, so
what about the one after that? Well, look at the sequence of first
terms of each pair. This has a really simple pattern. So far we have
(1,1), (2,4), (3,9), and (4,x), so let's call the next pair (y,z). I
hope you can find y without too much trouble, and you don't even need
to find z (the question doesn't ask for it).
Now, for question (2). I will solve an example like yours, all the way
Let's use the sequence with the nth term being 3n+2. What is another
way of saying the first term? It is the term where n = 1. Well, if
n = 1, then 3n + 2 = 3(1) + 2 = 5. So we can make a simple chart
following this procedure:
n 3n+2
1 3(1) + 2 = 5
2 3(2) + 2 = 8
3 3(3) + 2 = 11
4 3(4) + 2 = 14
7 3(7) + 2 = 23
From here, you should be able to find the first 4 terms and the 7th
term of your problems. Good luck, and if you get stuck, feel free to
ask for more help.
- Doctor Anderson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/56940.html","timestamp":"2014-04-17T21:44:14Z","content_type":null,"content_length":"7800","record_id":"<urn:uuid:85c3eee1-f391-4acc-aa5e-bca779c903f1>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Center of Gravity/Torque
Gravity provides the force on the broom. If the force is equal on both sides of the pivot point (where you have your finger), there is no torque.
The center of mass is the point about which the mass moments on the two sides of the pivot point balance.
Why would there be a difference? Because the force of gravity on an object depends upon that object's distance from the center of the Earth.
[tex] F_g=\frac{GMm}{r^2}[/tex]
If the broom is horizontal, then the pivot point is the closest point to the center of the Earth. The ends of the broom are slightly further away from the center of the Earth and the pull of gravity
on the ends is slightly less than at the pivot point. The broom is non-symmetrical: the mass on the whisk side of the broom sits closer to the pivot than the mass out along the broom stick, so the pull of gravity on the whisk side is slightly stronger, even though the mass moments on either side of the pivot point balance.
Obviously, with an object as short as a broom, the difference between the center of mass and the center of gravity will be minuscule - much less than the width of your finger. But the difference is
still there.
How big? The Earth has a radius of about 6378km. If the broom stick is about 1 meter from center of mass to the end of the broom stick, the end of the broom stick will be
[tex]\sqrt{6378^2 + 0.001^2}\ km[/tex]
from the center of the Earth, or about 6378km to the nearest km. In other words, you would have an extremely hard time measuring the difference experimentally for a broom stick on Earth.
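To put a number on it, expand the inverse-square law for a point a distance L out along the horizontal broom (a back-of-the-envelope sketch, taking L = 1 m as above):
[tex]F_g = \frac{GMm}{r^2 + L^2} \approx \frac{GMm}{r^2}\left(1 - \frac{L^2}{r^2}\right)[/tex]
so the fractional reduction in the pull at the tip is about (1 m / 6378 km)^2, or roughly 2.5 x 10^-14. No wonder you can't measure it with a broom.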
Satellites do consider the difference. With so little atmosphere, little perturbations like the difference between center of mass and center of gravity start to be more significant. I can't remember
the number off the top of my head, but the gravity gradient torque on the International Space Station is pretty high for environmental torques. Satellites and other spaceships (such as the shuttle)
would tend to wind up with their long axis aligned with their radius if left alone (once the axis is tilted from the horizontal, the gravity gradient difference between the high side and the low side
is even greater). Some of the cheaper satellites use booms on the spaceraft to create a long axis and keep their sensors pointed at the Earth (there's generally almost a fifty-fifty chance which end
winds up lower, so they have to give the sensor end a tiny nudge to make sure the right end winds up pointing down).
Torque causes angular acceleration, which increases the angular velocity. In other words, the natural effect of a single boom would be for the spacecraft to swing its long axis back and forth across
the radius. Damping booms that are slightly shorter than the main gravity boom keep the angular velocity slow so the satellite's long axis stays within about 5 degrees of the radius. | {"url":"http://www.physicsforums.com/showthread.php?t=47208","timestamp":"2014-04-20T16:01:42Z","content_type":null,"content_length":"45954","record_id":"<urn:uuid:d3326d50-ff32-454f-ba7c-128b7fecd1fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - The u^2 coefficient in c on (e,u,u*e,u^2) does NOT distinguish among the four subset x MoSS roll-ups
Date: Dec 7, 2012 12:01 PM
Author: Halitsky
Subject: The u^2 coefficient in c on (e,u,u*e,u^2) does NOT distinguish among the four subset x MoSS roll-ups
Here are the average t and p values for u^2 for each value of subset x
Moss | (Fold,Set,Len) = (all,all,all)
x u^2
Method t p
SxN 35.17 0.457
SxC 38.19 0.416
SxR 36.88 0.434
SxC 34.69 0.438
I think you'll agree from these numbers that this coefficient is
useless for distinguishing between the four subset x MoSS
combinations, and that we can therefore adopt the c on (e,u,u*e) model
instead. But if you want to see the 1179 rows of detail data for u^2
underneath the above four roll-ups, let me know and I will send it
Also, just out of curiousity, I will run the same analysis for the u*e
coefficient of c on (e,u,u*e) and post back the results. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7933797","timestamp":"2014-04-19T20:41:56Z","content_type":null,"content_length":"1930","record_id":"<urn:uuid:8d7730f3-1603-44fd-a73d-cb2d2ec8dbbc>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Research Report 2008
The holy grail of investing
No, it doesn’t involve coconuts or rabbits or the comedy of Monty Python. It’s a simple but elusive concept for those who play the stock market. Low risk, high return — a dream for most investors.
Memorial’s Dr. Jim Stacey, a physicist by trade, took a visual approach to a mathematical problem.
Can the shape of a bell curve answer the question that traditional economic formulas can’t?
Initially thrown off by early research in his MBA studies, Dr. Stacey was ready to throw out the idea until his supervisor encouraged him to approach it in a new way. Soon he was analyzing the peaks
of the bell curve and exploring some fundamental questions about the nature of risk, giving new insights to a long-lived puzzle.
Dr. Stacey, a recent MBA graduate, wanted to apply computation and mathematical analysis to markets. “What's interesting about finance and studying the behaviour of markets is it's like a new
frontier,” he said. “It's a fascinating subject. It's not a physical science, but a social science with rational and irrational components to it.”
His research interest led to a paper for an investments class that looked for a new way to determine the measure of risk in a portfolio. Instead of focusing only on standard deviation when measuring
risk, Dr. Stacey looked at the kurtosis, or shape, of the distribution.
Normally in finance, risk is measured by assessing the standard deviation of stock performance. By looking at the shape of the bell curve, Dr. Stacey was hoping to discover something new. “If you
picture a bell curve, risky stocks tend to have wide peaks and relatively skinny tails. I wanted to figure out: if a distribution has a fat tail, is it riskier or not as risky as a normal distribution? I wanted to look at shape as a dimension of risk to optimize portfolios."
Stocks with fat tails are generally more risky, but a narrow peak means that the fluctuations of that stock are smaller. By combining these two properties, the stocks mimic a low-risk portfolio with
significantly fewer stocks than you would need if you were combining stocks following the traditional method of portfolio optimization.
“This research may eventually lead to a viable investment strategy, and it has created a number of interesting questions that remain open. Is it possible to find a meaningful number of stocks with
these properties? I found that when combined, many stocks followed the normal statistical shape. It was difficult to find stocks with a narrow peak and fat tails when combined,” he explained.
The difficulty he ran into almost led him to throw away the results. “I thought this work was a failure. I said to myself, this is hard, the computer isn't able to do what I thought - it's only fit
for the garbage,” explained Dr. Stacey. “But Dr. Alex Faseruk, my adviser, rescued it from the trash. He encouraged me to submit the paper to ASAC and it was only when I was re-writing it from
assignment to submission that I realized there might be something here.”
Dr. Stacey found that it was possible to combine certain stocks and create portfolios that mimic a low-risk portfolio with fewer stocks. Because of the fat tail, these distributions do have a high
risk component, but as the distributions become more peaked, the fluctuations they experience are smaller. If the probability of small fluctuations becomes much larger than the probability of large
fluctuations, then the stock tends to increasingly behave like a risk-free asset. In other words, the familiar relationship that higher returns implies higher risk (and vice versa) might not apply to
these stocks.
As Dr. Stacey said, “It's an intriguing result that raises fundamental questions about the nature of risk. It introduces new quantities and expands portfolio optimization using higher properties of
the underlying statistics.”
“I really give kudos to Alex Faseruk for encouraging me to think outside the box. This research is totally opposite to what you'd expect when researching the issue,” said Dr. Stacey. “I found that the
previous work of Markowitz (the father of modern portfolio theory) was validated while, at the same time, a hint of potentially new insights into the behaviour of risk was revealed.” | {"url":"http://www.mun.ca/research/explore/publications/2008report/impact/holy_grail_investing.php","timestamp":"2014-04-17T15:33:58Z","content_type":null,"content_length":"8211","record_id":"<urn:uuid:1d8a203a-ec17-46c6-9a5e-91a6d18b8a31>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Emperor's Old Clothes, or How the World Sees It...
by Hugh Burkhardt
January 2001
The main advantages of integrated curricula are that they build essential connections, help make mathematics more usable, avoid long gaps in learning, allow a balanced curriculum, and support equity.
I know of no comparable disadvantages, provided that the "chunks" of learning are substantial and coherent.
Building connections
Building a student's robust cognitive structure, one that can be used flexibly and effectively in solving problems, depends on linking new concepts and skills with the student's existing
understanding. This happens through active processing over an extended period, first of weeks as the curriculum points out key links, ultimately over years as the concepts are used in solving
problems across a variety of contexts.
Compartmentalizing mathematics inhibits building such connections. For example, the different functions that represent the scaling of lengths, areas, and volumes are a practical example of links
between algebra and geometry and the real world. The profound fact that doubling all lengths multiplies all areas by 4 and volumes by 8 underlies home-heating calculations and accounts for upper
limits on the size of insects.
Making mathematics usable
The usefulness of mathematics depends on making such links with practical contexts. One has to ask, "Which concepts and skills, tools in my mathematical toolkit, will give me power over this problem?
" For most practical problems, the useful tools will come from more than one area of mathematics. Design and planning tasks usually involve both space and number. Evaluating practical alternatives
often involves data as well, with algebraic modeling adding power. As with all learning, students develop such "applied power" over practical problems only by using their mathematics successfully in
increasingly challenging problems, an activity that is possible only in a well-integrated curriculum.
The amnesia problem
Year-long chunks of one-flavored curriculum create continuity problems. It is a massive extension of the well-known "summer amnesia" problem—students return to school in
the fall having forgotten much of the previous year's work. Curricula in other countries, like the integrated curricula in the United States, have coherent units that focus on a single aspect of
mathematics and that typically last from three to eight weeks.
Balancing the curriculum
When and for how long should we teach geometry? What about such "new" areas as data, probability and statistics, or discrete mathematics? After a lively program in middle schools, should they
disappear until twelfth grade? What about problem solving over several aspects of mathematics? These balance issues are discussed and disputed in all countries, but finding good solutions, whatever
your criteria, needs flexibility in scheduling that dis-integrated curricula simply do not allow.
Supporting equity
Various issues are at work here. Algebra and geometry favor different learning styles, so a whole year of one seems inequitable. In prosperous middle-class homes where parents value education,
children's tolerance of abstraction, with its delayed gratification, can be built; but if all young people are to have a fair chance to succeed in mathematics, they have a need and a right to a
curriculum where the practical payoff from mathematical power is clear to them, month by month. A year of Euclid is an elitist filter. It would be even more lethal if so many had not been turned off
by the abstractions of first-year algebra. A balanced diet is surely as important in the mathematics curriculum as it is everywhere else.
How do we get there from here?
Since, in human affairs, rational argument is mainly used to justify emotionally driven decisions, the rationale above for an integrated curriculum may have slow and limited influence. The new
curricula are there for the converted. For others, change may best be effected by avoiding confrontation. Choosing traditional titles for the textbooks but having contents that are much more
integrated can be effective. Low cunning is a powerful tool in achieving—or resisting—educational change; I recommend it.
Hugh Burkhardt is a theoretical physicist and an "educational engineer." Working at the Shell Centre at the University of Nottingham, England and often "on tour" in the United States, he leads the development of tools to support systemic improvement in mathematics education. MARS, the implementation phase of the Balanced Assessment project, is his main current initiative. | {"url":"http://www.nctm.org/resources/content.aspx?id=1666","timestamp":"2014-04-20T21:59:25Z","content_type":null,"content_length":"48698","record_id":"<urn:uuid:1d9442a2-5cd6-448c-882d-379af03c97c5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Given y^2 = tan x, what is dy/dx
It is given that `y^2 = tan x` . To determine `dy/dx` use implicit differentiation.
`y^2 = tan x`
`(d(y^2))/dx = (d(tan x))/dx`
=> `2*y*(dy/dx) = sec^2x`
=> `dy/dx = (sec^2x)/(2*y)`
=> `dy/dx = (sec^2x)/(2*sqrt(tan x))`
The required derivative `dy/dx = (sec^2x)/(2*sqrt(tan x))`
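As a quick sanity check (taking the positive branch `y = sqrt(tan x)`, which the final answer assumes), differentiating explicitly gives the same result:
`dy/dx = d((tan x)^(1/2))/dx = (1/2)*(tan x)^(-1/2)*sec^2x = (sec^2x)/(2*sqrt(tan x))`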
{"url":"http://www.enotes.com/homework-help/given-y-2-tan-x-what-dy-dx-441447","timestamp":"2014-04-20T17:24:12Z","content_type":null,"content_length":"24345","record_id":"<urn:uuid:5fb27864-fa24-4af3-8075-316d114fd557>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability question relating to harmonic series
April 8th 2011, 10:18 PM #1
Apr 2011
Hello all,
I'm stuck on another simple question, I think the solution is based somewhere on reversing Euler's approximation for harmonic series, yet I'm unable to make the link between the math and the
McBurgerQueen wants to run a competition where the grand prize is a very expensive car. In order to enter the competition customers must possess 1 of each coupon that comes attached in a random manner to their McBurger.
However McBurgerQueen wants to ensure that everyone entering the competition purchases on average at least 12 burgers.
At least how many different coupons should there be?
I've run a simulation iterating from 2 to 10 coupons over 10^7 selections in each, and it seems that ~6 coupons is the smallest number required to average 12 burgers and to possess all unique coupons (5 averages ~11 and 7 averages ~14). I'd really like to know what the pure mathematical way is to solve this problem. Any assistance will be greatly appreciated.
Other ways to think of the problem:
1. How many times on average should a 6 sided dice be rolled so as to get each of the 6 sides at least once?
2. I have a fair n-sided dice, which on average takes 12 rolls to get each side at least once, how many sides does the dice have?
Last edited by Sherrie; April 9th 2011 at 12:04 PM.
The question asks you to ensure that everyone buys at least 12 burgers. I think you'll need 12 coupons for that.
If you want to target the average number of burgers (or something other than certainty) then the number of coupoons required will depend on the relative frequency of each coupon. For example you
could do it with 2 coupons if the second coupon only appeared on 1 in 100 burgers.
Hello SpringFan25,
Thanks for the reply, however that is definetly NOT the correct solution - I'm not sure if you have access to Fifty Challenging Problems in Probability with Solutions (Amazon.com: Fifty
Challenging Problems in Probability with Solutions (9780486653556): Frederick Mosteller: Books) but if you do, this problem is essentially the reverse of Problem 14 "collection coupons". Where it
says a cereal brand has 5 different coupons, on average at least how many boxes of cereal must you purchase to to have all 5, the answer is roughly: n*ln(n) + 0.577n + 1/2 where n is the number
of coupons which for n=5 is ~ 11.43
The difference between the type of question you just posted and the one in your first post is the word average.
"How do I ensure that everyone buys at least 12 burgers" is not the same question as "how many coupons are required so that the average number of burgers bought is at least 12".
Moving on to your intended question:
It seems you already have the solution for a closely related problem "if I have n coupons, what is the average number of burgers that will be bought", so a convenient approach is to use trial and
error to find the smallest number of n where the average is at least 12.
Hi SpringFan25,
You're right, I've added the word average to the question. The issue is I'm not interested in a trial-and-error approach (as I've already run a numerical simulation and also done a trial-and-error using the above mentioned formula); what I would like is a more consistent mathematical approach.
btw I'm going to add the following two similar examples that might help explain the type of problem this is:
Other ways to think of the problem:
1. How many times on average should a 6 sided dice be rolled so as to get each of the 6 sides at least once?
2. I have a fair n-sided dice, which on average takes 12 rolls to get each side at least once, how many sides does the dice have?
Hello all,
I'm stuck on another simple question, I think the solution is based somewhere on reversing Euler's approximation for harmonic series, yet I'm unable to make the link between the math and the
McBurgerQueen wants to run a competition where the grand prize is a very expensive car. In order to enter the competition customers must possess 1 of each coupon that comes attached in a random manner to their McBurger.
However McBurgerQueen wants to ensure that everyone entering the competition purchases on average at least 12 burgers.
At least how many different coupons should there be?
I've run a simulation iterating from 2 to 10 coupons over 10^7 selections in each, and it seems that ~6 coupons is the smallest number required to average 12 burgers and to possess all unique coupons (5 averages ~11 and 7 averages ~14). I'd really like to know what the pure mathematical way is to solve this problem. Any assistance will be greatly appreciated.
Other ways to think of the problem:
1. How many times on average should a 6 sided dice be rolled so as to get each of the 6 sides at least once?
2. I have a fair n-sided dice, which on average takes 12 rolls to get each side at least once, how many sides does the dice have?
Coupon Collector's Problem
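For the record, here is the closed form behind the linked coupon-collector result (a standard computation, filled in for completeness). With $n$ equally likely coupons, once you hold $k$ distinct coupons the chance that the next burger yields a new one is $(n-k)/n$, so the expected wait for it is $n/(n-k)$, giving
$E[T_n] = \sum_{k=0}^{n-1}\frac{n}{n-k} = n H_n$
where $H_n$ is the $n$th harmonic number. Then $E[T_5] = 5 \cdot \frac{137}{60} \approx 11.42$ and $E[T_6] = 6 \cdot \frac{49}{20} = 14.7$, so 6 is indeed the smallest number of coupons for which the average purchase reaches at least 12 burgers.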
{"url":"http://mathhelpforum.com/statistics/177318-probability-question-relating-harmonic-series.html","timestamp":"2014-04-18T07:13:03Z","content_type":null,"content_length":"48028","record_id":"<urn:uuid:5eed77b8-d1ab-428b-aec6-9e4dec3ccffa>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
A ball is dropped from the top of a 1,000 foot building. The height of the ball is half its original height after each bounce. What will the height of the ball be after 10 bounces? Using complete
sentences, explain the procedure taken to answer this question.
The height halves after each bounce, so after n bounces the height is 1000*(1/2)^n feet. After 10 bounces that is 1000/2^10 = 1000/1024, or roughly 0.98 feet. (Dividing by 2*10 = 20 to get 50 ft confuses ten successive halvings with a single division by 20.)
{"url":"http://openstudy.com/updates/4ee6f0cfe4b023040d4bd542","timestamp":"2014-04-20T06:32:58Z","content_type":null,"content_length":"27852","record_id":"<urn:uuid:23009696-838e-4cb3-9cb2-5bc902dd5855>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
The Pith of Performance
During a lunchtime discussion among recent GCaP class attendees, the topic of weather came up and I casually mentioned that the weather in Melbourne, Australia, can be very changeable because the continent is so old that there is very little geographical relief to moderate the prevailing winds coming from the west.
In general, Melbourne is said to have a mediterranean climate, but it can also be subject to cold blasts of air coming up from Antarctic regions at any time, especially during the winter. Fortunately, the island state of Tasmania acts as something of a geographical barrier against those winds. Understanding possible relationships between these effects presents an interesting exercise in correlation analysis.
Gathering Weather Data
Weather data for all major Australian cities are available from the Bureau of Meteorology. The subsequent discussion will employ weather records for the past calendar year (2013) collected from Perth, in Western Australia, and from Hobart and Launceston, in Tasmania. The city of Perth has been in the news lately because it's the base for aircraft searching for wreckage of Malaysia Airlines flight MH370.
The available weather indicators include daily min and max temperatures and rainfall.
Figure 1 shows maximum temperatures in degrees Celsius. The trough occurs in the middle of the calendar year because that's the winter season in Australia.
Which city is most strongly correlated with Melbourne's temperatures? It's impossible to decide based on the raw data alone. To answer such questions more rigorously we can use the cross correlation
function (CCF) in R.
Cross Correlation Plots
Applying the ccf() function to the data in Fig. 1:
df.mel <- read.table("~/.../mel.csv",header=TRUE,sep=",")
df.per <- read.table("~/.../per.csv",header=TRUE,sep=",")
df.hob <- read.table("~/.../hob.csv",header=TRUE,sep=",")
df.laun <- read.table("~/.../laun.csv",header=TRUE,sep=",")
mel.ts <- ts(df.mel$MaxT)
per.ts <- ts(df.per$MaxT)
hob.ts <- ts(df.hob$MaxT)
laun.ts <- ts(df.laun$MaxT)
# CCF of each city against Melbourne; a 3-panel layout is assumed for Fig. 2
par(mfrow=c(3,1))
ccf(per.ts, mel.ts)
ccf(hob.ts, mel.ts)
ccf(laun.ts, mel.ts)
produces the plots shown in Fig. 2.
Like a ripple in a pond, there can be a delay or lag between an event exhibiting itself in one time series and its effect showing up in the other time series. So, simply calculating the correlation
coefficient at the same point in time for both series is not sufficient.
The CCF is defined as the set of correlations (height of the vertical line segments in Fig. 2) between two time series $x_{t+h}$ and $y_t$ for lags $h = 0, \pm1, \pm2, \ldots$. A negative value for $h$ represents a correlation between the x-series at a time before $t$ and the y-series at time $t$. If, for example, the lag $h = -3$, then the cross correlation value would give the correlation between $x_{t-3}$ and $y_t$.
The CCF helps to identify lags of $x_t$ that could be predictors of the $y_t$ series.
1. When $h < 0$ (left side of plots in Fig. 2), $x$ leads $y$.
2. When $h > 0$ (right side of plots in Fig. 2), $x$ lags $y$.
For the weather correlation analysis, we would like to identify which series is leading or influencing the Melbourne time series.
Interpreting the CCF Plots
The dominant or fundamental signal over 365 days in Fig. 1 resembles one period of a sine wave. The first row in Fig. 3 shows two pure sine waves (red and blue) that are in phase with each other (left column). The correlation plot (right column) shows a peak at $h=0$, in the middle of the plot, indicating that the two curves are most strongly correlated when there is no horizontal displacement between the curves.
The second row in Fig. 3 shows sine waves that are 90 degrees out of phase with each other (left column). The correlation plot (right column) shows that these two curves are most weakly correlated at zero lag. Conversely, they are more strongly correlated at $h=-16$ (left side of CCF plot) or anti-correlated at $h=+16$ (right side of CCF plot).
The third row in Fig. 3 is similar to the first row but with some Gaussian noise added to both signals. The correlation plot shows a slight loss of symmetry but otherwise doesn't indicate much
additional structure because the randomness of the noise in both signals tends to cancel out.
Figure 4 has the same signals as Fig. 3 but with 365 sample points to match the weather data in Fig. 1. This has the effect of broadening out the correlation plots and, indeed, they do more closely
resemble the correlation plots in Fig. 2.
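The paired-sine experiment is easy to reproduce. Here is a minimal sketch; the phase, sample count and noise level are assumptions, not values taken from the original figures:
# Two noisy sinusoids, 90 degrees out of phase
# (cf. the second and third rows of Figs. 3-4)
t <- seq(0, 2*pi, length.out=365)
x <- sin(t) + rnorm(365, sd=0.1)
y <- sin(t + pi/2) + rnorm(365, sd=0.1)
ccf(ts(x), ts(y)) # the peak correlation occurs away from zero lag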
# Perth-Melbourne analysis
ccf(per.ts, mel.ts, plot=FALSE) # plot=FALSE prints the table of correlations
# produces the numerical output:
Autocorrelations of series ‘X’, by lag
-22 -21 -20 -19 -18 -17 -16 -15 -14 -13 -12 -11 -10 -9 -8 -7 -6
0.542 0.498 0.488 0.511 0.525 0.545 0.563 0.550 0.549 0.554 0.588 0.576 0.599 0.594 0.549 0.540 0.615
-5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9 10 11
0.631 0.617 0.656 0.595 0.508 0.475 0.512 0.559 0.605 0.618 0.555 0.500 0.512 0.543 0.548 0.536 0.533
12 13 14 15 16 17 18 19 20 21 22
0.525 0.504 0.523 0.520 0.489 0.478 0.494 0.501 0.484 0.503 0.519
pm <- ccf(per.ts,mel.ts)
max.pmc <- max(pm$acf)
# produces the output: [1] 0.6564855
# At what lag?
pm$lag[which(pm$acf > max.pmc-0.01 & pm$acf < max.pmc+0.01)]
# produces the output: [1] -3
We can carry out the same analysis for Hobart-Melbourne time series:
# Hobart-Melbourne analysis
hm <- ccf(hob.ts,mel.ts)
max.hmc <- max(hm$acf)
hm$lag[which(hm$acf > max.hmc-0.01 & hm$acf < max.hmc+0.01)]
# 0.8269252 occurs at lag h = 0
and Launceston-Melbourne time series:
# Launceston-Melbourne analysis
lm <- ccf(laun.ts,mel.ts)
max.lmc <- max(lm$acf)
lm$lag[which(lm$acf > max.lmc-0.01 & lm$acf < max.lmc+0.10)]
# Two lags satisfy this criterion
# 0.801 occurs at lag h = 0
# 0.791 occurs at lag h = -1
Next, we need to interpret all these statistics.
Analysis and Conclusions
It does indeed take about three days for weather to cross the 2000 miles between Perth and Melbourne. But the correlation at lag $h = -3$ is only 0.66, whereas it's closer to 0.80 for Hobart and Launceston. To help interpret these correlation values we note the approximate coordinates of the respective cities:
Perth: 31.9°S, 115.9°E
Melbourne: 37.8°S, 145.0°E
Launceston: 41.4°S, 147.1°E
Hobart: 42.9°S, 147.3°E
The prevailing westerly winds originate with the Roaring Forties in the Indian Ocean.^† That name is a reference to 40 degrees south latitude. Melbourne is located at about 38 degrees south latitude.
Perth, on the other hand, is located at a latitude much further north; it's even north of Sydney! In addition, there is a considerable desert region (roughly two thirds of the breadth of the
continent) between Perth and Melbourne. Therefore, we can expect the correlations between Perth and Melbourne temperatures to be weaker than those associated with the Tasmanian cities.
Hobart is further south than Melbourne and, although it's on the eastern side of the Tasmanian island, there is no other land mass between the longitudes at Perth and Hobart. Hence, Hobart and
Melbourne are more strongly correlated than Perth at zero lag.
Launceston is closest to Melbourne by latitude and, at zero lag, has a similar correlation to that for Hobart. No surprise there. There is one difference, however. A similar correlation exists at $h
= -1$, which means Launceston leads Melbourne by a day, even though it is slightly east of Melbourne by about two degrees longitude. How can that be? One possibility is that it represents the effect
of more southerly winds, such as those originating with the Screaming Sixties, circulating approximately counter-clockwise around the east coast of Tasmania. Cross correlated cross winds.
I'll cover more about time series analysis in the upcoming Guerrilla Data Analysis Techniques class.
† These are the same winds used by the early European traders, like the Dutch and Portuguese, to reach the "spice islands" in the Indonesian archipelago. Hence the term trade winds. The basic idea was, you sail down the west coast of Africa, round the Cape of Good Hope, catch the Roaring Forties across the Indian Ocean, and hang a left at about 100 degrees east longitude. All this was at a time before longitude could be determined accurately. In 1616, the Dutch naval explorer, Dirk Hartog, missed that off-ramp and found himself staring at the west coast of Australia. Hence, the name of the continent was changed from Terra Australis (Southern Land) to Nieuw Holland (New Holland) on maps of the day. In a similar way, another Dutch navigator, Abel Tasman, sighted the west coast of Tasmania and named it Van Diemen's Land. Two centuries later, it was renamed after Tasman.
How can you determine the performance impact on SLAs after an N+1 redundant hosting configuration fails over? This question came up in the Guerrilla Capacity Planning class this week. It can be
addressed by referring to a multi-server queueing model.
N+1 = 4 Redundancy
We begin by considering a small-N configuration of four hosts where the load is distributed equally to each of the hosts. For simplicity, the load distribution is assumed to be performed by some kind
of load balancer with a buffer. The idea of N+1 redundancy is that the load balancer ensures all four hosts are utilized. The common misconception is that none of the hosts should consume more than
75% of their available capacity. Referring to Fig. 1, the total consumed capacity is assumed to be $4 \times 3/4 = 3$ or 300% of the configuration (rather than 400%) so that, when one of the hosts
fails, its lost capacity is compensated by redistributing that load across the remaining three available hosts.
The circles in Fig. 1 represent hosts and rectangles represent incoming requests buffered at the load-balancer. The blue area in the circles signifies the available capacity of a host, whereas white
signifies unavailable capacity. When one of the hosts fails, its load must be redistributed across the remaining three hosts. What Fig. 1 doesn't show is the performance impact of this capacity
N+1 = 4 Performance
The performance metric of interest is response time as it pertains to service targets expressed in SLAs. To assess the performance impact of failover, we model the N+1 configuration as an M/M/4 queue
with per-server utilization constrained to be no more than 75% busy.
When a failover event occurs, the configuration becomes an M/M/3 queue. The corresponding response time curves are shown in Fig. 2. The y-axis is the response time, R, expressed as multiples of the
service period, S. A typical scenario is where the SLA (horizontal line) corresponds to maximum or near maximum utilization. The SLA in this case is a mean response time no greater than 1.45 service
On failover, when only three hosts remain available, the SLA will be exceeded because the utilization of each host will be heading for 100% due to the additional load. (See Fig. 1)
Correspondingly, this has the effect of pushing the response time very high up the M/M/3 curve. In order to maintain the SLA, the load would have to be reduced so that it corresponds to a lower
utilization of 68.25%. Fig. 3 shows this effect in more detail.
In practice, proper capacity planning, such as the M/M/m queueing models employed in this discussion, would have revealed that the maximum host utilization should not have exceeded 68.25% busy in the
N+1 configuration.
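For readers who want to reproduce the numbers, here is a minimal R sketch of the M/M/m response-time calculation, using the exact Erlang-C form; the constants quoted above may differ slightly from these values, depending on the approximation used to draw the figures:
# Erlang-C wait probability for an M/M/m queue at per-server utilization rho
erlangC <- function(m, rho) {
    a <- m * rho  # offered load
    tail <- (a^m / factorial(m)) / (1 - rho)
    tail / (sum(a^(0:(m-1)) / factorial(0:(m-1))) + tail)
}
# Mean response time R in units of the service period S
Rnorm <- function(m, rho) 1 + erlangC(m, rho) / (m * (1 - rho))
Rnorm(4, 0.75)  # about 1.5 service periods with all four hosts up
# Per-server utilization at which three hosts give the same response time
uniroot(function(r) Rnorm(3, r) - Rnorm(4, 0.75), c(0.1, 0.99))$root  # about 0.69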
Large-N Performance
With a large number of hosts the difference in response time after failover becomes less significant. This follows from the fact that response-time curves for an M/M/m queue are flatter at high
utilizations when the number of servers, m, is large. The effect is illustrated in Fig. 4 for N+1 = 16 hosts. (cf. Fig. 2)
However, the most common installations are small-N configurations of the type discussed in the previous section. Therefore, preserving your SLA requires capacity planning based on host utilizations
that match your SLA targets.
Thanks to the GCaP class participants for doing a group-edit on this post in real time.
This article was originally posted in 2007. When I updated the image today (in 2014), it reappeared with the more recent date and I don't know how to override that (wrong) timestamp. This seems to be
a bug in Blogger.
In the aftermath of a discussion about software management with my colleague Steve Jenkin, I looked up the Mythical Man-Month concept in wikipedia. The main thesis of Fred Brooks, often referred to
as "Brooks's law [sic]," is simply stated as:
Adding manpower to a late software project makes it later.
In other words, some number of cooks are necessary to prepare a dinner, but adding too many cooks in the kitchen can inflate the delivery schedule.
Highlighting Facebook's mistakes and weaknesses is a popular sport. When you're the 800 lb gorilla of social networking, it's inevitable. The most recent rendition of FB bashing appeared in a serious
study entitled, Epidemiological Modeling of Online Social Network Dynamics, authored by a couple of academics in the Department of Mechanical and Aerospace Engineering (???) at Princeton University.
They use epidemiological models to explain adoption and abandonment of social networks, where user adoption is analogous to infection and user abandonment is analogous to recovery from disease, e.g.,
the precipitous attrition witnessed by MySpace. To this end, they employ variants of an SIR (Susceptible Infected Removed) model to predict a steep decline in Facebook activity in the next few years.
Channeling Mark Twain^†, FB engineers lampooned this conclusion by pointing out that Princeton would suffer a similar demise under the same assumptions.
Irrespective of the merits of the Princeton paper, I was impressed that they used an SIR model. It's the same one I used, in R, last year to reinterpret Florence Nightingale's zymotic disease data
during the Crimean War as resulting from epidemic spreading.
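For the curious, here is a minimal sketch of the SIR dynamics in Python rather than R; the beta and gamma values below are illustrative, not the rates fitted in either study.

```python
def sir(beta, gamma, s, i, r, dt=0.1, steps=2000):
    """Forward-Euler integration of dS/dt=-bSI, dI/dt=bSI-gI, dR/dt=gI."""
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt    # S -> I: "adoption" plays infection
        new_rec = gamma * i * dt       # I -> R: "abandonment" plays recovery
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

traj = sir(beta=0.5, gamma=0.2, s=0.99, i=0.01, r=0.0)
print("peak infected fraction:", round(max(i for _, i, _ in traj), 3))
```

The qualitative behavior is the point: infections (adoptions) rise, peak, and then decay as the susceptible pool is exhausted, which is exactly the trajectory projected onto user activity.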
Another way in which FB was inadvertently dinged by incorrect interpretation of information—this time it was the math—occurred in the 2010 movie, "The Social Network" that tells the story of how FB
(then called Facemash) came into being. While watching the movie, I noticed that the ranking metric that gets written on a dorm window (only in Hollywood) is wrong! The correct ranking formula is
analogous to the Fermi-Dirac distribution, which is key to understanding how electrons "rank" themselves in atoms and semiconductors.
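For concreteness, here is the Elo expected-score formula (the standard pairwise ranking formula of this kind) next to the Fermi-Dirac occupation function; the snippet is illustrative, not a transcription of what appears on screen in the film.

```python
from math import exp, log

def elo_expected(ra, rb):
    """Expected score of player A (rating ra) against player B (rating rb)."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def fermi_dirac(e, mu, kT):
    """Mean occupation of a single-electron state at energy e."""
    return 1.0 / (exp((e - mu) / kT) + 1.0)

# With the rating deficit playing the role of e - mu and kT = 400/ln(10),
# the two logistic curves coincide exactly:
print(elo_expected(1400, 1600))                 # ~0.240
print(fermi_dirac(1600, 1400, 400 / log(10)))   # same value
```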
^†"The reports of my death have been greatly exaggerated."
The upcoming GBoot and GCaP training classes are your fast track to enterprise performance and capacity management. You can now register entirely online using either your corporate or personal credit card.
New topics include:
• The effect of think-time settings in load tests
• Closed vs. open workloads in load testing
Classic topics include:
• There are only 3 performance metrics
• How performance metrics are related to one another
• How to quantify scalability with the Universal Scalability Law (USL)
• IT Infrastructure Library (ITIL) for Guerrillas
• The Virtualization Spectrum from hyperthreads to hyperservices
As usual, all classes are held at our lovely Larkspur Landing Pleasanton location in California. Attendees should bring their laptops to the class as course materials are provided on a flash drive.
Larkspur Landing also provides free wi-fi Internet in their residence-style rooms as well as the training room.
The question of accurately measuring processor utilization with hyper-threading (HT) enabled came up recently in a Performance Engineering Group discussion on Linked-in. Since I spent some
considerable time looking into this issue while writing my Guerrilla Capacity Planning book, I thought I'd repeat my response here (slightly edited for this blog), in case it's useful to a broader
audience interested in performance and capacity management. Not much has changed, it seems.
In a nutshell, the original question concerned whether or not it was possible for a single core to be observed running at 200% busy, as reported by Linux top, when HT is enabled.
This question is an old canard (well, "old" for multicore technology). I call it the "Missing MIPS" paradox. Regarding the question, "Is it really possible for a single core to be 200% busy?" the
short answer is: never! So, you are quite right to be highly suspicious and confused.
You don't say which make of processor is running on your hardware platform, but I'll guess Intel. Very briefly, the OS (Linux in your case) is being lied to. Each core has 2 registers where
inbound threads are stored for processing. Intel calls these AS (Architectural State) registers. With HT *disabled*, the OS only sees a single AS register as being available. In that case, the
mapping between state registers and cores is 1:1. The idea behind HT is to allow a different application thread to run when the currently running app stalls, e.g., due to branch misprediction, bubbles
in the pipeline, etc. To make that possible, there has to be another port or AS register. That register becomes visible to the OS when HT is enabled. However, the OS (and all the way up the food
chain to whatever perf tools you are using) now thinks twice the processor capacity is available, i.e., 100% CPU at each AS port.
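A back-of-envelope sketch of the resulting phantom capacity; the 1.25x hyper-threading throughput gain assumed below is only an illustrative ballpark, not a measured figure for any particular processor.

```python
def missing_mips(logical_busy, ht_speedup=1.25):
    """logical_busy: %-busy reported for the two AS ports of one core."""
    apparent = sum(logical_busy)               # what top believes, up to 200%
    real = min(apparent, 100.0 * ht_speedup)   # what the core can deliver
    return apparent, apparent - real

apparent, phantom = missing_mips([100.0, 100.0])
print(f"reported: {apparent:.0f}%, phantom capacity: {phantom:.0f}%")
```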
In a previous post, I applied my rules-of-thumb for response time (RT) percentiles (or more accurately, residence time in queueing theory parlance), viz., 80th percentile: $R_{80}$, 90th percentile:
$R_{90}$ and 95th percentile: $R_{95}$ to a cellphone application and found that the performance measurements were not completely consistent. Since the relevant data only appeared in a journal blog,
I didn't have enough information to resolve the discrepancy; which is ok. The first job of the performance analyst is to flag performance anomalies but most probably let others resolve them—after
all, I didn't build the system or collect the measurements.
More importantly, that analysis was for a single server application (viz., time-to-first-fix latency). At the end of my post, I hinted at adding percentiles to PDQ for multi-server applications.
Here, I present the corresponding rules-of-thumb for the more ubiquitous multi-server or multi-core case.
Single-server Percentiles
First, let's summarize the Guerrilla rules-of-thumb for single-server percentiles (M/M/1 in queueing parlance):
\begin{align}
R_{1,80} &\simeq \dfrac{5}{3} \, R_{1} \label{eqn:mm1r80}\\
R_{1,90} &\simeq \dfrac{7}{3} \, R_{1}\\
R_{1,95} &\simeq \dfrac{9}{3} \, R_{1} \label{eqn:mm1r95}
\end{align}
where $R_{1}$ is the statistical mean of the measured or calculated RT and $\simeq$ denotes approximately equal. A useful mnemonic device is to notice the numerical pattern in the fractions: all denominators are 3 and the numerators are successive odd numbers starting with 5.
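Where do those fractions come from? In an M/M/1 queue the residence time is exponentially distributed with mean $R_{1}$, so the p-th percentile is exactly $-\ln(1-p)\,R_{1}$, and the rules above are convenient roundings of that, as this quick check shows:

```python
from math import log

# M/M/1 residence time is exponential with mean R1, so the p-th percentile
# is -ln(1 - p) * R1; compare that against the 5/3, 7/3, 9/3 roundings:
for p, rule in [(0.80, 5/3), (0.90, 7/3), (0.95, 9/3)]:
    print(f"R{int(100 * p)}: exact {-log(1 - p):.3f} R1, rule {rule:.3f} R1")
```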
A cluster-based selective cooperative spectrum sensing scheme in cognitive radio
Developing an effective cooperative spectrum sensing (CSS) scheme in cognitive radio (CR), which is considered as promising system for enhancing spectrum utilization, is necessary. In this paper, a
cluster-based optimal selective CSS scheme is proposed for reducing reporting time and bandwidth while maintaining a certain level of sensing performance. Clusters are organized based on the identified signal-to-noise ratio (SNR) of the primary signal, and the cluster head in each cluster is dynamically chosen according to the sensing data qualities of CR users. The cluster sensing
decision is made based on an optimal threshold for selective CSS which minimizes the probability of sensing error. A parallel reporting mechanism based on frequency division is proposed to considerably reduce the time needed for the clusters to report their decisions to the fusion center. In the fusion center, the optimal Chair-Varshney rule is utilized to obtain a high sensing performance based on the available clusters' information.
Cognitive radio; Cooperative spectrum sensing; Cluster; Selective combination; Parallel reporting mechanism
1 Introduction
Cognitive radio (CR) has been recently proposed as a promising technology to improve spectrum utilization by enabling secondary access to unused licensed bands. A prerequisite to this secondary
access is that it causes no interference to the primary system. This requirement makes spectrum sensing a key function in cognitive radio systems. Among common spectrum sensing techniques, energy detection is an attractive method due to its simplicity and efficiency. However, the major disadvantage of energy detection is the hidden node problem, in which the sensing node cannot distinguish between an
idle and a deeply faded or shadowed band [1]. Cooperative spectrum sensing (CSS) which uses a distributed detection model has been considered to overcome that problem [2-12].
Cooperation among CR users (CUs) is usually coordinated by a fusion center (FC). For each sensing interval, CUs will send their sensing data to the FC. In the FC, all local sensing data will be
combined to make a final decision on whether the primary signal is present or absent. An optimal data fusion rule was first considered by Chair and Varshney in [13]. Despite a good performance, the
requirement for knowledge of detection and false alarm probabilities at each local node is still a barrier to the optimal fusion rule.
CSS schemes require a large communication resource including sensing time delay, control channel overhead, and consumption energy for reporting sensing data to the FC, especially when the network
size is large. There are some previous works [3-9] that considered this problem. In our previous work [3], we proposed an ordered sequential reporting mechanism based on sensing data quality to
reduce communication resources. A similar sequential ordered report transmission approach was considered for reducing the reporting time in [4]. However, the reporting time of these methods is still
unpredictably long. In [5], the authors proposed to use a censored truncated sequential spectrum sensing technique for saving energy. On the other hand, cluster-based CSS schemes are considered for
reducing the energy of CSS [6] and for minimizing the bandwidth requirements by reducing the number of terminals reporting to the fusion center [7]. In [8], Chen et al. proposed a cluster-based CSS
scheme to optimize the cooperation overhead along with the sensing reliability. In fact, these proposed cluster schemes can reduce the amount of direct cooperation with the FC but cannot reduce the
communication overhead between CUs and the cluster header. A similar problem can be observed in the cluster scheme in [9], though the optimal cluster size to maximize the throughput used for
negotiation is identified. Another consideration of the cluster scheme is to enhance sensing performance when the reporting channel suffers from a severe fading environment [10,11].
In this paper, we propose a cluster-based selective CSS scheme which utilizes an efficient selective method for the best quality sensing data and a parallel reporting mechanism. The selective method,
which is usually adopted in cooperative communications [14,15], is applied in each cluster to implicitly select the best sensing node during each sensing interval as the cluster header without
additional collaboration among CUs. The parallel reporting mechanism based on frequency division is considered to strongly reduce the reporting time of the cluster decision. In the FC, the optimal
Chair-Varshney rule (CV rule) is utilized to obtain a high sensing performance based on the available clusters' signal-to-noise ratios (SNRs). In this way, the proposed cooperative sensing will be
performed with an extremely low cooperation resource while a certain high level of sensing performance is ensured.
The remainder of this paper is organized as follows. In Section 2, some background on spectrum sensing and optimal fusion rule is described. In Section 3, we present system descriptions. The proposed
system model and detailed descriptions of the proposed cluster-based selective CSS scheme are also given in Section 4. Simulation results are shown in Section 5. Finally, the conclusions are drawn in
Section 6.
2 Preliminaries
2.1 Local spectrum sensing
Each CU conducts a spectrum sensing process, called local spectrum sensing in the distributed scenario, to detect the primary user (PU) signal. Local spectrum sensing at the ith CU is essentially a binary hypothesis testing problem:
$$x_i(t) = \begin{cases} n(t), & H_0 \\ h_i \, s(t) + n(t), & H_1 \end{cases} \qquad (1)$$
where $H_0$ and $H_1$ correspond, respectively, to the hypotheses of absence and presence of the PU signal, $x_i(t)$ represents the received data at CU $i$, $h_i$ denotes the gain of the channel between the PU and CU $i$, $s(t)$ is the signal transmitted from the primary user, and $n(t)$ is additive white Gaussian noise. Additionally, the channels corresponding to different CUs are assumed to be independent and, further, all CUs and PUs share a common spectrum allocation.
Among various methods for spectrum sensing, energy detection has been shown to be quite simple, quick, and able to detect the primary signal even if the features of the primary signal are unknown. Here, we consider energy detection for local spectrum sensing. Figure 1 shows the block diagram of an energy detection scheme.
Figure 1. Block diagram of the energy detection scheme.
To measure the signal power in a particular frequency region in the time domain, a band-pass filter is applied to the received signal, and the power of the signal samples is then measured at the CU. The estimate of the received signal power at CU $i$ is given by
$$x_E(i) = \sum_{j=1}^{N} |x_j|^2 \qquad (2)$$
where $x_j$ is the jth sample of the received signal and $N = 2TW$, in which T and W correspond to the detection time and the signal bandwidth in hertz, respectively.
If the primary signal is absent, $x_E(i)$ follows a central chi-square distribution with N degrees of freedom; otherwise, $x_E(i)$ follows a noncentral chi-square distribution with N degrees of freedom and a noncentrality parameter $\theta_i = N\gamma_i$, i.e.,
$$x_E(i) \sim \begin{cases} \chi^2_{N}, & H_0 \\ \chi^2_{N}(\theta_i), & H_1 \end{cases} \qquad (3)$$
When N is relatively large (e.g., N > 200) [16], $x_E(i)$ can be well approximated as a Gaussian random variable under both hypotheses $H_0$ and $H_1$, according to the central limit theorem, such that
$$x_E(i) \sim \begin{cases} \mathcal{N}(N,\; 2N), & H_0 \\ \mathcal{N}\big(N(1+\gamma_i),\; 2N(1+2\gamma_i)\big), & H_1 \end{cases} \qquad (4)$$
where $\gamma_i$ is the SNR of the primary signal at the CU.
For the case of local sensing or hard decision fusion, the CUs make the local sensing decision based on an energy threshold $\lambda_i$ as follows:
$$D_i = \begin{cases} 1, & x_E(i) \geq \lambda_i \\ -1, & x_E(i) < \lambda_i \end{cases} \qquad (5)$$
where $D_i = 1$ and $D_i = -1$ mean that the hypotheses $H_1$ and $H_0$ are declared at the ith CU, respectively. Based on the Gaussian approximation (4), the local probability of false alarm and the local probability of detection can be determined as
$$p_{f,i} = Q\!\left(\frac{\lambda_i - N}{\sqrt{2N}}\right) \qquad (6)$$
$$p_{d,i} = Q\!\left(\frac{\lambda_i - N(1+\gamma_i)}{\sqrt{2N(1+2\gamma_i)}}\right) \qquad (7)$$
respectively, where $Q(\cdot)$ is the Gaussian Q-function, i.e., $Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt$.
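As a quick illustration of how (6) and (7) get used, the sketch below (ours, not code from the paper) picks the threshold for a target false-alarm probability and evaluates the resulting detection probability; N = 2TW = 600 follows from the 50 μs sensing time and 6 MHz bandwidth quoted in Section 5, and -10 dB is one of the simulated SNRs.

```python
from math import sqrt
from scipy.stats import norm

def design_detector(N, snr, target_pf):
    lam = N + sqrt(2 * N) * norm.isf(target_pf)  # invert (6) for lambda_i
    pd = norm.sf((lam - N * (1 + snr)) / sqrt(2 * N * (1 + 2 * snr)))  # (7)
    return lam, pd

# N = 2TW = 2 * 50e-6 s * 6 MHz = 600 samples; SNR of -10 dB:
lam, pd = design_detector(N=600, snr=10 ** (-10 / 10), target_pf=0.1)
print(f"threshold {lam:.1f}, detection probability {pd:.3f}")
```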
2.2 The optimal fusion rule for global decision
Chair and Varshney provided the optimal data fusion rule in a distributed local hard decision detection system [13]. This optimal rule is in fact the sum of weighted local decisions where the weights
are functions of probabilities of detection and false alarm.
The optimal fusion rule is based on the likelihood ratio test as follows:
$$\frac{P(D_1, \ldots, D_n \mid H_1)}{P(D_1, \ldots, D_n \mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{\pi_0 (C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})} \qquad (8)$$
where $\pi_0$ and $\pi_1$ are the prior probabilities of the absence and presence of the PU signal, respectively, and $C_{00}$, $C_{01}$, $C_{10}$, and $C_{11}$ are the decision costs. If we choose $C_{00} = C_{11} = 0$ and $C_{01} = C_{10} = 1$, the likelihood ratio test follows the minimum probability of error criterion [17], and (8) can be rewritten as:
$$\frac{P(D_1, \ldots, D_n \mid H_1)}{P(D_1, \ldots, D_n \mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{\pi_0}{\pi_1} \qquad (9)$$
Since the decisions $\{D_i\}$ are independent, the log-likelihood ratio test corresponding to (9) is as follows:
$$\sum_{i} \log \frac{P(D_i \mid H_1)}{P(D_i \mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \log \frac{\pi_0}{\pi_1} \qquad (10)$$
If $S_+$ and $S_-$ denote the sets of all i such that $D_i = 1$ and $D_i = -1$, respectively, then (10) can be computed by:
$$\sum_{i \in S_+} \log \frac{p_{d,i}}{p_{f,i}} + \sum_{i \in S_-} \log \frac{1 - p_{d,i}}{1 - p_{f,i}} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \log \frac{\pi_0}{\pi_1} \qquad (11)$$
Finally, the Chair-Varshney fusion rule can be rewritten in the form of the weighting formula as follows:
$$u_0 = \operatorname{sign}\Big( a_0 + \sum_{i} a_i \Big) \qquad (12)$$
where
$$a_0 = \log \frac{\pi_1}{\pi_0}, \qquad a_i = \begin{cases} \log \dfrac{p_{d,i}}{p_{f,i}}, & D_i = 1 \\ \log \dfrac{1 - p_{d,i}}{1 - p_{f,i}}, & D_i = -1 \end{cases} \qquad (13)$$
The local false alarm probability $p_{f,i}$ and local detection probability $p_{d,i}$ are defined in (6) and (7), respectively.
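For illustration, the weighted fusion rule (12)-(13) runs at the FC as in the following sketch (ours, not the paper's code); local decisions are +1/-1 as defined in (5):

```python
from math import log

def chair_varshney(decisions, pd, pf, pi0=0.5, pi1=0.5):
    """decisions: list of +/-1 local decisions as defined in (5)."""
    score = log(pi1 / pi0)
    for d, p_d, p_f in zip(decisions, pd, pf):
        if d == 1:
            score += log(p_d / p_f)                # i in S+
        else:
            score += log((1 - p_d) / (1 - p_f))    # i in S-
    return 1 if score > 0 else -1                  # +1 declares H1

# Two reliable CUs vote "present", one unreliable CU votes "absent":
print(chair_varshney([1, 1, -1], pd=[0.9, 0.8, 0.55], pf=[0.05, 0.1, 0.4]))
```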
3 System description
In this paper, the CR network, which shares the same spectrum band with a license system, utilizes a cluster-based CSS scheme as shown in Figure 2. The CR network is organized in multiple clusters in
each of which the CUs have an identical average SNR of the received primary signal. This identical SNR assumption can be practical when the clusters are divided according to geographical position,
i.e., adjacent CUs in a small area are gathered into a cluster. The header in each cluster is not fixed but dynamically selected for each sensing interval based on the quality of the sensing data at
each CU. In detail, the node with the most reliable sensing result will take on the cluster header’s roles which include making and reporting the cluster’s decision to the FC. In order to reduce the
reporting time and bandwidth, only the sensing data of the cluster header, which is the most reliable sensing data, is utilized to make the cluster decision. This method means that the decision of a
cluster is made according to the selective combination method. The FC will combine all cluster decisions to make a final decision and broadcast the final sensing decision to the whole network.
The fusion rule in the FC can be any kind of hard decision fusion rules such as an OR rule, AND rule, ‘K out of N’ rule, or Chair-Varshney rule. Without loss of generality, we propose the utilization
of the optimal Chair-Varshney rule at the FC since the SNR value of the received primary signal at the CU is available in this proposed scheme. However, there are three issues with the proposed
scheme that need to be considered:
1. How can the scheme efficiently select the cluster header, which is the node with the best quality for sensing data, for each sensing interval without any extra overhead among nodes in the cluster?
2. How can the cluster header optimally make the cluster decision?
3. What is the method for reporting the cluster decision to the FC?
The answers to these questions are given in the following section.
4 The proposed cluster-based selective CSS scheme
4.1 Selective CSS mechanism
In this subsection, we suggest a cluster header selection based on sensing data reliability. For each sensing interval, the CU with the most reliable sensing data in a cluster is selected to be the
cluster header. Obviously, the reliability of the sensing data can be evaluated by the log-likelihood ratio (LLR) of the sensing result. The LLR value of the received signal energy is given by:
$$\Lambda\big(x_E(i)\big) = \log \frac{f\big(x_E(i) \mid H_1\big)}{f\big(x_E(i) \mid H_0\big)} \qquad (14)$$
where $f(x_E(i) \mid H_j)$ is the probability density function (PDF) of $x_E(i)$ under hypothesis $H_j$. Since the SNRs of the received primary signals in a cluster are identical, the LLR of the ith user in the $c_j$th cluster can be considered to be drawn from the same LLR distribution Λ. For each cluster, therefore, the LLR value can be normalized so that it has zero mean as follows:
$$\tilde{\Lambda}_i = \Lambda_i - E[\Lambda] \qquad (15)$$
It is obvious that the reliability of the sensing data will be higher if the absolute value of the normalized LLR is larger. We propose utilization of the absolute value of the normalized LLR as the
reliability coefficient for selecting the cluster header as well as the selective cluster data.
In order to implicitly select the most reliable sensing data among CUs in a cluster without additional data collaboration, one contention time should be determined for each CU as follows:
where κ is a predefined constant such that the contention time is sufficient. Obviously, from this equation, the node with the highest absolute value of the normalized LLR will have the smallest
contention time. In contention, each CU must monitor the reporting channel and wait for a quiescent condition before considering itself as a cluster header, i.e., the node with the most reliable
sensing data, when the contention time expires. The CU that wins the contention will make a local cluster decision and report it to the FC based on its own sensing data as follows:
$$D_{c_j} = \begin{cases} 1, & \tilde{\Lambda}^{*} \geq \lambda_{c_j} \\ -1, & \tilde{\Lambda}^{*} < \lambda_{c_j} \end{cases} \qquad (17)$$
where $\tilde{\Lambda}^{*}$ is the normalized LLR with the highest absolute value and $\lambda_{c_j}$ is the cluster threshold. Next, we consider the problem of choosing the optimal cluster threshold.
4.2 Cluster threshold determination
In order to make a controllable cluster decision that follows a certain criterion such as the Neyman-Pearson criterion or minimum error probability criterion, one factor to consider is the
probability density function of the cluster’s selective sensing data which is utilized to make the cluster decision. In this subsection, we will formulate this requirement.
First, from (15), the normalized LLR distributions of a CU in the $c_j$th cluster are given by:
$$F_{\tilde{\Lambda}}(y) = F_{\Lambda}\big(y + E[\Lambda]\big) \qquad (18)$$
$$f_{\tilde{\Lambda}}(y) = f_{\Lambda}\big(y + E[\Lambda]\big) \qquad (19)$$
where $F_{\Lambda}$ and $f_{\Lambda}$ are the cumulative distribution function (CDF) and PDF of the LLR of the received primary signal power at the CUs in the $c_j$th cluster, respectively. These LLR distributions are mixtures of the conditional distributions under the two hypotheses, $f_{\Lambda}(y) = \pi_0\, f_{\Lambda}(y \mid H_0) + \pi_1\, f_{\Lambda}(y \mid H_1)$, where the conditional PDFs of the LLR under $H_0$ and $H_1$ are determined in [12]; their expressions involve the parameters $a = [N^2/4 + N \log(2\gamma+1)/\gamma](2\gamma+1)$ and $b = 2N(2\gamma+1)/\gamma$.
Since the SNRs of the received primary signal at the CUs in a cluster are identical and the selective data for a cluster is the highest absolute value of the normalized LLR, the distribution of the
selective cluster data will be equal to the distribution of the n[0]th absolute order sample, i.e., the sample with the highest absolute value, where n[0] is the number of CUs in the cluster. In
addition, the PDF of the $n_0$th absolute order sample is given by:
$$f_{(n_0)}(y) = n_0 \big[ F_{\tilde{\Lambda}}(|y|) - F_{\tilde{\Lambda}}(-|y|) \big]^{\,n_0-1} f_{\tilde{\Lambda}}(y) \qquad (22)$$
The derivation of (22) can be found in the Appendix. Similarly, the conditional PDF of the $n_0$th absolute order sample under the $H_j$ hypothesis, $j = 0, 1$, can be obtained.
For a specific value of the threshold $\lambda_{c_j}$, the probability of false alarm and the probability of detection of the $c_j$th cluster are, respectively, given by:
$$P_{f}^{c_j} = \int_{\lambda_{c_j}}^{\infty} f_{(n_0)}(y \mid H_0)\, dy \qquad (23)$$
$$P_{d}^{c_j} = \int_{\lambda_{c_j}}^{\infty} f_{(n_0)}(y \mid H_1)\, dy \qquad (24)$$
Since the probability of false alarm and the probability of detection of the $c_j$th cluster in (23) and (24) mainly depend on the received primary signal SNR and the number of nodes in the cluster, the cluster threshold can be determined off-line, in the initial phase of cluster establishment, based on the Neyman-Pearson criterion or the minimum error probability criterion. For the Neyman-Pearson criterion, the probability of false alarm is predefined and the cluster threshold is computed from (23). In this paper, we utilize the minimum error probability criterion to numerically determine the optimal cluster threshold through the following equation:
$$\lambda_{c_j}^{*} = \arg\min_{\lambda_{c_j}} \Big[ \pi_0\, P_{f}^{c_j}(\lambda_{c_j}) + \pi_1 \big(1 - P_{d}^{c_j}(\lambda_{c_j})\big) \Big] \qquad (25)$$
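The sketch below (an illustration, not the paper's code) carries out the minimum-error threshold search of (25) for the simplest case of a single CU under the Gaussian approximation (4); the cluster version substitutes the order-statistic PDFs of (22) into the same objective.

```python
from math import sqrt
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def error_prob(lam, N, snr, pi0=0.5, pi1=0.5):
    pf = norm.sf((lam - N) / sqrt(2 * N))                              # (6)
    pd = norm.sf((lam - N * (1 + snr)) / sqrt(2 * N * (1 + 2 * snr)))  # (7)
    return pi0 * pf + pi1 * (1 - pd)

N, snr = 600, 10 ** (-10 / 10)
res = minimize_scalar(error_prob, bounds=(N, N * (1 + snr)),
                      args=(N, snr), method="bounded")
print(f"optimal threshold {res.x:.1f}, minimum error probability {res.fun:.3f}")
```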
4.3 Parallel report mechanism
For implementing the proposed selective mechanism in a cluster, all CUs in a cluster have to monitor the control channel to determine the cluster header during the contention time. One question
raised here is how to arrange the contention time for multiple clusters in the network. Generally, there are two common solutions for this problem. The first approach is to assume that the contention
times of the clusters are carried out sequentially over time. This method requires a strict synchronization among CUs in the network and a long contention time to minimize the collision in contention
due to differences in transmission range. Obviously, this method can cause a long reporting time with a high rate of contention collision. The second approach is to assume that the contention times
of different clusters are conducted in parallel with different subcontrol channels. Since each cluster only reports a 1-bit hard decision to the FC, the subcontrol channel can be reduced to a pair of
frequencies corresponding to two possible values of a cluster decision. This means that a node in a certain cluster only monitors two predetermined frequencies during the contention time, and the
node who wins the contention will transmit only one predefined frequency to the FC according to its cluster decision. Normally, a control channel bandwidth is sufficient for allocating a reasonable
number of frequency pairs to clusters. For example, it is acceptable to divide 50 pairs of frequencies for 50 clusters in a 200-kHz control channel. Figure 3 shows an example of a sensing frame
structure for the proposed parallel report mechanism compared with the conventional fixed allocation direct reporting method.
In this method, the problems of strict synchronization and contention collision, which can occur with the previous method, are completely resolved. Indeed, with this parallel contention and reporting
mechanism, the synchronization among CUs can be looser since there is only one contention time that is identical to the reporting time. No collision between two cluster reports will occur since these
cluster decisions are transmitted at different frequencies. Even in the case that two CUs in a cluster have the same value of the most reliable sensing data, a collision still will not occur since
the two nodes will transmit the same frequency, and at the receiver side, the two transmitted signals can be treated as two versions of a multipath signal. The remaining problem with this parallel
reporting method is that the FC needs to be equipped with parallel communication devices such as an FFT block, which is usually used in an OFDM receiver, or a filter bank block to detect multiple
reporting frequencies. However, this requirement is not a big issue.
5 Simulation results
The simulation of the proposed cluster-based selective CSS scheme is conducted under the following assumptions:
•The licensed user (LU) signal is a DTV signal as in [18].
•The bandwidth of the PU signal is 6 MHz, and the AWGN channel is considered.
•The local sensing time is 50 μs.
•The probabilities of the presence and absence of the PU signal are both 0.5.
•The network is divided into n[c] clusters, each of which includes n[0] nodes.
First, we evaluate the sensing performance of the selective method in the cluster with three different received primary signal SNRs of -14, -12, and -10 dB when the number of nodes in the cluster
changes from 1 to 100. As shown in Figure 4, the probability of error decreases as the number of nodes in the cluster increases. However, the rate of decrease is low when the number of nodes in the cluster is large, especially when n[0]>10. Therefore, the selective method provides a clear gain in sensing efficiency only when the number of nodes is up to about 20.
Figure 4. Probability of sensing error in a cluster decision of the proposed selective method. This is for different numbers of nodes in the cluster when the received primary signal SNR is -14, -12,
and -10 dB.
Second, we assume that the network includes five clusters with different SNR values corresponding to -20, -18, -16, -14, and -12 dB. The error probabilities of the global CV rule-based conventional
direct reporting scheme, the cluster and global CV rule-based conventional cluster reporting scheme, and the proposed CSS scheme are then observed according to different values of cluster size. As
illustrated in Figure 5, the error probabilities of all CSS schemes decrease along with the increase of the cluster size. The direct conventional CV rule-based CSS scheme provides the best sensing
performance. The proposed CSS scheme outperforms the cluster and global CV rule-based conventional cluster CSS scheme when the cluster size is small, i.e., n[0]<8. When the cluster size is large,
i.e., n[0]>8, the sensing error probability of the proposed method is slightly higher than that of the conventional cluster scheme, which utilizes a CV rule at both cluster headers and FC. However,
it is noteworthy that the cost of this better performance of the conventional cluster and direct schemes, compared with the proposed scheme, is an extremely large amount of overhead, energy consumption, and reporting time for collecting decisions from all nodes in the network.
Figure 5. Probability of sensing error of the proposed and conventional CSS schemes. Probability of sensing error of the direct conventional CV rule-based scheme, the cluster and global CV rule-based
conventional cluster reporting scheme, and the proposed CSS schemes for different cluster sizes when the network includes five clusters with a SNR value corresponding to -20, -18, -16, -14, and -12
dB, respectively.
Finally, to clarify the energy efficiency and collection time savings, we first assume that E[0]=kE[1] where E[0] is the energy for transmitting the report from a cluster header to the FC and E[1]
is the energy for transmitting the report from a local node to the cluster header. Similarly, we assume that T[p]=lT[r] where T[p] is the parallel reporting time slot, and T[r] is the fixed
allocation reporting time slot (see Figure 3). We also assume that each cluster utilizes a separate reporting channel to transmit the sensing result from local nodes to the cluster header in the case
of a conventional cluster-based CSS scheme, and the guard interval between time slots is ignored. As a result, the reporting energy consumption and the total reporting time of the direct reporting
(DIR), the conventional cluster (CON), and the proposed (PROP) CSS schemes can be calculated by:
respectively. The energy consumption efficiency (EE) and the reporting time-saving efficiency (TE) of the conventional cluster and the proposed CSS schemes compared with the direct CSS scheme can be
easily obtained by EE [∗]=1-E[∗]/E[DIR] and TE [∗]=1-T[∗]/T[DIR], respectively, where the asterisk (*) can be replaced by CON or PROP.
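For illustration only, the sketch below evaluates the EE definition under assumed component models (every node transmits one report per sensing interval; a conventional cluster pays n0 - 1 intra-cluster reports plus one report to the FC). These models are our assumptions rather than the paper's exact expressions in Equation 26, but under them EE_PROP = 1 - 1/n0 is the k -> infinity limit, and hence the upper bound, of EE_CON, consistent with the behaviour described above.

```python
def energy_efficiency(n_c, n0, k):
    E1 = 1.0                              # intra-cluster report energy (unit)
    E0 = k * E1                           # cluster-head-to-FC report energy
    E_dir = n_c * n0 * E0                 # direct: every node reports to FC
    E_con = n_c * ((n0 - 1) * E1 + E0)    # assumed conventional-cluster model
    E_prop = n_c * E0                     # proposed: one report per cluster
    return 1 - E_con / E_dir, 1 - E_prop / E_dir

for k in (1, 4, 16):
    ee_con, ee_prop = energy_efficiency(n_c=5, n0=10, k=k)
    print(f"k={k}: EE_CON={ee_con:.3f}, EE_PROP={ee_prop:.3f}")
```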
When the number of clusters is constant at n[c]=5, as assumed in Figure 5, we can obtain the energy consumption efficiency and the reporting time saving as shown in Figure 6. Obviously, both cluster
schemes enable an increase in the energy efficiency and time savings along with the increase in cluster size. As illustrated in Equation 26 and in the simulation result of Figure 6, it can be
concluded that the energy efficiency of the proposed scheme is the upper bound of the conventional cluster scheme for all cases of k. Therefore, the proposed scheme achieves the highest energy
efficiency among cluster schemes.
Figure 6. Energy consumption efficiency of the proposed and conventional cluster-based CSS schemes. This efficiency is compared with the conventional direct reporting-based CSS scheme for different
cluster sizes and different values of k when the network includes n[c]=5 clusters.
Similarly, from Figure 7, we can see that the proposed CSS scheme provides higher reporting time savings than the conventional cluster scheme. In fact, for an acceptable value of l, i.e., l=4, the proposed scheme produces time savings greater than 80% compared with the conventional direct reporting CSS scheme, while the conventional cluster scheme reaches only about 75% at its highest saving.
Figure 7. Reporting time saving of the proposed and the conventional cluster-based CSS schemes. This time saving is compared with the conventional direct reporting-based CSS scheme for different
cluster sizes and different values of l when the network includes n[c]=5 clusters.
6 Conclusions
In this paper, we have proposed a cluster-based CSS scheme which includes the selective method in the cluster and the optimal fusion rule in the FC. The proposed selective combination method can
dramatically reduce the reporting time and energy consumption while achieving a certain high level of sensing performance especially when it is combined with the proposed frequency division-based
parallel reporting mechanism.
Derivation of Equation 22
Let Y denote a continuous random variable with PDF $f_Y(y)$ and CDF $F_Y(y)$, and let $Y_1, Y_2, \ldots, Y_{n_0}$ be a random sample of size $n_0$ drawn from Y. The corresponding ordered sample derived from the parent Y is $Y_{(1)}, Y_{(2)}, \ldots, Y_{(n_0)}$, which is arranged in increasing order of absolute value such that $|Y_{(1)}| \leq |Y_{(2)}| \leq \cdots \leq |Y_{(n_0)}|$. In order to determine the PDF of $Y_{(k)}$, we define the event $D_{k,y} = \{y \leq Y_{(k)} \leq y + \Delta y\}$. Thus, the probability of the event $D_{k,y}$ can be calculated by
$$\Pr(D_{k,y}) \simeq C_k \big[F_Y(y) - F_Y(-y)\big]^{\,k-1} \big[1 - F_Y(y) + F_Y(-y)\big]^{\,n_0-k} f_Y(y)\,\Delta y \qquad (27)$$
if $y \geq 0$, or
$$\Pr(D_{k,y}) \simeq C_k \big[F_Y(-y) - F_Y(y)\big]^{\,k-1} \big[1 - F_Y(-y) + F_Y(y)\big]^{\,n_0-k} f_Y(y)\,\Delta y \qquad (28)$$
if $y < 0$, where
$$C_k = \frac{n_0!}{(k-1)!\,(n_0-k)!} \qquad (29)$$
Consequently, the PDF of $Y_{(k)}$ is calculated using
$$f_{(k)}(y) = \lim_{\Delta y \to 0} \frac{\Pr(D_{k,y})}{\Delta y} \qquad (30)$$
By replacing Y with the normalized LLR $\tilde{\Lambda}$ and substituting $k = n_0$ into (30), Equation 22 can be obtained.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (NRF-2012R1A1A2038831 and NRF-2013R1A2A2A05004535).
Memory Efficient doubly linked list
Implement a doubly linked list which has only one pointer (it may not directly be of a pointer data type) per node.
An ordinary doubly linked list will require 2 pointers per node, one pointing to the previous node and another pointing to the next node in the list as shown below.
Doubly linked list looks like below
Where blue arrow is indicating the next pointer and red arrow is indicating the previous pointer.
Note that the Previous pointer of first node and the next pointer of last nodes are NULL.
This implementation of a linked list has 2 pointers per node. The question is:
Implement a doubly linked list using only one pointer per node. (Note that you should be able to navigate in both the forward and backward directions using the single pointer.)
We have only one pointer and that pointer will act as both the forward and the backward pointer. This is where XOR comes into play:
The XOR operation (as also used in this post) has some interesting properties:
A ^ B = C
C ^ A = B and C ^ B = A
i.e If we have C & A, then we can compute B and if we have C & B, then we can compute A.
We will use this property of XOR operator.
For each Node we will not store either the previous or next pointer, but we will store the XOR of previous & next pointer.
So if we have the below linked list
and if a1, a2, a3, a4 are the addresses of the nodes respectively, then the XOR list will store NULL ^ a2 at a1, a1 ^ a3 at a2, a2 ^ a4 at a3, and a3 ^ NULL at a4.
If you are traversing in the forward direction then, you will get the address of next node as below:
│Current Node│Next Node│Value of Pointer = (Address of Prev Node) ^ (Link of Current Node)│Address of Next Node│
│list │a1 │Directly │- │
│a1 │a2 │NULL ^ Link of a2 │NULL^NULL^a2 = a2 │
│a2 │a3 │a1 ^ Link of a2 │a1^a1^a3 = a3 │
│a3 │a4 │a2 ^ Link of a3 │a2^a2^a4 = a4 │
│a4 │NULL │a3 ^ Link of a4 │a3^a3^NULL = NULL │
Hence at each point, you have to store the address of previous node to compute the address of next node.
Address of Next Node = Address of Previous Node ^ pointer in the current Node.
Similarly to compute the address of previous node you need to know the address of next node.
Address of Previous Node = Address of Next Node ^ pointer in the current Node
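Python has no raw pointers, so the sketch below (a demonstration only) simulates node addresses with integer ids held in a registry dictionary; in C or C++ you would XOR uintptr_t addresses directly. It shows the traversal identity above: next = prev ^ link.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.link = 0  # XOR of previous and next node ids (0 plays NULL)

registry = {}  # id -> Node, stands in for dereferencing a pointer

def build(values):
    nodes = [Node(v) for v in values]
    ids = [id(n) for n in nodes]
    registry.update(zip(ids, nodes))
    for i, nid in enumerate(ids):
        prev_id = ids[i - 1] if i > 0 else 0
        next_id = ids[i + 1] if i + 1 < len(ids) else 0
        registry[nid].link = prev_id ^ next_id
    return ids[0]

def traverse(head_id):
    prev_id, curr_id = 0, head_id
    while curr_id:
        node = registry[curr_id]
        yield node.value
        # address of next = address of previous ^ link of current node
        prev_id, curr_id = curr_id, prev_id ^ node.link

print(list(traverse(build([10, 20, 30, 40]))))  # [10, 20, 30, 40]
```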
Advantages:
1. We save a lot of memory space. For each node we save one pointer; if there are n nodes, then memory consumption is reduced by O(n).
2. Reversing the doubly linked list (prev becomes next and vice versa) is normally an O(n) operation, since pointers must be changed for each node. In the above implementation, since we store the XOR of prev and next, the stored links remain unchanged, and only the head pointer needs to be changed to point to the last node.
Disadvantages:
1. Traversing requires more operations: moving to the next node means performing an XOR, not just following a next pointer.
The C++ implementation of this is available here…
Quantitative Conservation Biology: Theory and Practice of Population Viability Analysis
William F. Morris, Duke University and Daniel F. Doak, University of Colorado at Boulder
Conservation biology relies not only on the general concepts, but on the specific methods, of population ecology to both understand and predict the viability of rare and endangered species and to
determine how best to manage these populations. The need to conduct quantitative analyses of viability and management has spawned the field of "population viability analysis," or PVA, which, in turn,
has driven much of the recent development of useful and realistic population analysis and modeling in ecology in general. However, despite calls for the increased use of PVA in real-world
settings—developing recovery plans for endangered species, for example—a misperception remains among field-oriented conservation biologists that PVA models can only be constructed and understood by a
select group of mathematical population ecologists.
Part of the reason for the ongoing gap between conservation practitioners and population modelers has been the lack of an easy-to-understand introduction to PVA for conservation biologists with
little prior exposure to mathematical modeling as well as in-depth coverage of the underlying theory and its applications. Quantitative Conservation Biology fills this void through a unified
presentation of the three major areas of PVA: count-based, demographic, and multi-site, or metapopulation, models. The authors first present general concepts and approaches to viability assessment.
Then, in sections addressing each of the three fields of PVA, they guide the reader from considerations for collection and analysis of data to model construction, analysis, and interpretation,
progressing from simple to complex approaches to answering PVA questions. Detailed case studies use data from real endangered species, and computer programs to perform all described analyses
accompany the text.
The goal of this book is to provide practical, intelligible, and intuitive explanations of population modeling to empirical ecologists and conservation biologists. Modeling methods that do not
require large amounts of data (typically unavailable for endangered species) are emphasized. As such, the book is appropriate for undergraduate and graduate students interested in quantitative
conservation biology, managers charged with preserving endangered species, and, in short, for any conservation biologist or ecologist seeking to better understand the analysis and modeling of
population data.
“This book is unique because it is written much like a tutorial that describes the process of PVA in a well-organized and easy-to-follow manner and does so in sufficient detail (with specific
real-world examples) to serve as a starting point for more detailed treatments. . . . The book is sure to appeal to a wide audience and should serve as a useful resource for those who wish to better
understand PVAs, as well as for more experienced theoreticians and conservation practitioners.”
—Carlyle Brewster, American Entomologist
“William Morris and Daniel Doak have written a comprehensive volume on population viability analyses (PVA) that integrates the theory of population ecology with the analytical tools necessary to
synthesize observational data on the dynamics of populations. . . . The book is a thorough, quantitative treatment and should be approachable by conservation biologists and ecologists with an
interest in learning the details and application of PVA.”
—William R. Clark, Landscape Ecology
“This is a book we've needed for a decade. PVA is not only a bridge between science and policy, it is the flagship technology of conservation biology. When applied to strongly interacting species, it
is also a foundational tool of ecosystem conservation. Thank goodness, therefore, that this book is readable. Morris and Doak draw on a treasury of experience and eschew complexity except where
necessary. Their book is truly the definitive step in making PVA accessible to students and practitioners. Hooray!”
—Michael Soulé, The Wildlands Project
“Population viability analysis (PVA) has become one of the core tools of conservation practice. Although several recent books develop the theory of PVA, Morris and Doak are the first to produce a
'how to' handbook that is faithful to rigorous population theory, yet down-to-earth enough in presentation that a field biologist could learn the craft of PVA without any prior training. This is the
one book that puts the modern quantitative tools of conservation biology in the hands of the practitioner. No method is mentioned without also carefully working through an application using real data
at a patient pace, yet with lively prose. I hope that agency biologists, conservation planners, and land managers around the world either read this book or make sure someone on their staff is
familiar with its practical wisdom.”
—Peter Kareiva, Lead Scientist for The Nature Conservancy
“If you teach a course in introductory ecology, population ecology, conservation biology, or applied ecology, then you should have a copy of this book.... Morris and Doak have done a real service by
providing an effective self-contained manual for anybody who wants to do a state-of-the-art PVA but doesn't know how. If you do PVA or teach PVA, this book is required reading.”
—Stephen P. Ellner, Écoscience
“This important book makes it clear that well-designed demographical studies and PVAs are nowadays among the basics for any wildlife population to be studied and managed. It provides crucial tools
for a quantitative wildlife monitoring and conservation in the new millennium.”
—Falk Huettmann, The Canadian Field-Naturalist
Available here are listings of all MATLAB programs included as boxes in the text. There are some corrections in the zip file below (CORRECTIONS.m) to the programs as they appeared in print.
Download all the files in a .zip archive. (76k)
Box 2.1 - randdraw.m MATLAB code to simulate population trajectories using Equation 2.1, and drawing each year's annual growth rate from a list of observed rates.
Box 3.3 MATLAB code defining five functions used in calculating the extinction time cumulative distribution function and its confidence limits: extcdf.m, stdnormcdf.m, gammarv.m, betarv.m, and
chi2rv.m. These functions are used by many other programs in subsequent chapters.
Box 3.4 - extprob.m MATLAB code to calculate extinction probabilities and bootstrap confidence intervals.
Box 4.1 - tbar_ceiling.m MATLAB code to plot the mean time to extinction for the ceiling model (Equation 4.1) as functions of the carrying capacity and initial population size.
Box 4.4 - theta_logistic.m MATLAB code to predict the probability of extinction using the theta logistic model (Equation 4.2).
Box 4.5 - demstoch.m A MATLAB program to simulate growth of a density-dependent population with both environmental and demographic stochasticity. It would be straightforward to hybridize this code
with the program "theta_logistic" in Box 4.4 to compute the probability of extinction under both types of stochasticity.
Box 4.6 - ricker_corr.m MATLAB code to calculate the probability of quasi-extinction for the Ricker model with temporally autocorrelated environmental effects.
Box 4.7 - extremes.m. MATLAB code to calculate extinction risk in the presence of catastrophes and bonanzas.
Box 5.1 - correct_sigma2.m MATLAB code to correct a raw estimate of σ^2 for sampling variation when census counts represent means from replicate samples.
Box 5.2 - dennisholmes.m MATLAB code defining the function "dennisholmes", which estimates µ and σ^2 using the method of Holmes (2001).
Box 7.1 - eigenall.m MATLAB code defining the function "eigenall", which calculates eigenvalues and eigenvectors of the matrix A. This function is used by many other programs in Chapters 7, 8, 9, and 11.
Box 7.2 - box7_2.m Fragment of MATLAB code that uses the function "eigenall" defined in Box 7.1 to generate Equations 7.8, 7.9, and 7.11 using the semi-palmated sandpiper projection matrix in
Equation 7.5.
Box 7.3 - iidenv.m MATLAB code to simulate growth of a structured population in an iid stochastic environment. This program calls on the data file hudmats.m.
Box 7.4 - stoc_log_lam.m MATLAB code to estimate log λs by simulation and by Tuljapurkar's approximation. This program calls on the data file hudmats.m.
Box 7.5 - simext.m MATLAB code to simulate the extinction time cumulative distribution function. This program calls on the data file hudmats.m.
Box 7.6 - box7_6.m Fragment of MATLAB code to calculate the extinction time CDF for mountain golden heather using Tuljapurkar's approximation (Equations 7.12 and 7.13) and Equation 3.5. This program calls on the data file hudmats.m.
Box 8.1 - white.m MATLAB code to use White's method to correct for sampling variation.
Box 8.2 - kendall.m MATLAB code to use Kendall's method to correct for sampling variation.
Box 8.3 - betaval.m A second MATLAB function to make beta-distributed random numbers (see Box 3.3 for a different method). "betaval" returns a beta-distributed value with the specified CDF value. The
program BetaDemo is also included, showing the use of betaval. This function is used by many other programs in Chapters 8, 9, and 11.
Box 8.4 - lnorms.m A MATLAB m-file for the function lnorms, which returns random lognormal values. This function is used by many other programs in Chapters 8, 9, and 11.
Box 8.5 - stretchbetaval.m A MATLAB m-file defining the function stretchbetaval, which returns stretched beta-distributed values. Note that this program uses betaval, defined in Box 8.3. The program
StBetaDemo shows the use of stretchbetaval.m. This function is used by many other programs in Chapters 8, 9, and 11.
Box 8.6 - corrrates.m A simple example program to generate correlated random vital rates using an estimated correlation matrix between vital rates.
Box 8.7 - stnormfx.m A MATLAB file for the function stnormalfx, which provides a good approximation to the standard normal distribution (Abramowitz and Stegun 1964). This function is used by many other programs in Chapters 8, 9, and 11.
Box 8.8 - AnalyzeCorrs.m A MATLAB program going through steps needed to calculate a correlation matrix and look for the problems caused by sparse sampling or small numbers of observations, using
correlations for the desert tortoise.
Box 8.9 - BetweenYrCorrNEW.m A program, BetweenYrCorrNEW.m, to demonstrate the simulation of within-year correlations, autocorrelations, and cross-correlation in vital rates. Note that this program
has been modified and is now quite different from BetweenYrCorr.m as described in the book. Also, this box contains a function that is called by the program, onestep.m.
Box 8.10 - vitalsimNEW.m A program, vitalsimNEW.m, to calculate the extinction time CDF and the stochastic growth rate for Hudsonia, using simulations that include correlation, autocorrelation, and
cross-correlation. Note that this program has been modified and is now quite different from VitalSim.m as described in the book. Also, this box contains a function that is called by the program,
onestep.m, and two other files that it calls: hudcorrs.mat and the matrix definition program hudmxdef.m.
Box 8.11 - multiresult.m A function to decide the fates of a set of individuals simultaneously, given a set of multinomial probabilities of different outcomes (after Caswell 2001, pg. 459).
Box 8.12 - lynx.m MATLAB code to perform a density-dependent demographic PVA for the Iberian lynx.
Box 8.13 - salmon_dd.m A MATLAB program to simulate deterministic, density-dependent growth of a salmon population.
Box 9.1 - vitalsens.m A MATLAB program to find the sensitivities and elasticities of λ[1] to vital rates.
Box 9.2 - limitsens.m MATLAB code to simulate random matrices between user-defined limits.
Box 9.3 - secder.m A MATLAB function to calculate the second derivatives of deterministic growth rate (or first derivatives of sensitivities) with respect to matrix elements (modified from Caswell
2001). This function is used to calculate stochastic sensitivities (see Box 9.4).
Box 9.4 - StochSens.m A MATLAB program to calculate the sensitivities and elasticities of stochastic growth rate (λ[s]) to means, variances, and covariances of matrix elements. This program calls on
the data file tortmxs.mat.
Box 9.5 - stochsenssim.m A MATLAB program that performs stochastic simulations to estimate the sensitivities and elasticities of stochastic growth rate (λ[s]) and extinction probabilities to mean,
variance, and covariance of matrix elements. This program calls on the data files tortbetas.mat and tortcorrmx.mat, as well as the matrix definition program maketortmx.m. The program also calls the
function betaset.m.
Box 10.1 - joincount.m A MATLAB program to calculate join-count statistics for spatial correlations between binary data. This program calls on the data file delug90s.txt.
Box 11.1 - logregB.m A MATLAB program to find the maximum likelihood parameter values for a logistic regression model of metapopulation dynamics. This program calls on the function in Box 11.2 -
logregA.m, which does most of the actual calculations. This program calls on the data file delugA.txt.
Box 11.2 - logregA.m A MATLAB function that provides an estimate of the log-likelihood of a set of occupancy, extinction, and colonization data, given a set of parameters provided by the program in
Box 11.1 - logregB.m.
Box 11.3 - logregextsim.m A program to simulate a logistic regression model for patch-based metapopulation dynamics. This program calls on the data file delugA.txt.
Box 11.4 - MultisiteCount.m A stochastic simulation for a count-based multi-site PVA. This program is a modification of single-population matrix models developed in Chapter 8. This program calls on
the matrix definition program railmxdef.m.
Box 11.5 - DemoMetaSim.m A MATLAB program to perform demographic, multi-site simulations. Much of the machinery needed for this program is identical to that in the vital rate-based simulation models
presented previously, and in this code we refer to pieces of vitalsim.m in Box 8.10 that need to be inserted to make a fully functional program. This program calls on the data file coryrates.mat and
the matrix definition program makecorymx.m.
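For readers without MATLAB, here is a minimal Python sketch of the count-based simulation idea behind Box 2.1 (draw each year's growth rate from a list of observed rates), with a quasi-extinction tally added; the rates and threshold below are made-up illustrative numbers, not data from the book.

```python
import random

observed_lambdas = [1.12, 0.94, 1.03, 0.78, 1.21, 0.99, 0.86, 1.08]

def quasi_extinction_prob(n0, years, threshold, reps=10000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        n = float(n0)
        for _ in range(years):
            n *= rng.choice(observed_lambdas)  # one observed rate per year
            if n <= threshold:                 # crossed quasi-extinction level
                hits += 1
                break
    return hits / reps

print(quasi_extinction_prob(n0=100, years=50, threshold=20))
```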
AST 307 - Introductory Astronomy
AST 307 · Introductory Astronomy Fall 2003
AST 307
Homework #2
Due Friday Sep. 12
1. We talked about the apparent motion of Mars on the celestial sphere in class. It usually moves in the prograde direction, but appears to move retrograde when we are passing it. Draw a
picture showing Venus and the Earth orbiting the Sun, and use it to figure out when Venus moves prograde and retrograde.
2. Halley's comet has a perihelion distance (closest distance to the Sun) of 0.36 AU and an aphelion distance (farthest from the Sun) of 36 AU. Its speed at perihelion is 70 km/sec. (I made up some
round numbers to make the answers come out reasonably simple. They aren't quite right, but they're close.) Use Kepler's laws to answer the following questions.
a) What is the semimajor axis, a, of Halley's orbit?
b) What is Halley's speed at aphelion?
c) What is the period of Halley's orbit?
d) What is the speed of an object in circular orbit 0.36 AU from the Sun?
e) What is the speed of an object in circular orbit 36 AU from the Sun?
f) Notice how these last two speeds compare to Halley's speed at those
two distances. Is the difference sensible (qualitatively)?
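A short Python sketch for checking the arithmetic in Question 2: Kepler's second law makes v times r constant at the two apsides, Kepler's third law gives P^2 = a^3 in years and AU, and Earth's 29.8 km/s circular speed at 1 AU sets the scale for the circular-orbit parts.

```python
r_peri, r_aph, v_peri = 0.36, 36.0, 70.0       # AU, AU, km/s

a = (r_peri + r_aph) / 2                       # (a) semimajor axis, AU
v_aph = v_peri * r_peri / r_aph                # (b) angular momentum conservation
P = a ** 1.5                                   # (c) Kepler's 3rd law, years
v_circ = lambda r: 29.8 / r ** 0.5             # circular orbit speed at r AU

print(a, v_aph, round(P, 1))                   # 18.18, 0.7, 77.5
print(round(v_circ(0.36), 1), round(v_circ(36.0), 1))  # (d) 49.7, (e) 5.0
# (f) Halley moves faster than circular at perihelion, slower at aphelion,
# as expected for a bound but highly eccentric orbit.
```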
3. Assume you are in a spaceship floating in space far from any stars or planets, and you have your rocket engines off. Assume your mass is 50 kg, the mass of your spaceship is 500 kg, and the
length of your spaceship is 10 m.
a) If you push off from one end of the spaceship with a force of 10 Nt, what is your acceleration while you are pushing?
b) If you push for one second, what is your speed after you stop pushing and start coasting away from the end of the spaceship? Assume you and the spaceship started out with zero velocity.
c) At this time what is the speed of the spaceship?
d) You coast down to the other end of the spaceship. How long does it take for you to get to the other end? Ignore your height.
e) When you reach the other end of the spaceship, you push just hard and long enough to come to a stop relative to the spaceship. At this point what is the speed of the spaceship?
f) How far did the spaceship move during all of this?
If you didn't take the motion of the spaceship into account in determining how long it took you to coast the length of the ship, go back and redo that question taking it into account.
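A matching sketch for Question 3 using F = ma and conservation of momentum; since everything starts at rest, total momentum stays zero throughout, and the coast time below already accounts for the ship's own motion, as the last paragraph requests.

```python
m_you, m_ship, F, t_push, length = 50.0, 500.0, 10.0, 1.0, 10.0

a_you = F / m_you                     # (a) 0.2 m/s^2 while pushing
v_you = a_you * t_push                # (b) 0.2 m/s after the 1 s push
v_ship = m_you * v_you / m_ship       # (c) 0.02 m/s, opposite direction
t_coast = length / (v_you + v_ship)   # (d) closing at the relative speed
# (e) stopping relative to the ship restores both to rest (total p = 0)
d_ship = v_ship * t_coast             # (f) ignoring the brief push phases
print(a_you, v_you, v_ship, round(t_coast, 1), round(d_ship, 2))
# 0.2 0.2 0.02 45.5 0.91
```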
Research projects for 2014
Dr Patrick Baird and Dr Patrick Gill, NPL
We have a number of projects involving laser-cooled ions and atoms. These are principally aimed at new frequency standards in the optical region, i.e., the development of new optical clocks. There
are opportunities for Oxford doctoral students to conduct research in these areas in collaboration with NPL.
A laser-cooled, single ion optical clock
The 2S1/2 – 2F7/2 octupole transition at 467 nm in 171Yb+ is highly forbidden by quantum mechanical selection rules; as a result it has an extremely long lifetime against decay and a correspondingly ultra-narrow natural linewidth (~ nHz). By electromagnetically trapping a single 171Yb+ ion and laser cooling it to about 1 mK under ultra-high-vacuum (UHV) conditions, it is possible to prepare an
ion virtually at rest in an isolated environment. The narrow octupole transition can then be probed with a very stable, narrow-linewidth semiconductor laser to provide a highly accurate atomic clock.
In addition, by simultaneously probing a second optical clock transition at 436 nm (this time an electric quadrupole transition, 2S1/2 – 2D3/2) using the same 171Yb+ ion, the frequency ratio between the two transitions can be accurately determined; this gives a very sensitive measure of any possible temporal variation in the fine structure constant, α. There is an opportunity for an Oxford doctoral
student to join an NPL research team developing these ideas. Experimental techniques used in this project include ion trapping, laser-cooling, laser stabilisation using optical cavities, single ion
quantum detection and UHV systems.
The project offers funding through an EPSRC CASE award.
In the first instance, please contact: Dr Patrick Baird.
Cold Atoms in an Optical Lattice Potential
There is an opportunity for an Oxford student to undertake research on the NPL neutral lattice clock project. This research involves holding neutral strontium atoms in an optical lattice field in
order to probe a clock transition; this offers potential for a future high-accuracy optical frequency standard and clock. Atomic clock systems bring together many aspects of experimental atomic physics research; in particular, this project combines magneto-optical trapping of neutral atoms with cooling techniques, involving both broad- and narrow-line cooling transitions, to cool clouds of neutral Sr atoms to microkelvin temperatures. The use of a "magic wavelength" optical lattice trap, formed by counter-propagating laser beams, holds the atoms in space without
perturbing the clock transition. The development and stabilisation of ultra-narrow linewidth lasers which are necessary for excitation of the clock transition will also be an essential part of the
The project offers the possibility of funding through an EPSRC CASE award; for non-UK applicants there is a Marie-Curie funding for which a separate application is necessary - see next section for
more information.
In the first instance, please contact: Dr Patrick Baird.
Email: P [dot] Baird1 [at] physics [dot] ox [dot] ac [dot] uk; Telephone: 01865 272 204
Marie-Curie Early Stage Researcher
Cold neutral atom optical lattice clocks
There is a vacancy for a Marie-Curie Early Stage Researcher within the EU Initial Training Network project Future Atomic Clock Technology (FACT) co-ordinated by the University of Birmingham, to
undertake experimental research on cold atom optical lattice clocks at the UK National Physical Laboratory (NPL) under a DPhil (PhD) postgraduate studentship at the University of Oxford.
The post will be held within the group of Prof. Patrick Gill at NPL, in collaboration with Dr. Patrick Baird at the University of Oxford. The duration is 36 months, starting as soon as possible after
1st April 2014, and certainly by 1st October 2014. The salary is in line with the EC rules for Marie Curie grant holders (ITN Early-Stage Researchers), and is subject to a country-specific cost of
living adjustment. The gross salary* is €51,000 per annum plus a monthly mobility allowance depending on family circumstances. Additional funds are also available to cover the DPhil tuition costs and
participation in Marie-Curie network research and training activities. (*Net salary will result from deducting compulsory social security contributions, as well as direct taxes, from the gross salary.)
To be eligible for the Marie Curie ESR grant, the candidate should have less than 4 years research experience (including previous postgraduate training). Further, the candidate must not have spent
more than 12 months in the UK in the 3 years prior to appointment. The position is open to non-UK EU applicants and also non-EU applicants (subject to FACT network quota limits in the latter case).
Details are given at: http://ec.europa.eu/research/mariecurieactions
The thesis will cover experimental research in cold atom physics, whereby an ultra-narrow optical transition in cold neutral strontium atoms held in an optical lattice field is probed by an
ultra-stable laser to provide a high-accuracy optical clock frequency, with potential for redefinition of the SI second. This project combines magneto-optical trapping of neutral atoms
with laser cooling techniques to cool clouds of neutral Sr atoms to microkelvin temperatures. A “magic wavelength” optical lattice trap, formed by counter-propagating
laser beams, holds the atoms in space without perturbing the clock transition. Development of ultra-narrow linewidth lasers to probe the clock transition is also an essential part of the project.
During the last two decades, atomic clocks and frequency standards have become an important resource for emerging quantum technologies, with impact ranging from satellite navigation to
synchronisation of high-speed communication networks. Optical atomic clocks are now demonstrating sensitivities at the level of a few parts in 10^18, opening up applications in fundamental physics, in
“relativistic geodesy”, where ultraprecise clocks sense the general relativistic gravitational redshift (with application to oil and mineral exploration, and climate research), and in future
satellite and deep space navigation. However, the underlying technologies associated with such clocks are still primarily lab-based. The FACT ITN research training programme covers all aspects of
optical atomic clock technology, from the cold atom reference and ultra-stable lasers to frequency comb synthesis, precision frequency distribution and commercial system technology.
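To put the "few parts in 10^18" figure and the "relativistic geodesy" application in context, the gravitational redshift between two clocks separated by a height Δh near the Earth's surface is Δν/ν ≈ gΔh/c². A minimal back-of-envelope check (illustrative numbers only, not project data):

# Fractional frequency shift for a 1 cm height difference near Earth's surface
g, c, dh = 9.81, 2.998e8, 0.01   # m/s^2, m/s, m
print(g * dh / c**2)             # ~1.1e-18

so a clock accurate to 1 part in 10^18 resolves height differences of about a centimetre.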
Further details on NPL, the University of Oxford research, and the FACT ITN can be found at:
Candidates should submit an application letter and CV to NPL in parallel with a standard Oxford Atomic & Laser Physics research application ( http://www2.physics.ox.ac.uk/study-here/postgraduates/
atomic-and-laser-p... ) to the Graduate Admissions Office, identifying the NPL EU Marie-Curie studentship in answer to the question on intended source of funding. The closing date for the application
to NPL is 28th February 2014, but candidates should submit their application to Oxford as soon as possible. Candidates are also reminded that they need to satisfy the University of Oxford
postgraduate research entrance requirements before an offer of the NPL position can be made.
Additional information can be obtained from:
P [dot] Baird1 [at] physics [dot] ox [dot] ac [dot] uk
Patrick [dot] gill [at] npl [dot] co [dot] uk
Dr Marco Barbieri
D.Phil in ultrafast optical metrology
We invite applications from first-class students to join our group in Oxford for a D.Phil project on coherent diffractive imaging, to start in 2014 (start date negotiable), supervised by Ian Walmsley
(University of Oxford).
The Ultrafast Group in Oxford has built solid expertise in the spatio-temporal characterisation of ultrashort pulses of visible light, establishing SPIDER as one of the most accurate and flexible
methods. Recent research directions include the extension of these techniques to the XUV, generated by high-harmonic generation: light so produced can be used as a probe for fast molecular
dynamics. The student will develop metrological methods aimed at these applications, in particular time-resolved spatial and spectral characterisation of XUV pulses for coherent diffractive imaging.
The student will gain experimental skills in ultrafast and nonlinear optics, vacuum systems, and data analysis. There is also considerable opportunity for collaboration with the team of S. Hooker in
the Dept of Physics, A. Konsunsky at the Dept of Engineering, A. Wyatt at RAL and Jon Marangos at Imperial College.
Email: Ian Walmsley, Marco Barbieri
I [dot] Walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
m [dot] barbieri1 [at] physics [dot] ox [dot] ac [dot] uk
Dr Laura Corner
Resonantly Enhanced Multiple Pulse Laser Plasma Acceleration
The Lasers for Accelerators (L4A) group at the John Adams Institute for Accelerator Science is a versatile team focused on several areas of cutting-edge research at the boundary between accelerators
and lasers. In particular, the group is developing a new method for laser plasma acceleration, involving driving the plasma oscillation with trains of low-energy laser pulses. This would enable the
use of tabletop lasers for plasma acceleration, rather than national-scale facilities. In this context, the L4A group is focusing on developing a suitable laser to drive plasma oscillations
by a train of laser pulses in order to build a 1 GeV electron accelerator operating at 1 kHz. A PhD student project is available in this area within the JAI. The student would be working on
developing new laser technologies in our laser lab in Oxford, including investigating methods of shaping a pulse train to efficiently excite a plasma oscillation and methods of coherent combination
and enhancement of photonic crystal fibre laser pulses. In conjunction with this work, the student would also be working on diagnostic methods for measuring the amplitude of plasma oscillation and
using this to evaluate the best laser pulse trains for excitation. The project would therefore be primarily experimental in nature, and suit a student with an interest in laser or accelerator
science. However, there is scope for the student to develop theoretical and simulation work, especially on the plasma and diagnostics side of the project. The student would be based in Oxford, but
there are opportunities to present work at international conferences and attend specialist workshops abroad. We welcome enquiries from candidates who may be interested in this project.
Supervisors: Dr Laura Corner and Dr Roman Walczak
Application deadline: January 10th
Funding source: STFC studentship for eligible students
Duration: 3 years
Email: l [dot] corner1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: 01865 273470
Professor Christopher Foot
Single-atom imaging of strongly correlated quantum systems at nanokelvin temperatures
(keywords: Bose-Einstein condensation, laser cooling of atoms, and quantum simulation)
This project is part of a programme of work in the Ultra-cold Quantum Matter group to carry out quantum simulation of strongly correlated systems. Such systems, with their high degree of quantum
entanglement, are computationally hard to investigate, and therefore the exciting new approach of making analogues of them using cold atoms is very effective. In Oxford we have developed an
apparatus to produce rapidly rotating clouds of ultracold atoms that are equivalent to correlated systems in ultra-high magnetic fields. We have also built an experimental apparatus for trapping
quantum gases of rubidium and potassium at nanokelvin temperatures. By detecting the quantum state of individual atoms we shall be able to read out the state of the system. This research combines
techniques of laser cooling and trapping of atoms with a high numerical aperture optical imaging system. The experimental work is funded by an EPSRC grant, and further details of the original proposal
may be found at: http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/J008028/1
Email: c [dot] foot1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: +44 (0) 1865 272256
Professor Gianluca Gregori
High Energy Density Laboratory Astrophysics - Scaling the Cosmos to the Laboratory
We are offering DPhil experimental/theoretical positions to study and simulate extreme astrophysical conditions in the laboratory. The research is focussed on the following themes:
1. Investigation of the equation of state of ultra-dense matter, such as that occurring in the cores of giant planets (for example Jupiter and many exoplanets). The experimental work involves using high
power laser facilities to compress the matter to densities above solid and then applying x-ray techniques to probe its microscopic state. Interested students can also focus their work onto
theoretical topics involving strongly coupled and partially degenerate plasmas - which are particularly relevant for describing white dwarf structure.
2. The understanding of the generation and amplification of magnetic fields in the Universe. We are particularly interested in the role of turbulence (and dynamo) in producing the present-day values
of magnetic fields in clusters of galaxies. Experiments on large laser facilities are planned in order to simulate intra-cluster turbulence in the laboratory and to measure the resultant magnetic field
generation and amplification by dynamo.
3. Quantum gravity with high power lasers. The idea is to use high-intensity lasers to drive electrons to very high accelerations and then observe effects connected to Unruh-Hawking radiation.
The ideal candidate is expected to work on defining the required experimental parameters and the proposal for a future experiment.
Our group has access to several laser facilities (including the National Ignition Facility, the largest laser system in the world). Students will also have access to a laser laboratory on campus
(currently hosting the largest laser system in the department), where initial experiments can be fielded. Further details can be found by browsing our research web-page http://www.physics.ox.ac.uk/
Prospective candidates are encouraged to contact Dr Gregori for further information.
Email: g [dot] gregori1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: +44 (0) 1865 282639
Professor Simon Hooker
Multi-pulse laser wakefield acceleration
In a laser wakefield accelerator an intense laser pulse propagating through a plasma excites a trailing plasma wave via the action of the ponderomotive force, which acts to expel electrons from the
region of the laser pulse. The longitudinal electric field in this plasma wakefield can be as high as 100 GV/m, more than three orders of magnitude larger than that found in conventional RF
accelerators such as those used at CERN. Particles injected into the correct phase of the plasma wave can therefore be accelerated to energies of order 1 GeV in only a few tens of millimetres.
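A quick sanity check of the figures quoted above (illustrative arithmetic only):

# Energy gained by an electron riding the full quoted gradient
E_field = 100e9                       # accelerating gradient, V/m
length  = 0.01                        # 10 mm of plasma, m
print(E_field * length / 1e9, 'GeV')  # -> 1.0 GeV

so a 10 mm plasma stage at 100 GV/m indeed delivers of order 1 GeV.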
The laser wakefield accelerator (LWFA) has many potential applications. However, most of these - including, in the long term, laser-driven particle colliders - will require the accelerator to operate
at much higher pulse repetition rates than is possible with the Ti:sapphire lasers used today.
An interesting new approach is being developed by a collaboration between groups in the sub-departments of Particle Physics and Atomic & Laser Physics: multi-pulse LWFA, in which a train of low-energy
laser pulses drives the plasma wave. If the pulses in the train are spaced by the plasma period then the wakes excited by each pulse interfere coherently to form a large-amplitude wave at the back of
the pulse train. The advantage of this approach is that it opens the possibility of using novel laser systems - such as thin-disk and fibre lasers - which can operate at very high pulse repetition
rates and with excellent overall efficiency.
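For orientation, the required pulse spacing is set by the plasma period, which follows from the standard plasma frequency formula ωp = sqrt(ne·e² / (ε0·me)). A short illustrative calculation (the density below is an example value, not a project parameter):

import math
e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
n_e = 1e23                                       # electron density, m^-3 (1e17 cm^-3)
omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency, rad/s
T_p = 2 * math.pi / omega_p                      # plasma period = required pulse spacing
print(T_p * 1e12, 'ps')                          # ~0.35 ps at this density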
Multi-pulse LWFA is a new idea and many interesting questions remain open. This project offers scope for numerical simulations aimed at understanding the limitations of this approach and
experimental work to demonstrate its potential.
For further details see: http://www2.physics.ox.ac.uk/research/plasma-accelerators-and-ultrafast-....
Email: s [dot] hooker1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: +44 (0) 1865 282209
Professor Simon Hooker
Ultrafast lensless imaging with OPA-driven high-harmonic generation
An intense laser pulse interacting with the atoms or molecules of a gas can drive a highly nonlinear polarization which in turn generates coherent radiation at harmonics of the laser. This
high-harmonic generation (HHG) process can generate ultrafast (in the attosecond range) pulses of radiation in the important soft x-ray spectral region, with potential applications in ultrafast
imaging and in time-resolved science.
The efficiency of HHG is low unless steps are taken to phase-match or quasi-phase-match (QPM) the generation process. The wavelength of the driving laser is also important in determining the
brightness and range of photon energies which can be generated. Our group has developed several techniques for QPM and demonstrated that these methods can increase the energy in the harmonic beam by
more than an order of magnitude.
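The wavelength dependence mentioned above is conventionally summarized by the semiclassical cutoff law (a standard HHG result, not specific to this project): the maximum photon energy is approximately E_cutoff ≈ I_p + 3.17 U_p, where I_p is the ionization potential and the ponderomotive energy U_p scales as I λ². A longer-wavelength drive laser therefore extends the harmonic spectrum to higher photon energies, at the cost of a steep drop in single-atom efficiency with increasing λ.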
In this project we will use femtosecond optical parametric amplifiers (OPAs) to determine the optimum driving laser wavelength for generating x-rays of a given photon energy. We will also investigate
a new QPM technique based on modulation of the polarization state of the driving laser in a birefringent waveguide.
The bright x-ray beams generated by these methods will be used for coherent diffraction imaging. This method images objects without conventional optics (which are not available at these wavelengths);
it does so by recording the intensity diffraction pattern of the object and using sophisticated algorithms to overcome the "phase-retrieval problem" and so reconstruct the object. Using a wavelength-tunable
x-ray source we will seek to image specific elements in a sample, and as a test of these ideas we will measure the size and shape of particles formed in precipitation-strengthened Al alloys.
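The phase-retrieval algorithms referred to above are typically iterative projection methods. A minimal error-reduction (Gerchberg-Saxton-type) sketch, assuming a real object with known support and a measured far-field modulus (illustrative only, not the group's code):

import numpy as np

def error_reduction(measured_modulus, support, n_iter=200, seed=0):
    # measured_modulus: |FFT of object| extracted from the diffraction pattern
    # support: boolean array, True where the object may be nonzero
    rng = np.random.default_rng(seed)
    G = measured_modulus * np.exp(1j * rng.uniform(0, 2 * np.pi, measured_modulus.shape))
    for _ in range(n_iter):
        g = np.fft.ifft2(G)                              # back to object space
        g = np.where(support, g.real, 0.0)               # enforce support and realness
        G = np.fft.fft2(g)
        G = measured_modulus * np.exp(1j * np.angle(G))  # enforce measured modulus
    return g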
This work will be supported by a 4-year research programme funded by EPSRC.
Further information available from: http://www2.physics.ox.ac.uk/research/plasma-accelerators-and-ultrafast-....
Email: s [dot] hooker1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: +44 (0) 1865 282209
Application deadline: n/a
Duration: 4 years
Professor Dieter Jaksch
1) Simulation of two-dimensional strongly-correlated quantum systems using high-performance tensor network theory algorithms
Tensor network theory (TNT) provides efficient and highly accurate methods for simulating many-body quantum systems, which cannot be represented exactly for all but the smallest systems due to the
exponential growth of the number of parameters required with system size. The many-body wave function, and the operators that act on them, are represented as a network of tensors (multi-dimensional
arrays of numbers) which is manipulated through a series of tensor operations such as reshaping, contracting and factorising. This computational/numerical DPhil project focusses on
developing new TNT-based algorithms for treating two-dimensional systems, which will be added to the existing TNT software library. This high-performance library has been developed in our group
for the last two years, and already has many users. The project will provide algorithms that are the first of their kind to be freely available as part of a software library, and will be used not only by
members of our group, but by research groups throughout the UK. The code is being developed in C, with OpenMP and MPI also being used to implement a hierarchical parallelisation scheme. The DPhil
student will collaborate with members of our group to design routines that can be used by them to solve physics problems. The student will also have the chance to work with scientific software
engineering experts who provide us with advice on producing high-quality sustainable software and on optimising our codes for running on large-scale supercomputers, such as the national
supercomputing cluster ARCHER.
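A toy illustration of the tensor primitives mentioned above (reshape, factorise, truncate, contract); this is a few lines of Python/NumPy for orientation only, not the group's C library:

import numpy as np
d, chi = 4, 2                                     # local dimension, kept bond dimension
psi = np.random.rand(d * d)                       # random two-site wavefunction
psi /= np.linalg.norm(psi)
M = psi.reshape(d, d)                             # reshape: vector -> matrix
U, s, Vh = np.linalg.svd(M, full_matrices=False)  # factorise
U, s, Vh = U[:, :chi], s[:chi], Vh[:chi, :]       # truncate: keep chi singular values
approx = (U * s) @ Vh                             # contract back to check the error
print('truncation error:', np.linalg.norm(M - approx))

Keeping only the largest singular values is exactly the compression step that makes tensor-network simulation of large systems tractable.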
2) Optically steering, manipulating and cooling strongly correlated electron systems
In the past decade there have been pioneering experiments which have shown how laser light can manipulate, measure and selectively cool not only atoms or ions, but also single modes in macroscopic
opto-mechanical devices. A major long-term aim in our group, in collaboration with Prof. Andrea Cavalleri's Oxford/Hamburg experimental group, is to apply these techniques to strongly correlated
electron systems such as Mott insulators and cuprate superconductors. Broadly, this work plans to identify and realize "hidden" phases of materials: metastable out-of-equilibrium states which
only exist while the system is driven. A key example of this would be to devise techniques to engineer the coupling of a stacked cuprate material to a surrounding cavity so that emissions into the
cavity mode result in the cooling of superconducting phase fluctuations. These phase fluctuations are thought to be responsible for the transition to a non-superconducting state, thus even moderate
cooling of this specific degree of freedom, as opposed to the entire material, may provide a novel route to stabilizing superconductivity above its critical temperature.
This theory DPhil project will work towards this grand challenge by investigating in detail the interaction between various strongly correlated electron systems and light both in and out of a cavity.
Regimes of moderate and strong coupling to desired degrees of freedom, such as superconducting order parameter for stacks of Josephson junctions, will be determined. Various approaches will be
pursued, such as using the cavity to perform continuous weak measurements to steer the state of the system, or strongly driving structural modes of the material to dynamically modulate its electronic
properties. Combinations of phenomenological, mean-field, and numerical techniques will be applied to characterise the response of the system. Insight from these studies should lay the foundations
for gauging the physical parameter space in which techniques for phase cooling are possible, and to what extent.
3) Quantum simulation with Rydberg atoms in optical lattices
Rydberg atoms are atoms in states of high principal quantum number that interact via the strong and very well controlled dipole-dipole (DD) interaction. The long-range nature of the DD interaction
allows one to go beyond the standard regime of cold atom experiments where contact interactions prevail. In the regime of ultra-cold Rydberg atoms, the DD interaction is much larger than kinetic
energies, giving rise to very low effective temperatures. Systems of DD-interacting Rydberg atoms are thus ideal candidates for the investigation of strongly correlated quantum systems, the simulation
of condensed matter models and the development of novel quantum materials and their applications in quantum information theory.
This theory DPhil project considers Rydberg atoms in optical lattices and aims at the realization, characterisation and simulation of strongly correlated quantum systems in this setup. The project
benefits from a close collaboration with Prof Wenhui Li at the Centre for Quantum Technologies (CQT) in Singapore, who is setting up an experiment with Rydberg atoms in optical lattices. The DPhil
student is encouraged to engage in this collaboration. During the course of the project the candidate will become acquainted with quantum optics and quantum many body systems, and develop numerical
as well as analytical skills for the description of these systems.
4) Quantum probes of quantum systems, impurities in cold atomic gases
Researchers in many areas have recently separately considered extracting information about a quantum system by bringing it temporarily and coherently into contact with another smaller quantum system,
a probe, which is then measured. This has several advantages over traditional methods for measuring properties of a quantum system: it has the potential to be non-destructive; it can
exploit entanglement and superposition of a perhaps spatially-extended probe in order to extract information directly about complicated correlation functions; and it can involve strong interactions and
thus operate on shorter time-scales than, say, linear response, and measure non-equilibrium properties. This theoretical DPhil project focuses primarily on impurity atom probes of cold atomic gases,
how they could be realised, what information could be extracted or new regimes probed, and the role such probes could play in current or near-future experiments (our theory group is in contact with
the experimental groups of Chris Foot, Oxford, and Stefan Kuhr, Strathclyde). Initial avenues of exploration would include using a tightly trapped impurity atom (localised to a few nm rather than the
μm resolution of light) to probe a cold atomic gas on length-scales at which mean-field descriptions break down and the corpuscular nature of the gas appears, or using multiple atomic probes to
identify whether number conservation occurs in the Bose-Einstein condensation of an atomic gas. This collaborative project will build upon the numerical and analytical expertise of the group in
describing the evolution of impurities in cold atom systems, giving the student a firm background in these methods specifically and the vast and exciting area of cold atom physics in general.
d [dot] jaksch1 [at] physics [dot] ox [dot] ac [dot] uk
Dr Axel Kuhn
1) Bi-directional quantum interfaces between trapped atoms and light.
This project is based on three key technologies that have been recently explored in our group, namely the trapping and manipulation of individual atoms in optical tweezers, high-finesse fibre-tip
optical cavities, and quantum-controlled single-photon emission from coupled atom-cavity systems. The combination of these three streams into a novel key technology is highly exciting, as it will be
one cornerstone in scaling the atomic approach to quantum computing to useful dimensions.
For further information about this project, please consult our webpage (http://www2.physics.ox.ac.uk/research/the-atom-photon-connection) and our most recent publications prior to contacting us.
Email: a [dot] kuhn1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: 01865 272 333
2) Hybrid quantum gates and circuits in atom-photon quantum networks.
This project aims at the combination of integrated optics with single photons from strongly coupled atom-cavity systems. Besides demonstrating linear-optical quantum gates, multi-mode
interferometers and photonic quantum walks, the major idea of this new project is to investigate the usefulness of time-bin encoding within photons and to exploit atom-photon entanglement in the
photon-emission process to effectively extend a purely optical processing scheme with atomic quantum memories.
For further information about this project, please consult our webpage (http://www2.physics.ox.ac.uk/research/the-atom-photon-connection) and our most recent publications prior to contacting us.
Email: a [dot] kuhn1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: 01865 272 333
Dr David Lucas
Ion Trap Quantum Computing with Lasers and Microwaves
Quantum computers offer the prospect of dramatic increases in information processing power, but this potential will only be realized if the qubits which hold the quantum information can be
manipulated sufficiently precisely, and if the system can be scaled up to larger numbers of qubits. At Oxford, we have recently built the best qubit in the world, using a single ion held in an ion
trap. The qubit has a coherence time of 50 seconds, its quantum state can be prepared and read out with a fidelity of over 99.9%, and we can perform single-qubit quantum logic gates with a fidelity
measured to be 99.9999%. The key to achieving these results, which now define the state of the art, was the use of microwave techniques: the ion trap itself is a novel design, being the first trap to
incorporate on-board microwave circuitry, and is built using a technology which is in principle scalable to much larger numbers of qubits. We are looking for a highly motivated first-class student to
join this project.
Dr Igor Mekhov
Theoretical Quantum Optics of Ultracold Quantum Gases and Nanostructures
Both quantum optics and many-body physics at the lowest achievable temperatures are very active fields of modern research. However, the overlap between them is far from fully explored. For
example, in most theoretical and experimental work on ultracold atoms, the role of light is reduced to a classical tool for preparing intriguing atomic states. In contrast, the main goal of this
project is to develop a theory of phenomena in which the quantum natures of both ultracold matter and light play equally important roles. Experiments at this ultimate quantum level of the
light-matter interaction became possible only several years ago, which makes the interplay between theory and experiment especially promising.
This project is focused on the development of models describing the interaction of quantum many-body systems (ultracold atoms or molecules) with quantized light. This interaction can be realised, for
example, by trapping the quantum gas inside an optical cavity. In this case, the trapping potential cannot be described by a prescribed function, as is the case in most works. The potential is a
quantum and dynamical variable, and a self-consistent solution for light and atomic dynamics is required. Moreover, the light can be used as a probe of the atomic state. As this is a fully quantum
problem, the quantum nature of the measurement procedure has to be taken into account. In all these problems, quantum light-matter entanglement plays a key role, allowing applications in
quantum information processing, quantum simulations of condensed matter systems, and quantum metrology.
The ultimate goal of this project is to develop a theory of quantum control for strongly correlated many-body systems, which is currently unavailable. The theoretical methods to be used originate
from atomic, optical and condensed matter physics. Moreover, the models developed for atoms will be applied to the solid-state systems used in quantum nanophotonics (e.g. quasiparticles in
semiconductor microcavities).
Candidates with interest and expertise in either Atomic, Molecular and Optical Physics or Condensed Matter Physics are welcome to apply. For more information, please visit http://www2.physics.ox.ac.uk/
Duration: 3 years
Email: i [dot] mekhov1 [at] physics [dot] ox [dot] ac [dot] uk
Telephone: 01865 272 330
Professor Peter Norreys
Laser-Plasma Interaction Physics for Inertial Fusion and Extreme Field Science
Intense lasers have extraordinary properties. They can deliver enormous energy densities to target, creating states of matter in the laboratory that are otherwise only found in exotic astrophysical
phenomena, such as the interiors of stars and planets, the atmospheres of white dwarfs and neutron stars, and in supernovae. The behaviour of matter under these extreme conditions of density and
temperature is a fascinating area of study, not only for the understanding of fundamental processes that are, in most cases, highly non-linear and often turbulent, but also for their potential
applications in other areas of the natural sciences. These include the development of:
• Inertial confinement fusion - applications include energy generation and the transition to a carbon-free economy and the development of the brightest possible thermal source for neutron scattering
• New high power optical and X-ray lasers using non-linear optical properties in plasma
• Novel particle accelerators via laser and beam-driven wakefields - including the AWAKE project at CERN, as a potential route for a TeV e-e+ collider
• Unique ultra-bright X-ray sources
• New physics at the intensity frontier
The understanding of these physical processes requires a combination of observation, experiment and high performance computing models on the latest supercomputers. My team is versatile, combining
experiment, theory and computational modelling. I can offer projects in all of these areas – please contact me to discuss your interests and let’s take it from there.
Application deadline: n/a
Funding source: EPSRC DTA - tbc
Duration: 3 years
p [dot] norreys1 [at] physics [dot] ox [dot] ac [dot] uk
Dr Josh Nunn and Professor Ian Walmsley
1) D.Phil. in quantum memory and photonic applications
We invite applications from first-class students to join our group in Oxford for a D.Phil project on quantum memory, to start in 2014 (start date negotiable), supervised jointly by Joshua Nunn and
Ian Walmsley.
The Ultrafast Group in Oxford has pioneered a GHz bandwidth optical memory in cesium vapour based on Raman scattering, with a world-leading time-bandwidth product exceeding 3000. Memories of this
kind are a key component of future quantum technologies. The next steps in our research are to demonstrate single-photon operation, which requires engineering the density of scattering states to
suppress noise and boost the memory efficiency. We are also working on an integrated memory in hollow-core optical fibre. In this project, the student will combine these ideas and develop a broadband
quantum-capable integrated memory in the first two years of work. This will then be applied to the synchronisation of photon sources, or other photonic primitives, in the final year. The project is
focussed on enabling the next generation of quantum photonic processors by synchronisation. The student will gain experimental skills in quantum optics, fibre optic and waveguide design, optical
cavities, non-linear optics and atomic physics, as well as theoretical modelling of coherent light-matter interactions and quantum networks. There is also considerable opportunity for collaboration
with other teams in the group working on integrated photonic circuits and advanced photodetection, and with other research groups both nationally (Southampton; Imperial) and internationally
(Paderborn, Germany; NIST, USA), which are jointly funded.
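For orientation on the time-bandwidth product quoted above: it is simply the acceptance bandwidth multiplied by the storage time, so that (with purely illustrative numbers, not the group's measured values) a 1 GHz-bandwidth memory storing for 3 μs gives B·τ = 1e9 × 3e-6 = 3000, i.e. thousands of temporal slots over which photon sources can in principle be synchronized within one storage window.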
For more details please contact Josh Nunn or Ian Walmsley:
j [dot] nunn1 [at] physics [dot] ox [dot] ac [dot] uk
I [dot] Walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
2) D.Phil. in diamond photonics with NV ensembles
We invite applications from first-class students to join our group in Oxford for a D.Phil project on diamond photonics, to start in 2014 (start date negotiable), supervised jointly by Joshua Nunn and
Ian Walmsley.
Quantum information processing using light offers radical new technologies such as super-fast quantum computers and super-secure quantum communication. But a key stumbling block is the ability to
engineer strong light-matter interactions. These are required both to mediate photonic logic gates, and to enable the efficient storage and switching of photonic wavepackets to form a scalable
architecture. Our group has developed one of the world's leading quantum memories based on Raman scattering in an atomic vapour. We are now investigating ways to integrate this memory inside a solid
medium. In this project, the student will design an implementation of the memory based on the Raman interaction in an ensemble of nitrogen-vacancy (NV) defect centres in diamond, and build an
experiment to demonstrate this solid state Raman memory. Looking further ahead, the same Raman interaction could be used to generate heralded, near-deterministic non-linearities at the single photon
level based on Stokes scattering followed by measurement-induced quantum back-action. These research tasks are open-ended but the demonstration of a broadband light-matter interface in the solid
state will be a transformative step forward in quantum photonics that will have a broad impact on the community. The student will have the opportunity to participate in active collaborations with
Steven Prawer in Melbourne (fabrication) and Gerard Milburn in Brisbane (theory).
For more details please contact Josh Nunn or Ian Walmsley:
j [dot] nunn1 [at] physics [dot] ox [dot] ac [dot] uk
I [dot] Walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
Professor Andrew Steane
1) Exploring the limits of quantum coherence
This project is mainly theoretical, but will involve acquiring in-depth knowledge of at least one type of experimental method, namely trapping and cooling of charged particles ("ion trapping"). The
aim of the project is to explore the idea that quantum interference has natural limits associated with the scale of the physical system. One such limit would be owing to spontaneous collapse
processes that are implied by some quantum descriptions of spacetime. Another limit would be owing to Unruh radiation (a thermal radiation associated with accelerating through the vacuum). After
these areas have been explored, it is expected that the project will develop towards understanding the possible role of quantum coherence in biological settings.
a [dot] steane1 [at] physics [dot] ox [dot] ac [dot] uk
Dr Brian Smith
1) Photonic time-frequency quantum information:
The first project aims to develop techniques to generate, control, and measure the time-frequency state of individual photons for quantum applications
including precision measurement and computation. This will utilize photons produced via the nonlinear optical process of spontaneous parametric down conversion. By using high-speed phase modulation
techniques to modify the time-frequency state, this project will explore different approaches to implement basic quantum operations, such as single particle logic operations. Different approaches to
characterization of the photonic states produced, such as the use of fast time-resolving detectors, precise spectral detection, or hybrid measurements, will be examined to optimize the information extracted.
2) Continuous-variable quantum information:
The second project will focus on approaches to implement quantum information primitives such as logic operations and readout for so-called continuous variable encoding, which utilizes the amplitude
and phase of a single mode of the electromagnetic field. Work here will develop methods to create, process and detect quadrature-encoded quantum information. This will involve development of two-mode
pulsed squeezed light sources for synchronized operations, and techniques to manipulate and measure these states for quantum information applications such as the distribution of entanglement.
We are looking for self-motivated individuals who are interested in both the theoretical and experimental issues surrounding quantum information and optical quantum technologies, and who will contribute
to these projects. For more information please contact Brian Smith.
b [dot] smith1 [at] physics [dot] ox [dot] ac [dot] uk
Professor Ian Walmsley and Dr Steven Kolthammer
1) Linear optical quantum computation
Optics provides a rich physical system to investigate how quantum mechanics opens up fundamentally new approaches to information processing. This project will investigate two important experimental
aspects of optical quantum computing: (1) computational complexity as an essential feature of simple quantum optical apparatus; (2) physical requirements for universal quantum computing. Progress on
these fronts will leverage state-of-the-art methods developed in the Ultrafast Quantum Optics group to generate, manipulate, and measure quantum states of light. Integrated photonic chips will be
used to operate large arrays of photon sources and construct complex many-mode photonic circuits. Superconducting detectors will be used to count single photons with unsurpassed precision. On the one
hand, this project strives to build limited-purpose quantum processors that provide direct evidence for the realization of quantum-enhanced computation. On the other hand, this project will achieve
the first experimental demonstration of essential optical protocols for universal quantum computation and identify the primary technical obstacles to large-scale quantum computation.
The student will gain expertise in experimental quantum optics and optical quantum computation, and a thorough understanding of both the physics of optics and information. The experimental focus of
the project will involve a broad range of laboratory proficiencies, from experimental design to data acquisition and analysis, and technical skills, such as using and designing integrated optical
devices, cryogenic photodetectors, and ultrashort pulsed lasers. Guided by the experimental state of the art, the student will learn leading theoretical approaches to quantum information and devise
laboratory tests that reveal new aspects of quantum computing. This project will engage closely with formal collaborators worldwide, including integrated optics researchers at the University of
Southampton and the University of Paderborn.
For further information please contact Dr Steven Kolthammer or Professor Ian Walmsley.
i [dot] walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
s [dot] kolthammer [at] physics [dot] ox [dot] ac [dot] uk
Professor Ian Walmsley and Dr Steven Kolthammer
2) Large-scale quantum states of light distributed in time
Over the last decade, experimental quantum optics has played a pivotal role in our growing understanding of the essential features of quantum theory, from foundational principles to consequences in
information theory. While optics continues to be an important platform for such studies, a key limitation concerns the scale of quantum optical systems that can be effectively manipulated. To date,
work to overcome this has primarily focused on controlling quantum light in spatial mode structures of increasing complexity – for example, by building large multimode interferometers or manipulating
the orbital angular momentum of light. In this project, we instead take our motivation from classical telecommunication, in which extremely high bit rates are achieved by encoding information in the
time-frequency structure of light. In particular, we will develop experimental tools to manipulate the temporal structure of quantum light, which will provide access to extremely large Hilbert spaces
for photonic states. Two routes will be investigated: fast nonlinear-optical or electro-optical switching will be combined with guided-wave delays, and fast polarization switching will be combined
with birefringent retarders. Both methods are naturally suited to nonlinear-optical sources of quantum light driven at high repetition rate by ultrashort laser pulses. These new tools will then be
applied to investigating unexplored quantum phenomena from multipartite entanglement to multi-particle quantum walks.
The student will gain expertise in experimental quantum optics, nonlinear optics, and integrated photonics. Alongside a thorough training in optical physics, the student will engage with on-going
research in quantum theory and quantum information. While research is expected to have a strong laboratory emphasis – including experimental design, testing, and analysis – this will be balanced with
a fundamental motivation to access new regimes of quantum light and understand their implications for quantum information.
For further information please contact Dr Steven Kolthammer or Professor Ian Walmsley.
i [dot] walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
s [dot] kolthammer [at] physics [dot] ox [dot] ac [dot] uk
Professor Ian Walmsley and Dr Steven Kolthammer
3) Optical quantum state engineering
Quantum physics is in an exciting period of development spurred on by new connections to information theory as well as new experimental methods to control quantum systems. In operational terms,
quantum systems allow for distinct advantages over their classical counterparts for tasks including computation, communication, and measurement. A major goal for researchers is to identify the
underlying physical phenomena that account for these differences. In this project, we seek to construct and analyse new quantum-mechanical states of light in the laboratory to investigate both
fundamental questions about quantum theory and quantum applications in information science.
The Ultrafast Quantum Optics group at Oxford has extensive expertise in generating and measuring quantum states of light – from single photons to bright squeezed vacuum – using ultrashort pulsed
lasers and nonlinear optics. We have recently developed an array of new methods employing guided-wave optics that allow access to photonic states with unprecedented scale. Nonlinear interactions in
micron-scale waveguides generate multi-photon quantum states in well-controlled optical modes. Photonic chips enable complex and robust phase-stable linear optical manipulation. Superconducting
photodetectors – the highest efficiency photon counters in the world – provide for precise optical measurements. The aim of this project is to characterise these new tools in a rigorous
quantum-mechanical framework, and then to use them to generate, manipulate and measure novel, large-scale quantum states. Studies will investigate optical measurements that span both the wave and
particle nature of light. Large multimode quantum states will allow new investigations of quantum correlation, including multipartite quantum discord and entanglement.
We are looking for a candidate with a strong interest in fundamental physics who also enjoys solving challenging technical problems. Work will be carried out as part of a large team of experimental
and theoretical quantum physicists, both in Oxford and collaborating institutions worldwide. The student will gain expertise in experimental and theoretical quantum optics, as well as extensive
experience with optical physics, quantum information, integrated photonics, and experimental design.
For further information please contact Dr Steve Kolthammer or Professor Ian Walmsley.
i [dot] walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
s [dot] kolthammer [at] physics [dot] ox [dot] ac [dot] uk
Professor Ian Walmsley and Dr Animesh Datta
4) Optical quantum metrology
A striking consequence of quantum theory is that the information gained by a quantum-mechanical measurement apparatus can exceed fundamental limits on what is possible with a classical instrument. The
field of quantum metrology seeks to identify situations in which such advantages can arise, the essential quantum resources required of the apparatus, and the approaches with which quantum-enhanced
measurements can be realized in the laboratory. Despite continuing improvements in the experimental control of quantum systems, practical quantum metrology is limited due to the inherent sensitivity
of quantum probes to undesired environmental disturbances. In this project, we seek to clarify the required resources for quantum-enhanced metrology and to devise new optical strategies for quantum
measurements that are robust against imperfections including dephasing and dissipation.
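The limits referred to above are conventionally expressed as follows (standard results, quoted for orientation): estimating a phase φ with N independent photons gives an uncertainty scaling Δφ ~ 1/√N (the shot-noise or standard quantum limit), whereas suitably entangled probes can in principle reach Δφ ~ 1/N (the Heisenberg limit). Loss and dephasing tend to pull the achievable scaling back towards the classical one, which is why robustness against such imperfections is the central theme of this project.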
Through both theoretical and experimental progress, we will develop approaches that impact the fundamental limitations of real laboratory measurements. Of particular interest is the simultaneous
measurement of multiple parameters – quantum imaging – for which multipartite entanglement is of interest. We will study the role of probes with both fixed and indefinite photon number, as well as
probes and detection methods which are both Gaussian and non-Gaussian in nature. Experiments will leverage state-of-the-art methods developed by the Ultrafast Quantum Optics group for the generation,
manipulation, and detection of quantum states of light derived from ultrashort laser pulses; these studies will investigate precise optical interferometry and eventually measure real world samples.
For this challenging project, we seek a candidate with a strong interest in fundamental quantum physics, drawn to both theoretical problems and detailed experimentation. The student will gain
expertise in quantum theory, the information theory of measurement, and experimental quantum optics. The project will be conducted within our team of researchers studying a wide variety of quantum
optics and quantum information at Oxford and will also include opportunities to work with formal collaborators in the UK and abroad.
For further information please contact Dr Animesh Datta or Professor Ian Walmsley.
i [dot] walmsley1 [at] physics [dot] ox [dot] ac [dot] uk
a [dot] datta1 [at] physics [dot] ox [dot] ac [dot] uk
Professor Justin Wark
1) Stellar Physics with X-Ray Lasers
Stellar environments are hot and dense, leading to ionized matter at high temperatures, and with interatomic spacings sufficiently close that the bound states of a particular ion are strongly
influenced by their neighbours, reducing the energy needed to ionize the atom (a phenomenon known as ionization potential depression). Similar circumstances arise in the centre of inertial confinement
fusion capsules. Research within our group has recently shown that we can use the world's most powerful x-ray laser, based at SLAC California, to make solid density matter at 2 million degrees, and
then probe it to find out exactly the value of the ionization potentials.
Surprisingly, we found that the values were completely at odds with the standard theory that has been in wide use for over half a century. With the aid of ab initio quantum calculations we are starting to understand why this is the case, but much more work needs to be done.
In this project the student will be engaged in further experiments to make 'miniature stars' in the laboratory, as well as embark on fundamental quantum calculations of the properties of atoms under
similar conditions to those that exist half way to the centre of the sun. The results could also have a direct impact on the quest to produce virtually limitless energy via inertial fusion
techniques, and the group has strong formal links with the US National Ignition Facility.
j [dot] wark1 [at] physics [dot] ox [dot] ac [dot] uk
2) Making Giant Planets in the Laboratory
New planets, far from our solar system, are being discovered on an almost daily basis. Some idea of their composition (and thus perhaps means of formation and ability to support life) can be gained
simply from knowing their mass and radius - but only if we know the compressibility of matter at ultra-high pressures. For example, the
centre of Jupiter has a pressure of order 70 million atmospheres. It is thus possible that 'rocky' matter can exist at such pressures, but whilst we can carry out ab initio quantum calculations
of its properties, up to now we have not been able to make such matter in the laboratory. In this project, the student will do just that.
By using the largest laser system in the world - the National Ignition Facility in California - we will compress solid matter, for a few nanoseconds, to pressures many times those at the centre of
the earth. Some of the laser beams are used to create a laser-plasma based x-ray source, that can produce nanosecond diffraction images of the sample, allowing the probing of its structure.
j [dot] wark1 [at] physics [dot] ox [dot] ac [dot] uk
3) Laboratory astrophysics with high power lasers
Topic: We plan to use high power lasers in order to study cosmic ray acceleration at supernova shocks. Using the fact that hydrodynamic equations are scale invariant, supernova shock waves can be
reproduced in the laboratory using large lasers that simulate the initial blast. The project will aim at studying the dynamics of shocks in the presence of magnetic fields and their role in transferring
energy to charged particles. Experiments will be conducted at laser facilities in the UK as well as overseas.
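The scale invariance invoked above is usually made precise via Euler similarity (a standard laboratory-astrophysics result, e.g. the Ryutov scaling criteria, stated here for orientation): two ideal-hydrodynamic systems evolve identically after rescaling lengths, times and densities, provided they share the dimensionless combination Eu = v·sqrt(ρ/p) (plus the plasma beta when magnetic fields matter), and provided dissipation is negligible in both, i.e. large Reynolds and Péclet numbers.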
j [dot] wark1 [at] physics [dot] ox [dot] ac [dot] uk
g [dot] gregori1 [at] physics [dot] ox [dot] ac [dot] uk
Details on these projects can be found at | {"url":"http://www2.physics.ox.ac.uk/study-here/postgraduates/atomic-and-laser-physics/dphil-projects","timestamp":"2014-04-17T21:23:07Z","content_type":null,"content_length":"80926","record_id":"<urn:uuid:3dadd5a1-b881-47b0-bfa8-3210270b7d4e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Increase the Output Current of a 7805 Voltage Regulator
The 7805 voltage regulator is one variant of the three-terminal positive voltage regulator family. The IC 7805 delivers a fixed positive output of 5 V at a maximum load current of 1 A, but operating
at the maximum current raises the thermal load on the device.
An aluminum heatsink is needed to keep the temperature down, even though the IC 7805 has internal thermal overload protection.
Running the regulator at its maximum rating is an unfavorable condition: besides the increased heat, the device may fail if the load current rises further, for example due to a short circuit. A
good solution is to increase the load current capability by adding an external transistor.
The IC 7805 datasheet gives a basic circuit for operating the IC at higher load currents. Download the
7805 datasheet manual
7805 Voltage Regulator Circuit
In the following circuit, which increases the 7805 output current to up to 3 amperes, a complementary PNP power transistor (MJ2955) must be added. The advantage of this technique is that it also
copes with the short-circuit condition described above: when the output voltage drops, the circuit reduces the maximum current drawn, so the problem is solved.
(Figure: 7805 voltage regulator circuit)
Transistor BD240C in the DC voltage regulator circuit serves as a current limiter. The transistor turns on when the voltage across 10R+0.22R rises above 0.6-0.7 volts, which reduces the base current
of T2 to zero. The voltage at which the short-circuit protection starts to act is given by the sum of the voltages across 0.22R and 10R. The base voltage is set by resistors 10R and 150R. With this
circuit no thermal overload occurred when a short circuit was applied; the maximum current was only 0.5 ampere.
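A rough check of where the limiter kicks in during normal operation (assuming a 0.65 V base-emitter turn-on voltage and that most of the sense voltage appears across the 0.22 ohm resistor; the 10R/150R divider shifts the real trip point, so treat this as an estimate):

V_BE = 0.65                   # volts needed to turn on the BD240C limiter (assumed)
R_sense = 0.22                # ohms
print(V_BE / R_sense, 'A')    # ~3 A, consistent with the 3 ampere output target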
Hopefully this simple explanation helps in building a higher-current voltage regulator using the IC 7805. See the manual datasheet of the 7805 voltage regulator for more information.
American Mathematical Society
AMS Sectional Meeting Program by Day
Current as of Saturday, October 18, 2008 00:36:13
2008 Fall Eastern Section Meeting
Middletown, CT, October 11-12, 2008 (Saturday - Sunday)
Meeting #1042
Associate secretaries:
Lesley M Sibner, AMS
Saturday October 11, 2008
• Saturday October 11, 2008, 7:30 a.m.-4:00 p.m.
Meeting Registration
Lobby, Exley Science Center
• Saturday October 11, 2008, 7:30 a.m.-4:00 p.m.
Book Sale and Exhibit
Room 139, Exley Science Center
• Saturday October 11, 2008, 8:00 a.m.-10:50 a.m.
Special Session on Model Theory and Its Applications, I
Room 121, Exley Science Center
Thomas Scanlon, University of California, Berkeley scanlon@math.berkeley.edu
Philip H. Scowcroft, Wesleyan University pscowcroft@wesleyan.edu
Carol S. Wood, Wesleyan University cwood@wesleyan.edu
• Saturday October 11, 2008, 8:00 a.m.-10:50 a.m.
Special Session on Low-Dimensional Topology, I
Room 141, Exley Science Center
Constance Leidy, Wesleyan University cleidy@wesleyan.edu
Shelly Harvey, Rice University shelly@rice.edu
• Saturday October 11, 2008, 8:00 a.m.-10:50 a.m.
Special Session on Computability Theory and Effective Algebra, I
Room 638, Exley Science Center
Joseph S. Miller, University of Connecticut joseph.miller@math.uconn.edu
David Reed Solomon, University of Connecticut solomon@math.uconn.edu
Asher Kach, University of Connecticut kach@math.uconn.edu
• Saturday October 11, 2008, 8:00 a.m.-10:45 a.m.
Special Session on Algebraic Geometry, I
Room 137, Exley Science Center
Eyal Markman, University of Massachusetts, Amherst markman@math.umass.edu
Jenia Tevelev, University of Massachusetts, Amherst tevelev@math.umass.edu
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Algebraic Topology, I
Woodhead Lounge, Exley Science Center
Mark A. Hovey, Wesleyan University mhovey@wesleyan.edu
Kathryn Lesh, Union College leshk@union.edu
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Analysis on Metric Measure Spaces and on Fractals, I
Room 201, Exley Science Center
Piotr Hajlasz, University of Pittsburgh hajlasz@pitt.edu
Luke Rogers, University of Connecticut rogers@math.uconn.edu
Robert S. Strichartz, Cornell University str@math.cornell.edu
Alexander Teplyaev, University of Connecticut teplyaev@math.uconn.edu
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Convex and Integral Geometry, I
Room 58, Exley Science Center
Monika Ludwig, Polytechnic University of New York mludwig@poly.edu
Daniel Klain, University of Massachusetts, Lowell daniel_klain@uml.edu
Franz Schuster, Vienna University of Technology schuster@dmg.tuwien.ac.at
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Riemannian and Lorentzian Geometries, I
Room 422, Public Affairs Center
Ramesh Sharma, University of New Haven rsharma@newhaven.edu
Philippe Rukimbira, Florida International University rukim@fiu.edu
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Number Theory, I
Room 405, Exley Science Center
Wai Kiu Chan, Wesleyan University wkchan@wesleyan.edu
David Pollack, Wesleyan University dpollack@wesleyan.edu
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Geometric Function Theory and Geometry, I
Room 421, Public Affairs Center
Petra Bonfert-Taylor, Wesleyan University pbonfert@wesleyan.edu
Katsuhiko Matsuzaki, Okayama University matsuzak@math.okayama-u.ac.jp
Edward C. Taylor, Wesleyan University ectaylor@wesleyan.edu
• Saturday October 11, 2008, 8:30 a.m.-10:50 a.m.
Special Session on Real and Complex Dynamics of Rational Difference Equations with Applications, I
Room 136, Public Affairs Center
Mustafa Kulenovic, University of Rhode Island mkulenovic@mail.uri.edu
Gerasimos Ladas, University of Rhode Island gladas@math.uri.edu
• Saturday October 11, 2008, 9:00 a.m.-10:50 a.m.
Special Session on History of Mathematics, I
Room 309, Exley Science Center
Robert E. Bradley, Adelphi University bradley@adelphi.edu
Lawrence A. D'Antonio, Ramapo College of New Jersey ldant@ramapo.edu
Lee J. Stemkoski, Adelphi University stemkoski@adelphi.edu
• Saturday October 11, 2008, 9:00 a.m.-10:50 a.m.
Special Session on Complex Geometry and Partial Differential Equations, I
Room 104, Public Affairs Center
Jacob Sturm, Rutgers University sturm@andromeda.rutgers.edu
Jian Song, Rutgers University jiansong@math.rutgers.edu
• Saturday October 11, 2008, 9:00 a.m.-10:50 a.m.
Special Session on Geometric Group Theory and Topology, I
Room 109, Exley Science Center
Matthew Horak, University of Wisconsin-Stout horakm@uwstout.edu
Melanie Stein, Trinity College melanie.stein@trincoll.edu
Jennifer Taback, Bowdoin College jtaback@bowdoin.edu
• Saturday October 11, 2008, 11:10 a.m.-12:00 p.m.
Invited Address
Flows and canonical metrics in Kaehler geometry.
Room 150, Exley Science Center
Duong Hong Phong*, Columbia University
• Saturday October 11, 2008, 2:00 p.m.-2:50 p.m.
Invited Address
SL($n$) invariant notions of surface area.
Room 150, Exley Science Center
Monika Ludwig*, Polytechnic Institute of New York University
• Saturday October 11, 2008, 3:00 p.m.-4:50 p.m.
Special Session on Algebraic Topology, II
Woodhead Lounge, Exley Science Center
Mark A. Hovey, Wesleyan University mhovey@wesleyan.edu
Kathryn Lesh, Union College leshk@union.edu
• Saturday October 11, 2008, 3:00 p.m.-5:20 p.m.
Special Session on History of Mathematics, II
Room 309, Exley Science Center
Robert E. Bradley, Adelphi University bradley@adelphi.edu
Lawrence A. D'Antonio, Ramapo College of New Jersey ldant@ramapo.edu
Lee J. Stemkoski, Adelphi University stemkoski@adelphi.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Model Theory and Its Applications, II
Room 121, Exley Science Center
Thomas Scanlon, University of California, Berkeley scanlon@math.berkeley.edu
Philip H. Scowcroft, Wesleyan University pscowcroft@wesleyan.edu
Carol S. Wood, Wesleyan University cwood@wesleyan.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Low-Dimensional Topology, II
Room 141, Exley Science Center
Constance Leidy, Wesleyan University cleidy@wesleyan.edu
Shelly Harvey, Rice University shelly@rice.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Computability Theory and Effective Algebra, II
Room 638, Exley Science Center
Joseph S. Miller, University of Connecticut joseph.miller@math.uconn.edu
David Reed Solomon, University of Connecticut solomon@math.uconn.edu
Asher Kach, University of Connecticut kach@math.uconn.edu
• Saturday October 11, 2008, 3:00 p.m.-5:20 p.m.
Special Session on Analysis on Metric Measure Spaces and on Fractals, II
Room 201, Exley Science Center
Piotr Hajlasz, University of Pittsburgh hajlasz@pitt.edu
Luke Rogers, University of Connecticut rogers@math.uconn.edu
Robert S. Strichartz, Cornell University str@math.cornell.edu
Alexander Teplyaev, University of Connecticut teplyaev@math.uconn.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Convex and Integral Geometry, II
Room 58, Exley Science Center
Monika Ludwig, Polytechnic University of New York mludwig@poly.edu
Daniel Klain, University of Massachusetts, Lowell daniel_klain@uml.edu
Franz Schuster, Vienna University of Technology schuster@dmg.tuwien.ac.at
• Saturday October 11, 2008, 3:00 p.m.-5:45 p.m.
Special Session on Algebraic Geometry, II
Room 137, Exley Science Center
Eyal Markman, University of Massachusetts, Amherst markman@math.umass.edu
Jenia Tevelev, University of Massachusetts, Amherst tevelev@math.umass.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Riemannian and Lorentzian Geometries, II
Room 422, Public Affairs Center
Ramesh Sharma, University of New Haven rsharma@newhaven.edu
Philippe Rukimbira, Florida International University rukim@fiu.edu
• Saturday October 11, 2008, 3:00 p.m.-6:15 p.m.
Special Session on Complex Geometry and Partial Differential Equations, II
Room 104, Public Affairs Center
Jacob Sturm, Rutgers University sturm@andromeda.rutgers.edu
Jian Song, Rutgers University jiansong@math.rutgers.edu
• Saturday October 11, 2008, 3:00 p.m.-5:20 p.m.
Special Session on Number Theory, II
Room 405, Exley Science Center
Wai Kiu Chan, Wesleyan University wkchan@wesleyan.edu
David Pollack, Wesleyan University dpollack@wesleyan.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Geometric Function Theory and Geometry, II
Room 421, Public Affairs Center
Petra Bonfert-Taylor, Wesleyan University pbonfert@wesleyan.edu
Katsuhiko Matsuzaki, Okayama University matsuzak@math.okayama-u.ac.jp
Edward C. Taylor, Wesleyan University ectaylor@wesleyan.edu
• Saturday October 11, 2008, 3:00 p.m.-5:20 p.m.
Special Session on Geometric Group Theory and Topology, II
Room 109, Exley Science Center
Matthew Horak, University of Wisconsin-Stout horakm@uwstout.edu
Melanie Stein, Trinity College melanie.stein@trincoll.edu
Jennifer Taback, Bowdoin College jtaback@bowdoin.edu
• Saturday October 11, 2008, 3:00 p.m.-5:50 p.m.
Special Session on Real and Complex Dynamics of Rational Difference Equations with Applications, II
Room 136, Public Affairs Center
Mustafa Kulenovic, University of Rhode Island mkulenovic@mail.uri.edu
Gerasimos Ladas, University of Rhode Island gladas@math.uri.edu
• Saturday October 11, 2008, 3:00 p.m.-3:55 p.m.
Session for Contributed Papers
Room 618, Exley Science Center
• Saturday October 11, 2008, 6:00 p.m.-7:30 p.m.
Wesleyan University Reception
Beckham Room, Fayerweather (next to Usdan Campus Center)
Inquiries: meet@ams.org | {"url":"http://ams.org/meetings/sectional/2154_program_saturday.html","timestamp":"2014-04-21T04:55:31Z","content_type":null,"content_length":"95661","record_id":"<urn:uuid:c2548cfa-21e1-4926-8a55-32557951cf4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Rank Plus Nullity Theorem
Let A be a matrix. Recall that the dimension of its column space (and row space) is called the rank of A. The dimension of its nullspace is called the nullity of A. The connection between these
dimensions is illustrated in the following example.
Example 1: Find the nullspace of the matrix
The nullspace of A is the solution set of the homogeneous equation A x = 0. To solve this equation, the following elementary row operations are performed to reduce A to echelon form:
Therefore, the solution set of A x = 0 is the same as the solution set of A′ x = 0:
With only three nonzero rows in the coefficient matrix, there are really only three constraints on the variables, leaving 5 − 3 = 2 of the variables free. Let $x_4$ and $x_5$ be the free variables.
Then the third row of A′ implies
The second row now yields
from which the first row gives
Therefore, the solutions of the equation A x = 0 are those vectors of the form
To clear this expression of fractions, let $t_1 = \frac{1}{4} x_4$ and $t_2 = \frac{1}{2} x_5$; then the vectors x in $R^5$ that satisfy the homogeneous system A x = 0 have the form
Note in particular that the number of free variables—the number of parameters in the general solution—is the dimension of the nullspace (which is 2 in this case). Also, the rank of this matrix, which
is the number of nonzero rows in its echelon form, is 3. The sum of the nullity and the rank, 2 + 3, is equal to the number of columns of the matrix.
The connection between the rank and nullity of a matrix, illustrated in the preceding example, actually holds for any matrix.
The Rank Plus Nullity Theorem. Let A be an m by n matrix, with rank r and nullity ℓ. Then r + ℓ = n; that is,
rank A + nullity A = the number of columns of A
Proof. Consider the matrix equation A x = 0 and assume that A has been reduced to echelon form, A′. First, note that the elementary row operations which reduce A to A′ do not change the row space or,
consequently, the rank of A. Second, it is clear that the number of components in x is n, the number of columns of A and of A′. Since A′ has only r nonzero rows (because its rank is r), n − r of the
variables $x_1, x_2, \ldots, x_n$ in x are free. But the number of free variables—that is, the number of parameters in the general solution of A x = 0—is the nullity of A. Thus, nullity A = n − r, and
the statement of the theorem, r + ℓ = r + ( n − r) = n, follows immediately.
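To see the theorem in action on a concrete matrix, here is a short computational sketch (an illustration only; it uses Python's SymPy library, and the matrix is an arbitrary example, not one from this article):

from sympy import Matrix

# An arbitrary 3 x 5 matrix (m = 3 rows, n = 5 columns);
# the third row is the sum of the first two, so the rank is 2
A = Matrix([[1, 2, 0, 1, 3],
            [2, 4, 1, 1, 5],
            [3, 6, 1, 2, 8]])

rank = A.rank()               # dimension of the row space (and column space)
nullity = len(A.nullspace())  # number of basis vectors of the nullspace

print(rank, nullity, A.cols)      # prints: 2 3 5
assert rank + nullity == A.cols   # the rank plus nullity theorem

Here rank() and nullspace() carry out the row reduction in exact arithmetic, so the check is not affected by rounding.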
Example 2: If A is a 5 x 6 matrix with rank 2, what is the dimension of the nullspace of A?
Since the nullity is the difference between the number of columns of A and the rank of A, the nullity of this matrix is 6 − 2 = 4. Its nullspace is a 4-dimensional subspace of $R^6$.
Example 3: Find a basis for the nullspace of the matrix
Recall that for a given m by n matrix A, the set of all solutions of the homogeneous system A x = 0 forms a subspace of $R^n$ called the nullspace of A. To solve A x = 0, the matrix A is row reduced:
Clearly, the rank of A is 2. Since A has 4 columns, the rank plus nullity theorem implies that the nullity of A is 4 − 2 = 2. Let $x_3$ and $x_4$ be the free variables. The second row of the reduced
matrix gives
and the first row then yields
Therefore, the vectors x in the nullspace of A are precisely those of the form
which can be expressed as follows:
If $t_1 = \frac{1}{7} x_3$ and $t_2 = \frac{1}{7} x_4$, then $x = t_1(-2, -1, 7, 0)^T + t_2(-4, 12, 0, 7)^T$, so
Since the two vectors in this collection are linearly independent (because neither is a multiple of the other), they form a basis for N(A): | {"url":"http://www.cliffsnotes.com/math/algebra/linear-algebra/real-euclidean-vector-spaces/the-rank-plus-nullity-theorem","timestamp":"2014-04-20T23:39:46Z","content_type":null,"content_length":"120127","record_id":"<urn:uuid:cb4b3134-186a-4988-aca2-359ce7d53057>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics of Computation
ISSN 1088-6842(online) ISSN 0025-5718(print)
Notes on some new kinds of pseudoprimes
Author: Zhenxiang Zhang
Journal: Math. Comp. 75 (2006), 451-460
MSC (2000): Primary 11A15; Secondary 11A51, 11Y11
Published electronically: September 15, 2005
MathSciNet review: 2176408
Full-text PDF Free Access
Abstract | References | Similar Articles | Additional Information
Abstract: J. Browkin defined in his recent paper (Math. Comp. 73 (2004), pp. 1031-1037) some new kinds of pseudoprimes, called Sylow $p$-pseudoprimes and elementary Abelian $p$-pseudoprimes.
In this paper, in contrast to Browkin's examples, we give facts and examples which are unfavorable for Browkin's observation to detect compositeness of odd composite numbers. In Section 2, we tabulate and compare counts of numbers in several sets of pseudoprimes and find that most strong pseudoprimes are also Sylow $p$-pseudoprimes. The paper also studies $k$-fold Carmichael Sylow pseudoprimes and Sylow uniform pseudoprimes to given bases.
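The "strong pseudoprimes" and "Miller tests" named in the keywords below refer to the standard strong probable-prime test: write n − 1 = 2^s · d with d odd, and check whether a^d ≡ 1 (mod n) or a^(2^r · d) ≡ −1 (mod n) for some 0 ≤ r < s. The following Python sketch of that standard test is an editorial illustration and is not taken from the paper; a composite n that passes the test to base a is a strong pseudoprime to base a.

def is_strong_probable_prime(n, a):
    # Miller test of n to base a; assumes n is odd and n > 2
    d, s = n - 1, 0
    while d % 2 == 0:        # factor n - 1 as 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)         # a^d mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):   # square repeatedly, looking for -1 mod n
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

print(is_strong_probable_prime(2047, 2))  # True, yet 2047 = 23 * 89 is composite:
                                          # 2047 is the smallest strong pseudoprime to base 2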
Similar Articles
Retrieve articles in Mathematics of Computation with MSC (2000): 11A15, 11A51, 11Y11
Retrieve articles in all journals with MSC (2000): 11A15, 11A51, 11Y11
Additional Information
Zhenxiang Zhang
Affiliation: Department of Mathematics, Anhui Normal University, 241000 Wuhu, Anhui, People’s Republic of China
Email: zhangzhx@mail.ahwhptt.net.cn, ahnu_zzx@sina.com
DOI: http://dx.doi.org/10.1090/S0025-5718-05-01775-8
PII: S 0025-5718(05)01775-8
Keywords: Strong pseudoprimes, Miller tests, Sylow $p$-pseudoprimes, elementary Abelian $p$-pseudoprimes, $k$-fold Carmichael Sylow pseudoprimes, Sylow uniform pseudoprimes
Received by editor(s): September 18, 2004
Published electronically: September 15, 2005
Additional Notes: This work was supported by the NSF of China Grant 10071001, and the SF of the Education Department of Anhui Province Grant 2002KJ131
Article copyright: © Copyright 2005 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication. | {"url":"http://www.ams.org/journals/mcom/2006-75-253/S0025-5718-05-01775-8/home.html","timestamp":"2014-04-24T22:09:19Z","content_type":null,"content_length":"46412","record_id":"<urn:uuid:524fab62-4263-4127-ab5e-c2917ae1695a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
*Official* The School Help Thread
Okay, so to simplify the one that you just gave me, you're first going to distribute the 6x to both terms in the parentheses. So you end up with two answers which you'll stick back together to form
the final answer.
6x * 3x = 18x^2
6x * -x = -6x^2
Stick them back together:
18x^2 - 6x^2
Now, you'll notice that they can be combined to give a final answer. If you had something like 18x^2 - 6x, you wouldn't be able to combine them because one is an x^2, and the other is a simple x.
So, just do simple subtraction, and you get 12x^2. So, 6x (3x -x) can be simplified to 12x^2.
Just something to note, whenever you've got two terms that can be combined in the parentheses (x and x, or x^2 and x^2, and so on), combine them before distributing the term outside the parentheses.
So, in this case, you'd actually have 6x(2x), and that simplifies to 12x^2.
As a more complex example, let's simplify 2x(3x^2 - 12).
2x * 3x^2 = 6x^3
2x * -12 = -24x.
6x^3 -24x | This is your final answer. Can't simplify an x^3 and an x beyond this point, so leave it at this.
A little more complex:
2x(3x^2 - 2) + 5x(2x^2 + 7)
Now you're going to distribute the two to the parentheses right next to it, and do the same with the 5. Then stick it all together, and combine like terms to get your final answer.
6x^3 -4x + 10x^3 + 35x
Combine like terms (the -4x and the 35x, and the 6x^3 and the 10x^3)
16x^3 + 31x
You can even do the same thing with x's and y's in the same equation, and I can give you a few examples and problems if you want. But for now, here are some problems to work:
1. 3x(10x - 12x^2) - 2x(2x + 4)
2. 2x(2x^400 + 12) + 2x(2x^399 +12)
3. 5x(10 + 12) - 6x(12x^2 - 10x)
Spoiler: (Highlight this box to see the hidden message.)
1. -36x^3 + 26x^2 - 8x
2. 4x^401 + 4x^400 +48x
3. -72x^3 + 60x^2 + 110x
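By the way, if you ever want to double-check answers like these (or your own homework), a computer algebra system will expand and combine the like terms for you. Here's a quick sketch using Python's SymPy library (just an optional way to verify your work, not a substitute for doing the problems):

from sympy import symbols, expand

x = symbols('x')

print(expand(3*x*(10*x - 12*x**2) - 2*x*(2*x + 4)))      # -36*x**3 + 26*x**2 - 8*x
print(expand(2*x*(2*x**400 + 12) + 2*x*(2*x**399 + 12))) # 4*x**401 + 4*x**400 + 48*x
print(expand(5*x*(10 + 12) - 6*x*(12*x**2 - 10*x)))      # -72*x**3 + 60*x**2 + 110*x

The printed results match the spoiler answers above.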
Watch your signs. It may help to think of a negative like this:
3x(10x - 12x^2) + (-2x(2x +4))
Distribute the negative with the 2x, and you'll get the right sign every time. | {"url":"http://www.pbnation.com/showthread.php?p=77252409","timestamp":"2014-04-24T07:25:57Z","content_type":null,"content_length":"193229","record_id":"<urn:uuid:a39d4907-9f22-403a-b2ec-59079e507038>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Audubon Park, NJ Geometry Tutor
Find an Audubon Park, NJ Geometry Tutor
...I PROVE my expertise by showing you my perfect 800 score on the SAT Subject Test, mathematics level 2. I'm not too shabby at reading and writing, either. Unlike the one-size-fits-all test-prep
courses, and the overly-structured national tutoring companies, I always customize my methods and presentation for the student at hand.
23 Subjects: including geometry, English, calculus, algebra 1
...While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and pre-calculus as well as contemporary math. My background is in engineering and business,
so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain information.
13 Subjects: including geometry, calculus, statistics, algebra 1
...The Detailed Version of the Details: You might be asking yourself, "Why would a mechanical engineer be a good tutor for me or my child?" While I was working as a writing tutor, I was being
trained to think critically and solve complex problems using calculus and differential equations. I bring...
37 Subjects: including geometry, English, reading, precalculus
...I like to keep the environment relaxed, yet I demand private study and some productive work ethic both in and out of the tutoring session. I am not there to do your homework, I'm there to help
you through the concepts so you can do the homework yourself. After all, I won't be there when you take the test!
14 Subjects: including geometry, chemistry, precalculus, algebra 2
...I attended Merion Mercy Academy and graduated top of my class. During my senior year I attended Saint Joseph's University as a part-time student studying physics and economics. My high school
curriculum included all standard core curriculum in all Honors or AP classes with electives in music and law.
42 Subjects: including geometry, reading, writing, English
| {"url":"http://www.purplemath.com/audubon_park_nj_geometry_tutors.php","timestamp":"2014-04-21T02:27:00Z","content_type":null,"content_length":"24534","record_id":"<urn:uuid:066b51db-f653-4a7b-b739-d0170b041c55>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
One way to work through the following exercises is to obtain a paper print-out and use Logo software. You may use any Logo software or that which is given on this website. Only Logo commands from the most common versions of Logo are contained in these exercises. The answers to the odd-numbered exercises are at the end of these exercises, and the answers to all the Logo exercises are in the Instructor's Resource Manual which accompanies Mathematics for Elementary Teachers: A Conceptual Approach, Fifth Edition.
Sketch the figure that the turtle will draw by carrying out the commands in exercises 1 and 2.
FD 60 BK 120 FD 60 RT 90 FD 50 LT 90 FD 60 BK 120 FD 60
PENUP RT 90 FD 50 LT 90 PENDOWN FD 60 BK 120 FD 60
2. FD 50 LT 90 FD 30 RT 90 BK 40 LT 90 FD 20 LT 90 FD 30 HOME
In Example A, the turtle drew six spokes of a wheel when given the command LINE and instructed to make right turns of 60°. Use this approach and the REPEAT command to write the commands for drawing
the figures in exercises 3 and 4.
3. 10 spokes of a wheel
4. 18 spokes of a wheel
5. Write the commands for drawing the isosceles triangle in the following figure. The two equal sides have a length of 70 units.
6. Write the commands for drawing the parallelogram in the following figure. The sides have lengths of 40 and 70 units.
Write the commands for drawing a scalene triangle that satisfies the given conditions in exercises 7
and 8. Sketch the triangle.
7. Three acute angles
8. Two acute angles and one obtuse angle
Write the commands for drawing the polygons in exercises 9 and 10:
9. A nonconvex hexagon
10. A convex pentagon whose sides are not all congruent
Define a procedure for drawing regular polygons in exercises 11 and 12.
11. a. Pentagon b. Octagon
12. a. Dodecagon b. Decagon
13. Write a procedure called RSTAIRCASE for drawing the right side of the following figure, and have the turtle return to the home position. Then define a procedure called LSTAIRCASE by
interchanging the RT and LT commands. The two procedures RSTAIRCASE and LSTAIRCASE should produce the complete figure.
14. Write a procedure called RTREE for drawing the right side of a tree. Have the turtle return to its start position. Then define a procedure called LTREE by interchanging the RT and LT commands. The two procedures RTREE and LTREE should produce the complete tree.
Use the procedure FLAG from Figure 15 to write the commands for drawing figures with the given numbers of rotation symmetries in exercises 15 and 16.
15. 10 rotation symmetries
16. 9 rotation symmetries
17. Revise the procedure CIRCLE by replacing FD 1 by FD .8 so that it will draw a smaller circle. Then use this procedure to define a procedure called VENN for drawing three circles for a Venn
18. Use the procedure CIRCLE to define a procedure called SLINKY for drawing overlapping circles, as shown here.
The procedure ARC from Figure 11 can be used twice to obtain a petal. Define a procedure named PETAL, and use it to define the procedures in exercises 19 and 20.
19. Define a procedure named FLOWER to draw a flower with eight petals, as shown here.
20. Define a procedure called STARLIGHT to draw a flower with five petals.
21. Write the commands for drawing three concentric squares, as shown in the following figure.
22. Write a procedure called GRIDSQUARES that instructs the turtle to draw a 3 x 5 array of nonintersecting squares of the same size.
a. Understanding the Problem Here is a 2 x 3 array of squares. Sketch a 3 x 5 array. Notice that the size of the squares and the distance between them are not given.
b. Devising a Plan First we need a procedure to draw a square. Write a procedure to draw a 10 x 10 square. Then since the same size square is needed 15 times, you can use the REPEAT command.
c. Carrying out the Plan Write the commands to draw the array. (Hint: Do one row or one column at a time.)
d. Looking Back Revise your program to increase the spaces between the squares.
23. Mr. Carbrero uses base-five pieces for activities with his elementary school students to provide background for place value. He wants to obtain a procedure so his computer will print out
grids of various sizes for other bases. Define a procedure that will print grids of size 12 x 12.
24. Write a procedure called PENTATRI to draw a pentagon with triangles as shown in the following figure. Then show how this procedure can be revised to obtain procedures called HEXATRI and
HEPTATRI to obtain a hexagon and heptagon, respectively, with triangles as shown in the figures.
25. Write a procedure for printing six rows of six circles each, as shown in the figure, and call it CIRCLEGRID.
26. A procedure for drawing a figure with many parts, such as a face, can best be created by using subprocedures. The following procedure called FACE has eight subprocedures. In each of these
subprocedures, except possibly the one for the nose (see the hint below), the turtle should start from and return to its home. Design a face and define the subprocedures for drawing it. (Hint
: Once a subprocedure for the left eye, left ear, or left side of mouth has been defined, the right eye, etc., can be drawn by using the concept of symmetry to obtain a new procedure. The
turtle can be used for the nose.)
27. Regular polygons can be drawn by using Logo commands to specify constant forward moves and constant turns, where the number of degrees in the turn is a factor of 360°. For example, the
following command produces a regular pentagon with sides of length 50.
REPEAT 5 [FD 50 RT 72]
Some interesting results occur when the number of degrees in the turn is not a factor of 360°, as shown in the following figure.
a. Experiment with the following command by selecting different whole numbers for N and different degrees for X. (Note: Select N large enough so the turtle will complete the figure and
select the forward move small enough so the figure is printed on the screen.)
REPEAT N [FD 80 RT X]
b. What happens when the number of degrees for X is less than or equal to 120°? Between 120° and 180°?
28. Logo commands can be used to instruct the computer to create this five-pointed star. Beginning at H (home), the turtle moves north to I, takes a right turn and moves to C, takes a right turn and moves to D, etc., making five equal forward moves and five equal turns and ending at H, facing north. What is the measure of each right turn, and what is the measure of the angle at I?
Answers to Odd-numbered Logo Exercises
3. REPEAT 5 [LINE RT 36]
5. RT 52 FD 70 RT 76 FD 70 HOME
7. RT 50 FD 40 RT 110 FD 60 HOME
9. Other answers are possible.
FD 40 RT 45 FD 40 RT 80 FD 60 RT 120 FD 30 LT 110 FD 50 HOME
a. TO PENTAGON
REPEAT 5 [FD 50 RT 72]
END
b. TO OCTAGON
REPEAT 8 [FD 40 RT 45]
END
13. TO RSTAIRCASE
REPEAT 3 [RT 90 FD 15 LT 90 FD 15]
RT 90 FD 15 PENUP HOME
END
TO LSTAIRCASE
PENDOWN
REPEAT 3 [LT 90 FD 15 RT 90 FD 15]
LT 90 FD 15
END
15. Other answers are possible.
REPEAT 10 [FLAG RT 36]
17. TO CIRCLE
REPEAT 360 [FD .8 RT 1]
END
TO VENN
PENUP BK 22 PENDOWN CIRCLE
PENUP LT 90 FD 40 RT 90
PENDOWN CIRCLE PENUP RT 90
FD 40 LT 180 PENDOWN CIRCLE
END
19. TO PETAL
ARC RT 90 ARC RT 90
END
TO FLOWER
REPEAT 8 [PETAL RT 45]
END
21. PENUP BK 10 RT 90 BK 10 LT 90
PENDOWN REPEAT 4 [FD 20 RT 90]
PENUP BK 10 RT 90 BK 10 LT 90
PENDOWN REPEAT 4 [FD 40 RT 90]
PENUP BK 10 RT 90 BK 10 LT 90
PENDOWN REPEAT 4 [FD 60 RT 90]
23. TO LINE
FD 60 BK 120 FD 60
END
TO VERTLINES
REPEAT 13 [LINE PENUP RT 90 FD 10 LT 90 PENDOWN]
END
TO GRID
VERTLINES PENUP BK 60 LT 90 FD 70 PENDOWN
VERTLINES END
25. TO SMALLCIRCLE
REPEAT 45 [FD 1 RT 8]
END
TO ROWOFCIRCLES
REPEAT 6 [SMALLCIRCLE RT 90 PENUP FD 15 LT 90 PENDOWN]
LT 90 PENUP FD 90 RT 90 BK 15 PENDOWN
END
Note: ROWOFCIRCLES prints a row of six circles and places the turtle on the left edge of the next row of circles.
TO CIRCLESGRID
LT 90 PENUP FD 60 RT 90 FD 40 PENDOWN
REPEAT 6 [ROWOFCIRCLES]
END
27b. If the number of degrees for X is less than or equal to 120 and a factor of 360, a polygon should be produced. However, the polygon may not fit onto the computer screen if its sides
are too long, and if its sides are too short it may look like a circle. The number of sides for the polygon is 360 divided by the number of degrees.
If the number of degrees for X is less than 120 and not a factor of 360, a star-shaped figure (or a figure that appears to be a disc with a hole in it) will be formed. For example, the
following three figures were produced by letting N = 50 and using the number of degrees shown under the figure.
If the number for X is between 120 and 180, the figure obtained is a star. Here are three examples for the number of degrees shown below each figure. | {"url":"http://www.mhhe.com/math/ltbmath/bennett_nelson/conceptual/student/exercises/exercises.htm","timestamp":"2014-04-21T09:36:13Z","content_type":null,"content_length":"14905","record_id":"<urn:uuid:b10445b7-d53a-492d-8b78-39fef3c29b3b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
Name the 11 dimensions in M-Theory.
by PFVincent
Tags: dimensions, m-theory, quantum, string theory
Through a period of 60 years we developed quantum physics that enables us to imagine far beyond the capacity of an ordinary human mind. Quantum physics developed several string theories that were united in the creation of M-Theory. M-Theory involves 11 dimensions, including the 4 dimensions that govern our natural world (length, breadth, height, and time). I was wondering if someone can provide me with the names, measurements, and descriptions of the other 7. Thank you for your answer and time.
Thank you for your answer and time.
I'm new to the world of physics and would really like to understand stuff that I don't quite get yet but can reason about, so please go easy on me if I say anything out of hand that sounds "dumb". Thank you =X
Four dimensions: [tex]x,y,z,t[/tex]
7 other dimensions: [tex]w_1,w_2,w_3,w_4,w_5,w_6,w_7[/tex]
They don't have names, and they haven't been measured. I don't know string theory, but The Elegant Universe describes the 10-dimensional spacetime as the familiar 4-dimensional spacetime with a 6-dimensional Calabi-Yau space attached to each point. It isn't really possible for someone who hasn't studied a lot of math to understand what that is. The 11th dimension of M-theory is apparently something even stranger.
It has been argued that time may in fact be 3 dimensions and not one: that past, present, and future are like height, width, and depth, leaving 5 dimensions to be named instead of 7.
That makes no sense at all. The extra dimensions in the string theories are spatial dimensions, and your claim isn't even consistent with the definition of the word "dimension".
This question gets asked here often enough that I think the Authorities should consider finding a really good explanation and making it a FAQ sticky or something.
| {"url":"http://www.physicsforums.com/showthread.php?t=284692","timestamp":"2014-04-17T18:33:28Z","content_type":null,"content_length":"35954","record_id":"<urn:uuid:7813f153-2274-46b8-b0a4-173781faf149>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
River Oaks, TX Math Tutor
Find a River Oaks, TX Math Tutor
...I still use the formula capability often. I do not have practice with pivot tables, but do have good knowledge of general techniques. I gained considerable OTJ experience writing complex SQL
queries for a large mainframe relational database.
15 Subjects: including algebra 1, algebra 2, calculus, chemistry
...All difficulties in mathematics can be broken down into very simple pieces and our minds seem to do this with natural ease when we allow it to happen. Trust me, nothing is too difficult to
conquer. I look forward to helping you soon! - JanetI have taught pre-algebra in High School and tutored many students as well.
20 Subjects: including probability, algebra 1, algebra 2, prealgebra
...I have done test preparation tutoring for individuals studying for the verbal/writing part of the SAT, for the ISEE, and for the TExES, as well as for Special Ed. Certification. I have also
taught an SAT verbal/writing class for the Nepalese Society in Irving, Texas.
43 Subjects: including logic, algebra 1, prealgebra, writing
...Her grades improved drastically. She had a hard time seeing what the teacher was doing, but my way was easy to understand. Algebra 2 is a difficult course, but it is a challenge that can be
conquered with the right help.
10 Subjects: including algebra 1, algebra 2, grammar, geometry
My goal as an instructor/tutor is to help students appreciate chemistry and apply it in the real world. I am currently teaching my second semester of chemistry lecture and lab, and have
approximately four years of peer tutoring experience, ranging from general chemistry, to organic chemistry, and s...
7 Subjects: including algebra 2, algebra 1, chemistry, genetics
| {"url":"http://www.purplemath.com/river_oaks_tx_math_tutors.php","timestamp":"2014-04-17T13:05:27Z","content_type":null,"content_length":"23888","record_id":"<urn:uuid:deed1f52-09ab-4b82-bb3b-4c1281d4e3b8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 131 - Home
Eldredge Oehrtman Raish
10334 at 8:00a 12385 at 10:10a 12989 at 1:25p
12384 at 10:10a 12393 at 3:35p
The final exam will be Monday, December 9th, from 4:15 to 6:45 pm in McKee L152.
Calculus Textbooks for Spring 2014
End of Course Surveys
Course Materials
A classic video about sphere eversion for when you need a study break this week
To settle infinity dispute, a new law of logic
Researchers find bacteria in the gut of mice communicate with the brain to influence behavior
"If a nonnegative quantity was so small that it is smaller than any given one, then it certainly could not be anything but zero. To those who ask what the infinitely small quantity in mathematics is,
we answer that it is actually zero. Hence there are not so many mysteries hidden in this concept as they are usually believed to be. These supposed mysteries have rendered the calculus of the
infinitely small quite suspect to many people. Those doubts that remain we shall thoroughly remove in the following pages, where we shall explain this calculus."
-- Leonhard Euler | {"url":"http://www.unco.edu/nhs/mathsci/facstaff/oehrtman/math131/","timestamp":"2014-04-19T19:38:10Z","content_type":null,"content_length":"6600","record_id":"<urn:uuid:18eb1242-3c38-42d0-b664-e09a197668c0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Covariance matrix formula interpretation - what am I missing?
I'm reading a paper that outlines the calculation of a covariance matrix like the following: $\sum_i x_ix_i^T$
What is the order of this matrix? My interpretation of the formula is something akin to dot products, but it's producing a vector not the expected matrix. Can someone explain where my interpretation
is incorrect?
na.numerical-analysis linear-algebra linear-programming matrices
A link to the paper would help, but to me it seems most plausible that the $x_i$'s are column vectors (say of length $n$) and hence $x_ix_i^T$ has $(j,k)$ entry $x_i^jx_i^k$, and all matrices are
$n$-by-$n$. If you're working over the real numbers, this looks like a decomposition of a symmetric positive semidefinite matrix as a sum of positive rank one matrices. – Jonas Meyer Apr 22 '10 at
Yes, now that I have looked at the paper, it seems to be implicitly assumed that vectors are columns by default. Notice for example that the coefficients of $x_i$ with respect to an orthonormal
basis $(e_j)_j$ are given on page 3 by $a_j=x_i^Te_j$. – Jonas Meyer Apr 22 '10 at 20:24
Very helpful, thank you! – fbrereto Apr 22 '10 at 20:29
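To make the dimensions concrete: if each $x_i$ is a column vector of length $n$, then each outer product $x_ix_i^T$ is an $n \times n$ rank-one matrix, and a sum of such terms is again $n \times n$. A small NumPy sketch (my own illustration with made-up random data, not from the paper in question):

import numpy as np

n, p = 4, 10                       # vector length, number of vectors
rng = np.random.default_rng(0)
xs = rng.standard_normal((p, n))   # rows are the vectors x_1, ..., x_p

C = sum(np.outer(x, x) for x in xs)  # sum of outer products x_i x_i^T

print(C.shape)              # (4, 4): an n x n matrix, not a vector
print(np.allclose(C, C.T))  # True: symmetric (and positive semidefinite)

Each np.outer term is the matrix with $(j,k)$ entry $x_i^jx_i^k$, matching the comment above.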
| {"url":"http://mathoverflow.net/questions/22249/covariance-matrix-formula-interpretation-what-am-i-missing","timestamp":"2014-04-19T02:21:09Z","content_type":null,"content_length":"50574","record_id":"<urn:uuid:494338fd-b27e-4ee9-b196-98495383ead7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Wilson, Robin J. [WorldCat Identities]
Robin Wilson (mathematician), born 1943. Name variants include: Wilson, R.; Wilson, R. J.; Wilson, R. J. (Robin James); Wilson, Robin James; Uilson, R.; ウィルソン, ロビン.
Associated names: Marlow Anderson (editor); Victor J. Katz (editor); Lowell W. Beineke (editor); Raymond Flood (editor, contributor); John Fauvel (editor, contributor); Lewis Carroll (1832-1898); Arthur Sullivan (1842-1900); Norman Biggs; E. Keith Lloyd; W. S. (William Schwenck) Gilbert (1836-1911).
Subjects: Mathematics; Graph theory; Mathematicians; Carroll, Lewis; Mathematical recreations; Four-color problem; Musical temperament; Music theory--Mathematics; Postage stamps; Germany; Astronomers; Möbius, August Ferdinand; Combinatorial analysis; Map-coloring problem; England--Oxford; Mathematics--Study and teaching (Higher); Combinatorial geometry; University of Oxford; Combinatorial designs and configurations; Great Britain; Cartography; Proof theory; Music--Mathematics; Astronomy; Mathematical optimization; Trees (Graph theory); Matrices; Music--Instruction and study; Algorithms; Gilbert, W. S. (William Schwenck); Sullivan, Arthur; D'Oyly Carte Opera.

Selected works:

Introduction to graph theory. Graph theory has recently emerged as a subject in its own right, as well as being an important mathematical tool in such diverse subjects as operational research, chemistry, sociology and genetics. This book provides an introduction to graph theory.

Sherlock Holmes in Babylon: and other tales of mathematical history (2004; edited with Marlow Anderson and Victor J. Katz). Covering a span of almost 4000 years, from the ancient Babylonians to the eighteenth century, this collection chronicles the enormous changes in mathematical thinking over this time, as viewed by distinguished historians of mathematics from the past and the present. Each of the four sections of the book (Ancient Mathematics, Medieval and Renaissance Mathematics, The Seventeenth Century, The Eighteenth Century) is preceded by a Foreword, in which the articles are put into historical context, and followed by an Afterword, in which they are reviewed in the light of current historical scholarship. In more than one case, two articles on the same topic are included, to show how knowledge and views about the topic changed over the years. This book will be enjoyed by anyone interested in mathematics and its history - and in particular by mathematics teachers at secondary, college, and university levels.

Who Gave You the Epsilon? And Other Tales of Mathematical History (2009; edited with Marlow Anderson and Victor J. Katz).

Lewis Carroll in Numberland: his fantastical mathematical logical life: an agony in eight fits (2008). As Wilson demonstrates, Carroll--who published serious, if occasionally eccentric, works in the fields of geometry, logic, and algebra--made significant contributions to subjects as varied as voting patterns and the design of tennis tournaments, in the process creating imaginative recreational puzzles based on mathematical ideas. (From the publisher description.)

Four Colors Suffice: how the map problem was solved (2002). "On October 23, 1852, Professor Augustus De Morgan wrote a letter to a colleague, unaware that he was launching one of the most famous mathematical conundrums in history - one that would confound thousands of puzzlers for more than a century. This is the amazing story of how the 'map problem' was solved." "The problem posed in the letter came from a former student: What is the least possible number of colors needed to fill in any map (real or invented) so that neighboring countries are always colored differently? This deceptively simple question was of minimal interest to cartographers, who saw little need to limit how many colors they used. But the problem set off a frenzy among professional mathematicians and amateur problem-solvers, among them Lewis Carroll, an astronomer, a botanist, an obsessive golfer, the Bishop of London, a man who set his watch only once a year, a California traffic cop, and a bridegroom who spent his honeymoon coloring maps. In their pursuit of the solution, mathematicians painted maps on doughnuts and horseshoes and played with patterned soccer balls and the great rhombicuboctahedron." "It would be more than one hundred years (and countless colored maps) later before the result was finally established. Even then, difficult questions remained, and the intricate solution - which involved no fewer than 1,200 hours of computer time - was greeted with as much dismay as enthusiasm." "Providing a clear and elegant explanation of the problem and the proof, Robin Wilson tells how a seemingly innocuous question baffled great minds and stimulated exciting mathematics with far-flung applications. This is the entertaining story of those who failed to prove, and those who ultimately did prove, that four colors do indeed suffice to color any map." (From the book jacket.)

Graph Theory 1736-1936 (1976; with Norman Biggs and E. Keith Lloyd).

Music and Mathematics: from Pythagoras to fractals (2003; edited with John Fauvel and Raymond Flood). "From Ancient Greek times, music has been seen as a mathematical art, and the relationship between mathematics and music has fascinated generations. This collection of wide-ranging, comprehensive and fully illustrated papers, authored by leading scholars, presents the link between these two fields in a lucid manner that is suitable for students of both subjects, as well as the general reader with an interest in music. Physical, theoretical, physiological, acoustic, compositional, and analytical relationships between music and mathematics are unfolded and explored with focus on tuning and temperament, the mathematics of sound, and modern compositional techniques." (From the book jacket.)

Stamping Through Mathematics (2001). "The book contains almost four hundred postage stamps relating to mathematics, ranging from the earliest forms of counting to the modern computer age. Featured are many of the mathematicians who contributed to this story - influential figures, such as Pythagoras, Archimedes, Newton and Einstein - and some of the areas whose study aided this development - such as navigation, astronomy and art. The stamps appear enlarged and in full colour with full historical commentary, and are listed at the end of the book." (From the book jacket.)

Gilbert & Sullivan: the official D'Oyly Carte picture history (1984).

Möbius and His Band: mathematics and astronomy in nineteenth-century Germany (1993; edited with John Fauvel and Raymond Flood). Most people have heard of the Möbius band. But the work and influence of August Möbius are more far-reaching than a topological toy. For some fifty years of the nineteenth century, August Möbius taught astronomy and researched in mathematics at Leipzig University. During those years, which saw the German nation move towards unification, German mathematics developed into the most powerful and influential in the world and German astronomers became the world leaders. How did this come about? In this fascinating, richly illustrated, and accessible book, leading scholars assess the contribution of Möbius and others of his time to the practice of mathematics and astronomy today. The book has been written for all those interested in the historical development of ideas and their legacy, and thus both the general reader and specialists in particular fields will find much of interest.

Selected Topics in Graph Theory (1978; edited with Lowell W. Beineke). Topics include: topological graph theory; the proof of the Heawood conjecture; the Appel-Haken proof of the four-color theorem; edge-colorings of graphs; Hamiltonian graphs; tournaments; the reconstruction problem; minimax theorems in graph theory; line graphs and line digraphs; eigenvalues of graphs; strongly regular graphs; Ramsey graph theory.

Graphs and Applications: an introductory approach (2000; with Joan M. Aldous). "Graphs and Applications is based on a highly successful Open University course and the authors have paid particular attention to the presentation, clarity and arrangement of the material, making it ideally suited for independent study and classroom use. An important part of learning graph theory is problem solving; for this reason large numbers of examples, problems (with full solutions) and exercises (without solutions) are included. Many chapters also present application case studies." (From the book jacket.)

Applications of Graph Theory (1979).

Applications of Combinatorics (1982; conference proceedings).

Graphs: an introductory approach: a first course in discrete mathematics (1989).

Mathematical Conversations: selections from The Mathematical Intelligencer (1999). This volume contains approximately fifty articles that were published in The Mathematical Intelligencer during its first eighteen years. The selection exhibits the wide variety of attractive articles that have appeared over the years, ranging from general-interest articles of a historical nature to lucid expositions of important current discoveries. The articles are introduced by the editors.

Topics in Algebraic Graph Theory (2004; edited with Lowell W. Beineke). "This book contains ten expository chapters written by acknowledged international experts in the field. Their well-written contributions have been carefully edited to enhance readability and to standardize the chapter structure, terminology and notation throughout the book. To help the reader, there is an extensive introductory chapter that covers the basic background material in graph theory, linear algebra and group theory. Each chapter concludes with an extensive list of references." (From the book jacket.)

Edge-Colourings of Graphs (1977; with Stanley Fiorini).

Oxford Figures: 800 years of the mathematical sciences (1999; edited with John Fauvel and Raymond Flood). "This is the story of the intellectual and social life of a community, and of its interactions with the wider world. For 800 years mathematics has been researched and studied at Oxford, and the subject and its teaching have undergone profound changes during that time. This book reveals the richness and influence of Oxford's mathematical tradition and the characters who helped to shape it." (From the book jacket.)

Geometrical Combinatorics (1984; with F. C. Holroyd). | {"url":"http://worldcat.org/identities/lccn-n79-100488/","timestamp":"2014-04-20T12:10:58Z","content_type":null,"content_length":"14400","record_id":"<urn:uuid:0c44943a-4857-456f-8916-fa36c6dda567>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate a Partial Correlation Coefficient in R: An Example with Oxidizing Ammonia to Make Nitric Acid
May 5, 2013
By Eric Cai - The Chemical Statistician
Today, I will talk about the math behind calculating partial correlation and illustrate the computation in R with an example involving the oxidation of ammonia to make nitric acid using a built-in
data set in R called stackloss. In a separate post, I will also share an R function that I wrote to estimate partial correlation. In a later post, I will discuss the interpretation of the partial
correlation coefficient at length.
I read Pages 234-237 in Section 6.6 of “Discovering Statistics Using R” by Andy Field, Jeremy Miles, and Zoe Field to learn about partial correlation. They used a data set called “Exam Anxiety.dat”
available from their companion web site (look under “6 Correlation”) to illustrate this concept; they calculated the partial correlation coefficient between exam anxiety and revision time while
controlling for exam score. As I discuss further below, plotting the two sets of residuals against each other helps to illustrate the calculation of partial correlation coefficients. Such a plot also makes intuitive sense: if you take more time to study for an exam, you tend to have less exam anxiety, so there is a negative correlation between revision time and exam anxiety.
They used a function called pcor() in a package called “ggm”; however, I suspect that this package is no longer working properly, because it depends on a deprecated package called “RBGL” (i.e. “RBGL”
is no longer available in CRAN). See this discussion thread for further information. Thus, I wrote my own R function to illustrate partial correlation.
Partial correlation is the correlation between 2 random variables while holding other variables constant. To calculate the partial correlation between X and Y while holding Z constant (or
controlling for the effect of Z, or averaging out Z),
1) perform a normal linear least-squares regression with X as the target and Z as the predictor
$X_i = \beta_0 + \beta_1Z_i + \varepsilon_i$
2) calculate the residuals in Step #1
$\text{residuals}(X)_i = X_i - \hat{X}_i$
3) perform a normal linear least-squares regression with Y as the target and Z as the predictor
$Y_i = \gamma_0 + \gamma_1Z_i + \tau_i$
4) calculate the residuals in Step #3
$\text{residuals}(Y)_i = Y_i - \hat{Y}_i$
5) calculate the correlation coefficient between the residuals from Steps #2 and #4; the result is the partial correlation between X and Y while controlling for the effect of Z
$\text{Partial Correlation of X and Y Controlling for Z} = \hat{\text{Corr}}[\text{residuals}(X)_i, \text{residuals}(Y)_i]$
$\hat{\text{Corr}}$, of course, is an estimator of the correlation coefficient; 3 such common estimators are the Pearson product-moment correlation coefficient (Pearson’s r), the Spearman rank
correlation coefficient (Spearman’s rho), and the Kendall rank correlation coefficient (Kendall’s tau).
Example – Oxidation of Ammonia to Nitric Acid in the Stackloss Data Set
There is a built-in data set in R called stackloss with measurements on operations in a plant that oxidized ammonia (NH3) to make nitric acid (HNO3). This paper by Taylor, Chilton and Handforth
describes the process pretty clearly on its first page.
1) Pass a mixture of air and ammonia through red-hot catalytic wire gauze to make nitric oxide.
4NH3 + 5O2 → 4NO + 6H2O
2) As the nitric oxide cools, it reacts with excess oxygen to make nitrogen dioxide.
2NO + O2 → 2NO2
3) The nitrogen dioxide reacts with water to form nitric acid.
3NO2 + H2O ⇌ 2HNO3 + NO
I want to calculate the partial correlation coefficient between air flow speed and water temperature while controlling for acid concentration. Following the procedure above, here is the plot of the
relevant residuals.
The Pearson correlation coefficient is a commonly used estimator for the correlation coefficient, but hypothesis testing based on Pearson’s r is known to be problematic when dealing with non-normal
data or outliers (Bishara and Hittner, 2012). Let’s check for the normality underlying our data. Here is the Q-Q plot of the residuals of regressing air flow on acid concentration.
Note the 2 "dips" below the identity line in the intervals [-5, 0] and [0, 10] along the horizontal axis. These "dips" suggest to me that there is a systematic deviation away from normality for these residuals.
Furthermore, recall from my earlier blog post that you can check for normality by the ratio of the interquartile range to the standard deviation; for a normal distribution, the quartiles sit at ±0.6745 standard deviations from the mean, so this ratio is 2 × 0.6745 ≈ 1.35. If this ratio is far from 1.35, then there is reason to suspect that the distribution is not normal. For the residuals from regressing air flow on acid concentration, this ratio is 0.75, which is far enough away from 1.35 to suggest non-normality.
Given that there are only 21 residuals, and given that the 2 ways to check for normality above suggest non-normality, it is best to conclude that there is insufficient evidence to suggest that the
data come from a normal distribution. (This is different from asserting that the data definitely come from a non-normal distribution. Given the low sample size, I just can’t be certain that the
population is normally distributed.) Thus, I will not use Pearson's r to estimate the partial correlation. (Someone with the handle ars in a discussion thread in Cross Validated referred to a paper
by Kowalski (1972) about Pearson’s r for further thoughts.)
I estimate the partial correlation using Spearman’s rho, which, using the cor() function, is calculated as 0.5976564. However, is this estimate statistically significant? To answer this question,
we can conduct a hypothesis test.
For Spearman’s rho, the sampling distribution is
$T = \hat{\rho}_p \sqrt{(n - 2 - k) \div (1 - \hat{\rho}_p^2)}$,
$T \sim t_{n-2-k}$
In our example, T = 3.162624, and the p-value is 0.005387061. Thus, there is evidence to suggest that this correlation is significant.
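As a quick check on that arithmetic: with $\hat{\rho}_p = 0.5976564$, $n = 21$ and $k = 1$, the formula gives

$T = 0.5976564 \sqrt{\frac{21 - 2 - 1}{1 - 0.5976564^2}} = 0.5976564 \sqrt{\frac{18}{0.6428}} \approx 0.5976564 \times 5.2917 \approx 3.1626,$

which matches the reported test statistic, and comparing $|T|$ against the $t_{18}$ distribution gives the two-sided p-value of about 0.0054.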
In comparing Spearman's rho and Kendall's tau in a set of presentation slides titled Overview of Non-Parametric Statistics, Mark Seiss from the Department of Statistics at Virginia Tech stated
”…most researchers conclude there is little basis to choose one over the other”. Thus, I chose to use Spearman’s rho purely out of my familiarity with it. I welcome your comments about why
Kendall’s tau is better.
In a discussion at Cross Validated, someone with the handle onestop referred to a paper by Newson (2002) that cited a book by Kendall & Gibbons (1990); onestop reproduced this
quotation: "…confidence intervals for Spearman's $r_S$ are less reliable and less interpretable than confidence intervals for Kendall's τ-parameters, but the sample Spearman's $r_S$ is much more easily calculated without a computer". I'm curious about whether or not this result is transferable to hypothesis testing.
Bishara, A. J., & Hittner, J. B. (2012). Testing the significance of a correlation with nonnormal data: Comparison of Pearson, Spearman, transformation, and resampling approaches.
Andy Field, Jeremy Miles, and Zoe Field. (2012). Discovering statistics using R. Sage Publications Limited.
Kowalski, Charles J. “On the effects of non-normality on the distribution of the sample product-moment correlation coefficient.” Applied Statistics (1972): 1-12.
Newson R. Parameters behind “nonparametric” statistics: Kendall’s tau,Somers’ D and median differences. Stata Journal 2002; 2(1):45-64.
Taylor, Guy B., Thomas H. Chilton, and Stanley L. Handforth. “Manufacture of Nitric Acid by the Oxidation of Ammonia1.” Industrial & Engineering Chemistry23.8 (1931): 860-865.
R Script for Entire Analysis
Here is the R code for computing everything in this analysis:
##### Partial Correlation - An Example with the Oxidation of Ammonia to Make Nitric Acid
##### By Eric Cai - The Chemical Statistician
# extract variables from the Stackloss data set (already built in R)
air.flow = stackloss[,1]
water.temperature = stackloss[,2]
acid = stackloss[,3]
# conduct normal linear least-squares regression with air.flow as target and acid as predictor
regress.air.acid = lm(air.flow~acid)
# extract residuals from this regression
residuals.air.acid = residuals(regress.air.acid)
# conduct normal linear least-squares regression with water.temperature as target and acid as predictor
regress.water.acid = lm(water.temperature~acid)
# extract residuals from this regression
residuals.water.acid = residuals(regress.water.acid)
# plot the 2 sets of residuals against each other to see if there is a linear trend
png('INSERT YOUR DIRECTORY PATH HERE/residuals plot air flow and water temperature controlling acid concentration.png')
plot(residuals.air.acid, residuals.water.acid, xlab = 'Residuals from Regressing Air Flow on Acid Concentration', ylab = 'Residuals from Regressing Water Temperature on Acid Concentration', main = 'Partial Correlation Between Air Flow and Water Temperature\n Controlling for Acid Concentration')
dev.off() # close the PNG device so the file is written to disk
# normal q-q plot of residuals of regressing air flow on acid to check for normality
png('INSERT YOUR DIRECTORY PATH HERE/q-q plot air flow residuals on acid concentration.png')
qqplot(qnorm(seq(0,1,length=101), mean(residuals.air.acid), sd(residuals.air.acid)), residuals.air.acid, xlab = 'Theoretical Quantiles from Normal Distribution', ylab = 'Sample Quantiles of Air Flow Residuals', main = 'Normal Quantile-Quantile Plot \n Residuals from Regressing Air Flow on Acid Concentration')
# add the identity line
abline(0, 1)
dev.off() # close the PNG device
# check if the ratio of interquartile range to standard deviation is 1.35 (this is a check for normality)
five.num.summary.airflow.acid = summary(residuals.air.acid)
q3.airflow.acid = five.num.summary.airflow.acid[5]
q1.airflow.acid = five.num.summary.airflow.acid[2]
(q3.airflow.acid - q1.airflow.acid)/sd(residuals.air.acid)
# sample size
n = length(air.flow)
# use Spearman correlation coefficient to calculate the partial correlation
partial.correlation = cor(residuals.air.acid, residuals.water.acid, method = 'spearman')
# calculate the test statistic and p-value of the Spearman correlation coefficient
test.statistic = partial.correlation*sqrt((n-3)/(1-partial.correlation^2))
p.value = 2*pt(abs(test.statistic), n-3, lower.tail = F)
Filed under:
Applied Statistics
Basic Chemistry
Descriptive Statistics
R programming
applied statistics
basic chemistry
correlation coefficient
descriptive statistics
kendall correlation
kendall correlation coefficient
kendall's tau
linear regression
nitric acid
nitric oxide
nitrogen dioxide
normality test
partial correlation
partial correlation coefficient
pearson correlation
pearson correlation coefficient
pearson's r
q-q plot
qq plot
quantile-quantile plot
spearman correlation
spearman correlation coefficient
spearman's rho | {"url":"http://www.r-bloggers.com/how-to-calculate-a-partial-correlation-coefficient-in-r-an-example-with-oxidizing-ammonia-to-make-nitric-acid/","timestamp":"2014-04-18T20:44:58Z","content_type":null,"content_length":"60103","record_id":"<urn:uuid:6f391e13-dd6c-492c-9e9b-dd8de147a873>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random Sampling
What is Random Sampling?
One of the best ways to achieve unbiased results in a study is through random sampling. Random sampling includes choosing subjects from a population through unpredictable means. In its simplest form,
subjects all have an equal chance of being selected out of the population being researched.
In random sampling, three methods are most common when conducting surveys. Random number tables (and, more recently, random number generators) tell researchers to select subjects at a randomly generated interval. Mathematical algorithms for pseudo-random number generators may also be used. Another method is physical randomization devices, which could be as simple as a deck of playing cards, or an electronic device such as ERNIE.
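As a rough illustration of the pseudo-random generator approach (the population and sample size below are made up for the example), drawing a simple random sample takes only a couple of lines in Python:

import random

population = list(range(1, 10001))        # a hypothetical population of 10,000 labeled subjects
sample = random.sample(population, k=100) # every subject has an equal chance of selection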
One of the biggest benefits of using random sampling in a survey is the fact that, since subjects are obviously randomized, it is the best way to ensure that results are unbiased. It is also much
faster and often less expensive to use random sampling and as a result is a much more efficient way to obtain results. Additionally, random sampling consistently provides results that are valid,
making it easy for researchers to draw conclusions about large populations.
As with any survey, there is no way to guarantee that the results that come from a sample in a random survey are 100% accurate, although the results do tend to be more accurate than those obtained
through other methods. The sample may not be representative of the larger population, which can incur a sampling error, but the chance of this occurring can be determined early in the survey by
mathematical theories. Despite the problems associated with this method, it’s important to remember that every survey comes with measures of uncertainty.
When to Use Random Sampling
When surveying a large population it may not make sense to survey everyone in the population, as this would be very time consuming and often quite expensive. Random sampling in this case would be
proportionate to the size of the population, and the results from surveying the samples would be later used to infer how the population as a whole may have responded and to draw conclusions about the
larger group.
Interpreting Data
Once the random sampling survey has been conducted, the next step is to interpret the data received from the selected group. It’s necessary to organize the information that has been gathered before
analyzing the data. It's important to determine the confidence and error levels in the survey as well, to make sure the data are as accurate as possible. You may find that the data approximately follow a particular distribution, often the Gaussian (normal) distribution, which can be checked by ranking and comparing results.
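For instance, with made-up survey responses standing in for real data, a chosen confidence level translates into a margin of error roughly like this (using the Gaussian approximation mentioned above):

import math, random, statistics

responses = [random.gauss(50, 10) for _ in range(400)]   # stand-in survey data

mean = statistics.mean(responses)
se = statistics.stdev(responses) / math.sqrt(len(responses))
margin = 1.96 * se                                       # 95% confidence level
print("estimate:", round(mean, 1), "+/-", round(margin, 1))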
Random sampling is a quick and easy way to obtain unbiased results about a population being surveyed. Because many other methods of surveying can come with a huge risk of bias, random sampling is
often a top choice when designing surveys. Despite the margin of error that comes with any survey, random sampling is the best way to get the most accurate information. | {"url":"http://www.randomsampling.org/","timestamp":"2014-04-21T01:59:27Z","content_type":null,"content_length":"7087","record_id":"<urn:uuid:f64b1180-0b2e-49ac-90a1-bfbd74385e38>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
hi shivusuja
That's what I thought the question meant. Problem is: those angles are not equal.
I use geometry software called Sketchpad. It's a vector geometry program and can give measurements of lengths and angles, accurate to many decimal places. I've got it set to 2dp. When I select points
to make measurements, the software automatically assigns letters in order (A, B C ...) so the point P has label E, the centre of the circle has label F, and the point Q has label G.
So the angles you want are shown as BCG, GCD, BAE and EAD. The screenshot below shows the Sketchpad values.
I don't know what to suggest next.
Where did the question come from? Have you worded it exactly as set?
Last edited by bob bundy (2013-03-02 20:42:44)
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=255537","timestamp":"2014-04-20T13:38:04Z","content_type":null,"content_length":"23720","record_id":"<urn:uuid:d100befc-81dc-48c6-ab3d-50124b3fd9e2>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Delta Sigma Converters: Modulation
View from the Peak
February 9th, 2010 by Mike Perkins
The web is filled with introductions to Delta Sigma modulation (also sometimes referred to as Sigma Delta modulation) in the context of Delta Sigma converters. Unfortunately, the ones I’ve looked at
fail to intuitively motivate how the modulator works. Therefore, my goal in this post is to show how the structure of a first-order ΔΣ modulator can be simply understood. In particular, I show how
its structure can be derived from a trivial 10-line C program that performs the ΔΣ modulator function.
The basic problem solved by a ΔΣ modulator is this: given a fixed input value ν that lies in the range [-1, 1], the modulator outputs a pulse train satisfying the following criteria:
• Each pulse has the same fixed width τ
• Each pulse has an amplitude of -1 or 1 only
• The time average value of the pulse train emitted converges to ν
To derive from first principles a system diagram to perform this task, I find it useful to proceed as follows: First, write down a few equations to make precise what we’re after; next, write a
computer program that emits a pulse train satisfying the equations; and finally, draw a block diagram that realizes the computer program.
So let’s get started. First, a waveform f(t) has an average value on the interval [0, T] of

avg = (1/T) * ∫[0,T] f(t) dt
Let l(i) be the magnitude of the pulse of duration τ emitted in the interval [iτ, (i + 1)τ]; l(i) must be 1 or ‑1. The average value of the pulse train waveform f(t) corresponding to a sequence of N pulses is given by

avg = (1/(Nτ)) * Σ[i=0 to N-1] l(i)τ = S(N)/N

So we don’t need to worry about τ, which is nice! Here we have defined S(N) to be the sum of the l(i) values from 0 to N – 1, so S(1) = l(0), S(2) = l(0) + l(1), and so on.
How can we write a computer program to decide whether the next pulse to output should be a 1 or a -1? I think the first algorithm most of us would think of is the following: “keep track of the
average value, S(N) / N, at each instant. If the current average is less than ν, output a 1 next; otherwise output a -1.” I will skip a formal proof that the pulse train emitted by this algorithm
converges as desired, but it is intuitively reasonable that it does, because each additional output pulse moves the average up or down by a smaller amount than the previously emitted pulse, and each
pulse nudges the average in the desired direction.
It’s straightforward to write a computer program that accomplishes the above. In C-like pseudo code the program looks like this:
while (1) {
    S_N = compute_current_S_N();
    if ( (S_N/N) < v )
        output(1);    /* emit a pulse of amplitude 1 */
    else
        output(-1);   /* emit a pulse of amplitude -1 */
}
This can be equivalently written as
while (1) {
    S_N = compute_current_S_N();
    if ( N*v - S_N > 0 )
        output(1);
    else
        output(-1);
}
Now it’s time to draw a system diagram to implement this algorithm. Clearly it will contain some sort of feedback, because the pulse value to output next depends on the past sequence of output
values. There will also be a need to compute a difference, and somehow we will need to accumulate the values S(N) and Nν. Consider the system diagram below, which takes the constant value ν as its input and produces the pulse stream l as its output, fed back through the feedback line l. With respect to the feedback line, the box with a z in it means “delay by one time instant”; the indexes of l on either side of this box reflect that delay. The box with a
plus sign in it is an instantaneous summer, which has been configured in this case to take the difference between its two inputs. The large box is a quantizer, and the graph within it shows the input
/output transfer function q( ). The graph means that any input value greater than 0 will result in an output value of 1, while any input value less than 0 will result in an output value of ‑1.
Finally, the box with a capital sigma (Σ) in it is an accumulator, also known as an integrator. Its output is the sum of all its previous inputs.
To see if this system diagram results in the required stream of pulses l(i), we need to analyze the variable y. This is most easily done by creating a table as shown below:
│ N │ l(N-1) │ y(N) │ l(N) │
│ 0 │ N/A │ y(0) is initialized to 0 │ l(0) is initialized to 1 │
│ 1 │ l(0) = 1 │ y(1) = ν-l(0) │ l(1)=q( y(1) ) │
│ 2 │ l(1) │ y(2) = (ν-l(0)) + (ν-l(1)) │ l(2) = q( y(2) ) │
│ │ │ = 2ν – l(0) – l(1) │ │
│ 3 │ l(2) │ y(3) = 3ν – l(0) – l(1) – l(2) │ l(3) = q( y(3) ) │
│ │ │ = 3ν – S(3) │ │
│ …and in the general case │
│ N │ l(N-1) │ y(N) = Nν – S(N) │ l(N) = q( y(N) ) │
The quantizer is executing the “if then” portion of the computer program, and the summer and accumulator are computing the running sum needed as the argument for the if function (i.e., as the input
to the quantizer). This system will do the trick!
This diagram is often written in a slightly different form to more closely model its realization in hardware as follows:
When used as part of an analog-to-digital converter, the modulator is followed by a digital filter and a decimator. The digital filter computes the averages encoded in the bitstream; the decimator
reduces the sample rate from the high rate of the 1-bit ADC to the Nyquist rate associated with the final multi-bit samples. The binary representation of the decimated samples is used as the AD
output bits.
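If you'd like to see the convergence for yourself before Part II, the table above translates directly into a few lines of code. Here is a rough Python sketch of the modulator (the input value 0.3 and the pulse count are arbitrary choices of mine, not from the article):

# first-order delta-sigma modulator, following the table above:
# y accumulates (v - l) and the quantizer q() picks the next pulse
def delta_sigma(v, n_pulses):
    y = 0.0
    l = 1                            # l(0) is initialized to 1
    pulses = []
    for _ in range(n_pulses):
        pulses.append(l)
        y += v - l                   # y(N) = y(N-1) + (v - l(N-1))
        l = 1 if y > 0 else -1       # l(N) = q( y(N) )
    return pulses

pulses = delta_sigma(v=0.3, n_pulses=10000)
print(sum(pulses) / len(pulses))     # running average approaches 0.3

The running average of the emitted pulses creeps toward ν exactly as the earlier argument suggests.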
Part II discusses filtering and decimation in greater detail, and presents some simulation results.
Mike Perkins, Ph.D., is a managing partner of Cardinal Peak and an expert in algorithm development for video and signal processing applications.
Tags: delta sigma | {"url":"http://www.cardinalpeak.com/blog/delta-sigma-converters-modulation/","timestamp":"2014-04-16T15:59:02Z","content_type":null,"content_length":"30936","record_id":"<urn:uuid:10ed134b-936d-4478-85d5-49167c5a0db6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Tools
Offers hundreds of lesson plans and learning activities for teachers and students from prekindergarten through calculus. You can quickly explore the lessons and downloadable software tools by grade
level and topic (number theory, integers, etc.).
Help with Algebra!
Lessons range from the preliminaries (absolute value, negative numbers, etc.) to intermediate and advanced algebra that can challenge high flying math students.
Volunteer network of mathematicians.
Massacshuetts DOE Mathematics Curriculum Framework
All of the activities, problems, and definitions are linked directly to learning standards, enabling teachers to focus on a particular standard or on a subject area. These will be continually updated
and expanded with newly released MCAS items.
National Library of Virtual Manipulatives for Interactive Mathematics
With the aid of JAVA applets, students can visualize such concepts as the Pythagorean Theorem or comparing fractions.
Figure This!
Features real world math questions designed to motivate middle school students to learn higher level math operations involving angles, volumes, number patterns, etc.
University of Tennessee Math Archives
Offers a host of math-related links, teaching materials, and shareware for teachers and students at all levels of math.
Allows one to quickly access a wide range of economic data from the United States and all over the world. Information about our nation's inflation, unemployment levels, productivity, new factory
orders, and the price of crude oil can all be accessed. The homepage also contains links to economic data from Canada, Britain, Germany, the European Union, France, Italy, Russia, and China.
Math DL: Loci
Brings together a wide range of educational resources, interesting pieces of math history, and other ephemera for general consumption.
MAA Minute Math
Features a host of problems from the MAA's American Mathematics Competitions. Questions deal with sums, geometry, and positive integers. | {"url":"https://www.fivecolleges.edu/partnership/resources/resources_math","timestamp":"2014-04-19T09:44:27Z","content_type":null,"content_length":"24817","record_id":"<urn:uuid:e1f095a8-ce3c-4df0-ad5f-f54084d8692d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
chain rule
2.) g(x) = [6*x]/[(x^3 - 5)^(1/4)]

Let's change it a little. We could do the quotient rule, but that's boring and more tedious in my opinion:

g(x) = [6*x]*[(x^3 - 5)^(-1/4)]

For the derivative, you have to do the product rule AND the chain rule: the derivative of the 1st term times the second, plus the derivative of the second term times the 1st:

g'(x) = 6*[(x^3 - 5)^(-1/4)] + [(-1/4)*(x^3 - 5)^(-5/4)*3*x^2]*(6*x)

Simplify that and you're done. It simplifies to:

g'(x) = [3*x^3 - 60]/[2*(x^3 - 5)^(5/4)]
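In case the simplification step is the sticking point, here it is written out in LaTeX (the same algebra as above, nothing new):

\[
g'(x) = \frac{6}{(x^3-5)^{1/4}} - \frac{9x^3/2}{(x^3-5)^{5/4}}
      = \frac{6(x^3-5) - \frac{9x^3}{2}}{(x^3-5)^{5/4}}
      = \frac{\frac{3x^3}{2} - 30}{(x^3-5)^{5/4}}
      = \frac{3x^3 - 60}{2(x^3-5)^{5/4}}
\]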
3.) f(x) = [(3*x^2 + 5)^4]/[(2 - 3*x^4)^3]

I'll use the quotient rule this time to show some variety and hopefully help you solve some of the other questions on your own. The quotient rule states:

[f(x)/g(x)]' = [f'(x)*g(x) - f(x)*g'(x)]/[(g(x))^2]

In other words, simply put: the derivative of the top times the bottom, minus the top times the derivative of the bottom, all over the bottom squared. For your problem:

f'(x) = [[4*(3*x^2 + 5)^3*6*x]*[(2 - 3*x^4)^3] - [(3*x^2 + 5)^4]*[3*(2 - 3*x^4)^2*(-12*x^3)]]/[(2 - 3*x^4)^3]^2

It says to leave it in unsimplified form, so there you have it. Remember, when you take the derivative of a composition of terms to a power, you have to take the power, place it in front, multiply by the original, reduce the power, and then remember to go back and take the derivative of the inside.
4.) Your turn: by now you should be able to do #4 very quickly. Do it yourself. I will give you what the answer will be. See if your answer matches up. h'(1) = 7.408. EDIT: It asks for h'(1) to two
decimal places. Thus, 7.41. | {"url":"http://mathhelpforum.com/calculus/8522-chain-rule-print.html","timestamp":"2014-04-20T19:43:07Z","content_type":null,"content_length":"6402","record_id":"<urn:uuid:150eac29-7d7c-4193-b1b4-04f29ee0e24d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
If you want to box your contents
October 14th 2010, 09:47 AM #1
The command \boxed{Contents}, wrapped in [math]...[/math] tags, will draw a box around its contents, which can be very handy. The following website:
LaTeX Spaces and Boxes
has variations worth looking into.
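For example (using the quadratic formula as the contents):

[math]\boxed{x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}[/math]

renders the formula inside a neat box.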
| {"url":"http://mathhelpforum.com/math/159622-if-you-want-box-your-contents.html","timestamp":"2014-04-20T16:35:18Z","content_type":null,"content_length":"26617","record_id":"<urn:uuid:e39e865c-fcfc-4680-af86-aed4a42d76f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra problem: Can you help?
General Question
Afternoon, I’m just doing a bit of algebra revision on the BBC GCSE Bitesize website and one question has stumped me: I can’t figure out where I’m going wrong.
Here is the question:
4. If a = b – c, then:
a. c = b – a
b. c = a + b
c. c = a – b
Apparently a is the right answer, but I can’t work out how to get there.
I’ve been taught that when letters/numbers change sides they change their sign as well, therefore I answered b.
Any help is appeciated :) (I could just ask my Maths teacher on Monday but I’m rather impatient)
8 Answers
Yes, but “a” changed sides as well; therefore it would be negative in the answer…
Yes, as above: -c = a - b, therefore +c = -a + b.
Add C to both sides and subtract A from both sides, voila !
Keep in mind that you can do whatever you want to one side of the equation as long as you do the exact same operation to the other side.
Welcome to Fluther.
@dabbler has given you the correct answer and the methodology.
Ah right I see where I was going wrong now. Thank you.
The above answers are correct. I’ll now show it in even more detail.
a = b – c
a + c = b – c + c
a + c = b
a + c – a = b – a
c = b – a
You can always substitute numbers to check your answer.
if a = 7, b = 10 and c = 3, then the answer says 3 = 10 - 7
On a multiple choice test, you can use this approach to get the answer by process of elimination, but I would not recommend doing that.
I’ve been taught that when letters/numbers change sides they change their sign as well… Not a very reliable rule, as you’ve already seen. A better rule, which you can always count on, is that you
must perform the same operation on both sides of the equals sign – as mentioned by @dabbler.
In this problem, a = b – c, you can add c to both sides, then subtract a from both sides, to arrive at the result c = b – a.
This question is in the General Section. Responses must be helpful and on-topic. | {"url":"http://www.fluther.com/150796/algebra-problem-can-you-help/","timestamp":"2014-04-16T22:53:31Z","content_type":null,"content_length":"39567","record_id":"<urn:uuid:8581197b-83b3-40d4-a888-5952e7f19cb4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Least index resolution of degeneracy in quadratic programming
- MATH. OPER. UND STAT. SER. OPTIMIZATION, 1992
Cited by 13 (8 self)
Three generalizations of the criss-cross method for quadratic programming are presented here. Tucker's, Cottle's and Dantzig's principal pivoting methods are specialized as diagonal and exchange
pivots for the linear complementarity problem obtained from a convex quadratic program. A finite criss-cross method, based on least-index resolution, is constructed for solving the LCP. In proving
finiteness, orthogonality properties of pivot tableaus and positive semidefiniteness of quadratic matrices are used. In the last section some special cases and two further variants of the quadratic
criss-cross method are discussed. If the matrix of the LCP has full rank, then a surprisingly simple algorithm follows, which coincides with Murty's `Bard type schema' in the P matrix case. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=9430469","timestamp":"2014-04-19T14:19:42Z","content_type":null,"content_length":"12598","record_id":"<urn:uuid:f8218f4f-52e5-4096-8e70-2378d6676718>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
A conic is the intersection of a plane and a right circular cone. The four basic types of conics are parabolas, ellipses, circles, and hyperbolas. We've already discussed parabolas and circles in
previous sections, but here we'll define them a new way. Study the figures below to see how a conic is geometrically defined.
Figure: The four basic types of conics
In the conics above, the plane does not pass through the vertex of the cone. When the plane does intersect the vertex of the cone, the resulting conic is called a degenerate conic. Degenerate conics
include a point, a line, and two intersecting lines.
The equation of every conic can be written in the following form: Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0. This is the algebraic definition of a conic. Conics can be classified according to the coefficients of this equation.
The discriminant of the equation is B^2 - 4AC. Assuming a conic is not degenerate, the following conditions hold true: If B^2 - 4AC > 0, the conic is a hyperbola. If B^2 - 4AC < 0, the conic is a circle or an ellipse. If B^2 - 4AC = 0, the conic is a parabola.
Another way to classify conics has to do with the product of A and C, and applies when the equation has no xy term (B = 0). Assuming a conic is not degenerate, the following conditions then hold true: if AC > 0, the conic is an ellipse or a circle. If AC < 0, the conic is a hyperbola. If AC = 0, and A and C are not both zero, the conic is a parabola. Finally, if A = C (and A is nonzero), the conic is a circle.
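Assuming the conic is non-degenerate, the discriminant test above translates directly into code. A small Python sketch (the function name is mine, not from the text):

def classify_conic(A, B, C):
    # classify a non-degenerate conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0;
    # only A, B, C matter, since D, E, F do not affect the type
    disc = B**2 - 4*A*C
    if disc > 0:
        return "hyperbola"
    if disc < 0:
        return "circle" if B == 0 and A == C else "ellipse"
    return "parabola"

print(classify_conic(1, 0, 1))   # x^2 + y^2 + ... -> circle
print(classify_conic(1, 0, -1))  # x^2 - y^2 + ... -> hyperbola
print(classify_conic(1, 0, 0))   # x^2 + ...       -> parabola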
In the following sections we'll study the other forms in which the equations for certain conics can be written, and what each part of the equation means graphically. | {"url":"http://www.sparknotes.com/math/precalc/conicsections/section1.rhtml","timestamp":"2014-04-19T07:01:53Z","content_type":null,"content_length":"53356","record_id":"<urn:uuid:321750f9-218f-4839-a590-081d94b459c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trying to solve any equation ever with Recurrences
So, I'm trying to make up this theory of solving any equation ever using recurrences... I'll show you what I mean
Consider the quadratic function -
f(x) = x^2 - 3x + 2
Well, obviously you could do this the easy way and do factoring and figure out x = 2 or 1... no big deal right?
But what if we try a different way... albeit harder and more complicated... but Itll make a bit more sense later in this post
So what if we do this
x^2 - 3x + 2 = 0
x(x) - 3x + 2 = 0
x(x) = 3x-2
x = (3x-2)/x
So what does this mean?
Well, plug in a random number into your calculator... say 10...
Now do (3x-2)/x, and you'll get 2.8...
Now (if you're using a TI-84 like me), plug in ((3*Ans)-2)/Ans
So Ans = 2.8, this should yield 2.28
Now Ans = 2.28, this should yield 2.125
From 2.125, you get 2.0588....
Basically, If you do this an infinite amount of times, it will converge to 2... which is one of the solutions of the equation...
Now say you rearrange the equation differently...
x^2 -3x+2 = 0
x(x) - 3x + 2 = 0
(x-3)x = -2
x = (-2)/(x-3)
If you do the same method as above, it will converge to 1 for any initial value you put in for x, which is the other zero for the quadratic equation...
So... why am I wasting my time with this?
Well, consider this equation
4x = e^(.5x)
Well... idk how to solve this equation.... but what if we do the thing I just did, where we set one side of the equation to x then infinitely compute the other side...
4x = e^(.5x)
ln(4x) = .5x
2ln(4x) = x
If you pick 20 for the first x, 2ln(4x) will yield 8.76, then 7.11, then 6.69, and so on, until it converges to a number 6.523371369...
Is this right? well
4(x) = e^(.5x)
4(6.523371369) = e^(.5(6.523371369))
26.09 = e^(3.26)
26.09 = 26.09
So... thats the solution.... by using an infinite recursion of setting one side of the equation to x, then we will get some sort of solution....
Now... this doesnt always work... for example, say we did (from the previous equation)
4x = e^(.5x)
x = (e^(.5x))/4
If you do this, then x will converge to infinity. Now you may think that this is wrong... but I feel this is right, since they both technically (intersect) at infinity. In theory, 4(infinity) = e^
(infinity).... so is it really wrong? I dont believe so....
Anyway... my theory is that you can find a solution of an equation when you set the equation to the form of
x = f(x)
Using an infinite recursive calculation.
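For what it's worth, this idea is usually called fixed-point iteration: rearrange the equation as x = g(x) and repeatedly apply g. It converges to a solution x* whenever |g'(x*)| < 1, which is also why some rearrangements run off to infinity, and why the approach can look overdamped (monotone approach, when 0 < g'(x*) < 1) or underdamped (oscillating approach, when -1 < g'(x*) < 0). A minimal Python sketch (the tolerance and iteration cap are arbitrary choices of mine):

import math

def fixed_point(g, x0, tol=1e-12, max_iter=10000):
    # iterate x <- g(x) until successive values stop moving
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # did not settle within max_iter

# the 4x = e^(.5x) example, rearranged as x = 2*ln(4x)
print(fixed_point(lambda x: 2 * math.log(4 * x), 20.0))  # ~6.523371369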
Now, I really don't know how to dive deeper into this theory, since I think it involves math that's super crazy (more than I know). I'm a computer engineer, so I don't have the biggest arsenal of theoretical math at my disposal. I do know that this might have something to do with damping, since some of the recurrences will have an overdamped, underdamped, or critically damped response while converging to the solution... but... I don't know...
So... any thoughts on this? Any suggestions on how to move forward with this? Or has someone done this before?
And no, this does not have anything to do with homework; this is just fun theoretical stuff I'm doing on my own, and I would like input/other ideas to help me move forward with this. If this post is
deemed by the admins to not belong here, then Ill delete this post (no questions asked) and move it to the Homework Section (or wherever they tell me to put it). All I want is some outside input for | {"url":"http://www.physicsforums.com/showthread.php?p=4289020","timestamp":"2014-04-19T02:24:24Z","content_type":null,"content_length":"59449","record_id":"<urn:uuid:1f3cc212-f99c-4d57-9a55-2859d53ca59a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
White Noise
Tag Archives: White Noise
This post covers quite a wide range of concepts in volatility modeling relating to long memory and regime shifts. The post discusses autocorrelation, long memory, fractional integration, black noise,
white noise, Hurst Exponents, regime shift detections, Asian markets and various topics froms nonlinear dynamics. Continue reading
Posted in ARFIMA, Asian markets, Black Noise, Correlation Dimension, Correlation Integral, FIGARCH, Forecasting, Fractional Brownian Motion, Fractional Integration, Henon Attractor, Hurst Exponent,
Logistic Attractor, Long Memory, Modeling, Nonlinear Dynamics, Pink Noise, Regime Shifts, Strange Attractor, Uncategorized, Volatility Modeling, White Noise Tagged ARFIMA, Black Noise, Forecasting,
Fractional Brownian Motion, Fractional Integration, Henon Attractor, Logistic Attractor, Long Memory, Regime Shifts, Strange Attractor, Volatility, Volatility Dynamics, White Noise 3 Comments | {"url":"http://jonathankinlay.com/index.php/tag/white-noise/","timestamp":"2014-04-21T12:45:50Z","content_type":null,"content_length":"44812","record_id":"<urn:uuid:acbe38b4-176d-4d6d-8ed3-0b09706da161>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
River Oaks, TX Math Tutor
Find a River Oaks, TX Math Tutor
...I still use the formula capability often. I do not have practice with pivot tables, but do have good knowledge of general techniques. I gained considerable OTJ experience writing complex SQL
queries for a large mainframe relational database.
15 Subjects: including algebra 1, algebra 2, calculus, chemistry
...All difficulties in mathematics can be broken down into very simple pieces and our minds seem to do this with natural ease when we allow it to happen. Trust me, nothing is too difficult to
conquer. I look forward to helping you soon! - JanetI have taught pre-algebra in High School and tutored many students as well.
20 Subjects: including probability, algebra 1, algebra 2, prealgebra
...I have done test preparation tutoring for individuals studying for the verbal/writing part of the SAT, for the ISEE, and for the TExES, as well as for Special Ed. Certification. I have also
taught an SAT verbal/writing class for the Nepalese Society in Irving, Texas.
43 Subjects: including logic, algebra 1, prealgebra, writing
...Her grades improved drastically. She had a hard time seeing what the teacher was doing, but my way was easy to understand. Algebra 2 is a difficult course, but it is a challenge that can be
conquered with the right help.
10 Subjects: including algebra 1, algebra 2, grammar, geometry
My goal as an instructor/tutor is to help students appreciate chemistry and apply it in the real world. I am currently teaching my second semester of chemistry lecture and lab, and have
approximately four years of peer tutoring experience, ranging from general chemistry, to organic chemistry, and s...
7 Subjects: including algebra 2, algebra 1, chemistry, genetics
Related River Oaks, TX Tutors
River Oaks, TX Accounting Tutors
River Oaks, TX ACT Tutors
River Oaks, TX Algebra Tutors
River Oaks, TX Algebra 2 Tutors
River Oaks, TX Calculus Tutors
River Oaks, TX Geometry Tutors
River Oaks, TX Math Tutors
River Oaks, TX Prealgebra Tutors
River Oaks, TX Precalculus Tutors
River Oaks, TX SAT Tutors
River Oaks, TX SAT Math Tutors
River Oaks, TX Science Tutors
River Oaks, TX Statistics Tutors
River Oaks, TX Trigonometry Tutors
Nearby Cities With Math Tutor
Azle Math Tutors
Blue Mound, TX Math Tutors
Forest Hill, TX Math Tutors
Fort Worth Math Tutors
Fort Worth, TX Math Tutors
Kennedale Math Tutors
Lake Worth, TX Math Tutors
Richland Hills, TX Math Tutors
Saginaw, TX Math Tutors
Sansom Park, TX Math Tutors
Watauga, TX Math Tutors
Westover Hills, TX Math Tutors
Westworth Village, TX Math Tutors
White Settlement, TX Math Tutors
Willow Park, TX Math Tutors | {"url":"http://www.purplemath.com/river_oaks_tx_math_tutors.php","timestamp":"2014-04-17T13:05:27Z","content_type":null,"content_length":"23888","record_id":"<urn:uuid:deed1f52-09ab-4b82-bb3b-4c1281d4e3b8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Is the geometry of the world the source of what we call "math"?
anyway, my point was that the successor function, which is the basis of all mathematics, doesn't give us one, two, and three. it gives us first, second, and third. so the whole question of what is a 'number' may be meaningless.
for the record, I consider that very relevant to 'math'.
you have to distinguish between 'types' and 'instances'. instances are real world objects. types are not. distinguishing an instance is in theory no different from distinguishing a type. | {"url":"http://www.physicsforums.com/showpost.php?p=1857497&postcount=20","timestamp":"2014-04-16T04:31:15Z","content_type":null,"content_length":"7339","record_id":"<urn:uuid:cf79bd98-6592-44a6-8754-f444772bc823>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Search Results
The objective of the present study was to investigate the body-cognitive relationship through behavioral and electrophysiological measures in an attempt to uncover the underlying mediating neuronal
mechanism for movement-induced cognitive change. To this end we examined the effects of Quadrato Motor Training (QMT), a new whole-body training paradigm on cognitive performance, including
creativity and reaction time tasks, and electrophysiological change, using a within-subject pre-post design. Creativity was studied by means of the Alternate Uses Task, measuring ideational fluency
and ideational flexibility. Electrophysiological effects were measured in terms of alpha power and coherence. In order to determine whether training-induced changes were driven by the cognitive or
the motor aspects of the training, we used two control groups: Verbal Training (VT, identical cognitive training with verbal response) and Simple Motor Training (SMT, similar motor training with
reduced choice requirements). Twenty-seven participants were randomly assigned to one of the groups. Following QMT, we found enhanced inter-hemispheric and intra-hemispheric alpha coherence, and
increased ideational flexibility, which was not the case for either the SMT or VT groups. These findings indicate that it is the combination of the motor and cognitive aspects embedded in the QMT
which is important for increasing ideational flexibility and alpha coherence.
PMCID: PMC3559385 PMID: 23383043
Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated
motor commands, but recently many reject that idea in favor of a forward model hypothesis. In theory, the forward model predicts upcoming state during reaching movements so the motor cortex can
generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. OFC is a powerful tool for describing motor
control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an
adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to ‘re-tune’ the controller. Our simulations show that, as expected, forward model
adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be
sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements
that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects
of correlations on re-adaptation of the controller from the forward model cannot be overlooked.
Electronic supplementary material
The online version of this article (doi:10.1007/s10827-011-0350-z) contains supplementary material, which is available to authorized users.
PMCID: PMC3304072 PMID: 21792671
Optimal control; Motor adaptation; Forward model; Reaching movements
Brain 2008;131(11):2894-2903.
Children with autism exhibit a host of motor disorders including poor coordination, poor tool use and delayed learning of complex motor skills like riding a tricycle. Theory suggests that one of the
crucial steps in motor learning is the ability to form internal models: to predict the sensory consequences of motor commands and learn from errors to improve performance on the next attempt. The
cerebellum appears to be an important site for acquisition of internal models, and indeed the development of the cerebellum is abnormal in autism. Here, we examined autistic children on a range of
tasks that required a change in the motor output in response to a change in the environment. We first considered a prism adaptation task in which the visual map of the environment was shifted. The
children were asked to throw balls to visual targets with and without the prism goggles. We next considered a reaching task that required moving the handle of a novel tool (a robotic arm). The tool
either imposed forces on the hand or displaced the cursor associated with the handle position. In all tasks, the children with autism adapted their motor output by forming a predictive internal
model, as exhibited through after-effects. Surprisingly, the rates of acquisition and washout were indistinguishable from normally developing children. Therefore, the mechanisms of acquisition and
adaptation of internal models in self-generated movements appeared normal in autism. Sparing of adaptation suggests that alternative mechanisms contribute to impaired motor skill development in
autism. Furthermore, the findings may have therapeutic implications, highlighting a reliable mechanism by which children with autism can most effectively alter their behaviour.
PMCID: PMC2577807 PMID: 18819989
reach adaptation; prism adaptation; motor control; autism
Adaptation is sometimes viewed as a process where the nervous system learns to predict and cancel effects of a novel environment, returning movements to near baseline (unperturbed) conditions. An
alternate view is that cancellation is not the goal of adaptation. Rather, the goal is to maximize performance in that environment. If performance criteria are well defined, theory allows one to
predict the re-optimized trajectory. For example, if velocity dependent forces perturb the hand perpendicular to the direction of a reaching movement, the best reach plan is not a straight line but a
curved path that appears to over-compensate for the forces. If this environment is stochastic (changing from trial to trial), the re-optimized plan should take into account this uncertainty, removing
the over-compensation. If the stochastic environment is zero-mean, peak velocities should increase to allow for more time to approach the target. Finally, if one is reaching through a via-point, the
optimum plan in a zero-mean deterministic environment is a smooth movement, but in a zero-mean stochastic environment is a segmented movement. We observed all of these tendencies in how people adapt
to novel environments. Therefore, motor control in a novel environment is not a process of perturbation cancellation. Rather, the process resembles re-optimization: through practice in the novel
environment, we learn internal models that predict sensory consequences of motor commands. Through reward based optimization, we use the internal model to search for a better movement plan to
minimize implicit motor costs and maximize rewards.
PMCID: PMC2752329 PMID: 18337419
motor learning; motor adaptation; cerebellar damage; ataxia; optimal control; internal model
A recent controversy has emerged concerning the existence of long pauses, presumably reflecting bistability of membrane potential, in the cerebellar Purkinje cells (PC) of awake animals. It is
generally agreed that in the anesthetized animals and in vitro, these cells switch between two stable membrane potential states: a depolarized state (the ‘up-state’) characterized by continuous
firing of simple spikes (SS) and a hyperpolarized state (the ‘down-state’) characterized by long pauses in the SS activity. To address the existence of long pauses in the neural activity of
cerebellar PCs in the awake and behaving animal we used extracellular recordings in cats and find that approximately half of the recorded PCs exhibit such long pauses in the SS activity and
transition between activity – periods with uninterrupted SS lasting an average of 1300 ms – and pauses up to several seconds. We called these cells pausing Purkinje cells (PPC) and they can easily be
distinguished from continuous firing Purkinje cells. In most PPCs, state transitions in both directions were often associated (25% of state transitions) with complex spikes (CSs). This is consistent
with intracellular findings of CS-driven state transitions. In sum, we present proof for the existence of long pauses in the PC SS activity that probably reflect underlying bistability, provide the
first in-depth analysis of these pauses and show for the first time that transitions in and out of these pauses are related to CS firing in the awake and behaving animal.
PMCID: PMC2671936 PMID: 19390639
Purkinje cell; cerebellum; pauses; bistability; simple spike; complex spike
The compensatory eye movement (CEM) system maintains a stable retinal image, integrating information from different sensory modalities to compensate for head movements. Inspired by recent models of
the physiology of limb movements, we suggest that CEM can be modeled as a control system with three essential building blocks: a forward model that predicts the effects of motor commands; a state
estimator that integrates sensory feedback into this prediction; and, a feedback controller that translates a state estimate into motor commands. We propose a specific mapping of nuclei within the
CEM system onto these control functions. Specifically, we suggest that the Flocculus is responsible for generating the forward model prediction and that the Vestibular Nuclei integrate sensory
feedback to generate an estimate of current state. Finally, the brainstem motor nuclei – in the case of horizontal compensation this means the Abducens Nucleus and the Nucleus Prepositus Hypoglossi –
implement a feedback controller, translating state into motor commands. While these efforts to understand the physiological control system as a feedback control system are in their infancy, there is
the intriguing possibility that CEM and targeted voluntary movements use the same cerebellar circuitry in fundamentally different ways.
PMCID: PMC2786296 PMID: 19956563
cerebellum; model; control systems; vor; okr; vestibular nucleus; eye movements; forward model
Adaptability of reaching movements depends on a computation in the brain that transforms sensory cues, such as those that indicate the position and velocity of the arm, into motor commands.
Theoretical consideration shows that the encoding properties of neural elements implementing this transformation dictate how errors should generalize from one limb position and velocity to another.
To estimate how sensory cues are encoded by these neural elements, we designed experiments that quantified spatial generalization in environments where forces depended on both position and velocity
of the limb. The patterns of error generalization suggest that the neural elements that compute the transformation encode limb position and velocity in intrinsic coordinates via a gain-field; i.e.,
the elements have directionally dependent tuning that is modulated monotonically with limb position. The gain-field encoding makes the counterintuitive prediction of hypergeneralization: there should
be growing extrapolation beyond the trained workspace. Furthermore, nonmonotonic force patterns should be more difficult to learn than monotonic ones. We confirmed these predictions experimentally.
A computational model offers a unifying explanation of seemingly disparate findings from human reaching experiments
PMCID: PMC261873 PMID: 14624237 | {"url":"http://pubmedcentralcanada.ca/pmcc/solr/reg?pageSize=25&term=jtitle_s%3A(%22Brain%22)&sortby=score+desc&filterAuthor=author%3A(%22Donchin%2C+Opher%22)","timestamp":"2014-04-20T02:50:39Z","content_type":null,"content_length":"63716","record_id":"<urn:uuid:a8c22779-95fd-4ce8-bb9e-a41250ddbc6a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lecture 4
The fourth lecture covered slides 10-60 in the lecture slides.
You should have learnt about recursion, the definition of recursive functions, and recursive data structures (now you know that a list is a recursive data structure).
Remember to carry on reading chapters 4 to 7 in the textbook (pp. 67-176).
Extra note:
In the last two lectures you've been introduced to QuickCheck. QuickCheck does random testing on functions. This is a very useful tool for testing functions, but it relies on the fact that given some input, a function always has the same output. This is of course true in Haskell. If f(x) = y then f(x) always equals y.
You give QuickCheck your function and a property that is true of all results of the function. QuickCheck generates a set of random inputs to your function and then checks that for every input the
property is true. Of course, you have to be careful about the property you choose!
In other languages random testing is not quite as useful. A Java method, for example, does not always return the same result for some input. In Java f(x) does not always equal y. For example, f might
depend on something in memory or some user input. In Haskell this is not the case.
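QuickCheck itself is a Haskell library, but the idea is easy to sketch in any language whose functions you keep pure. A rough Python analogue of what QuickCheck automates (the property and the random generator here are illustrative, not from the course):

import random

def prop_reverse_twice(xs):
    # property: reversing a list twice gives back the original list
    return list(reversed(list(reversed(xs)))) == xs

# generate 100 random inputs and check the property holds for each
for trial in range(100):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 20))]
    assert prop_reverse_twice(xs), xs
print("OK, passed 100 tests")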
One of the most powerful aspects of functional programming is this absence of state, that is the state of the world (memory, hard disk, user, network, etc.) does not affect the outcome of the
function. Later on in the course you will learn how to deal with these necessary interactions with the world.
– Chris
One Response to Lecture 4
1. If you’re having trouble working out how to define recursive functions, the discussion on p162-165 of the book is very good. As it says, the crucial question to ask is:
What if we were given the value fun xs. How could we define fun (x:xs) from it?
which is explored through a short series of very typical examples.
This entry was posted in Uncategorized. Bookmark the permalink. | {"url":"http://blog.inf.ed.ac.uk/inf1fp12/archives/25","timestamp":"2014-04-16T07:22:50Z","content_type":null,"content_length":"13494","record_id":"<urn:uuid:9167e4e2-55cd-452a-9335-5e9dda956b85>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Doylestown, PA Trigonometry Tutor
Find a Doylestown, PA Trigonometry Tutor
...It is such a shame that English is not rigorously taught at most schools today. I can remember strict teachers drilling proper English usage into my head: diagramming sentences, looking up
words in the dictionary, re-writing papers that my teachers knew I didn't put much effort into. It's no wonder that my English skills exceed those of most of today's English teachers.
23 Subjects: including trigonometry, English, calculus, statistics
...I think it's quite fun, actually! Logic, reasoning, and instinct go into solving every problem. The SAT Reading section is deceptively straightforward, yet subtle.
34 Subjects: including trigonometry, English, physics, calculus
...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair!
14 Subjects: including trigonometry, calculus, physics, geometry
...I have taught Programming I and II at Manor College (CS 210 and CS 211). Both of these courses use C++ as the language. Programming II includes object oriented programming topics. I teach
Linear Algebra (MTH 201) at Burlington County College.
17 Subjects: including trigonometry, calculus, GRE, geometry
...I am not a great jazz/blues/acoustic guitar player, but consider myself a very good hard rock and heavy metal guitar player. (Bands like Metallica, Pantera, Children of Bodom, Megadeth, Led
Zep, Iron Maiden, Black Sabbath, etc). I do know music theory and song writing theory, but to be honest the...
14 Subjects: including trigonometry, calculus, physics, algebra 1 | {"url":"http://www.purplemath.com/doylestown_pa_trigonometry_tutors.php","timestamp":"2014-04-16T10:32:32Z","content_type":null,"content_length":"24109","record_id":"<urn:uuid:7a4f4d2f-51fe-4db2-a9cd-61d783414fac>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
how do you solve, x/2+x/5=x/6-1/3
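The thread itself shows no posted answer, so as a worked illustration (not part of the original page): multiply both sides by 30, the least common denominator,

\[
\frac{x}{2}+\frac{x}{5}=\frac{x}{6}-\frac{1}{3}
\;\Rightarrow\; 15x + 6x = 5x - 10
\;\Rightarrow\; 16x = -10
\;\Rightarrow\; x = -\frac{5}{8}
\]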
| {"url":"http://openstudy.com/updates/50660759e4b0f9e4be286125","timestamp":"2014-04-20T08:46:12Z","content_type":null,"content_length":"63289","record_id":"<urn:uuid:a62e32c2-0cc2-40e6-883f-520c59bf6d5b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volumes by Section
[Animation: Cross Sections of a Pyramid]
Objective: To provide a toolbox of aids for teaching students about the volume by section method. In addition, a collection of animations is included which can be run on a number of platforms.
Level: Calculus courses in high school or college.
Prerequisites: An introduction to Riemann sums and the use of integrals to determine the area of a planar region. Basic ideas about volume.
Platform: A particular platform is not needed. However, there is a collection of animations which can be viewed in a browser or other utility, such as Quicktime, for running animations and/or
Instructor's Notes:
In basic geometry students learn the formulas for the volume of a rectangular solid and the volume of a cylinder. In preparation for determining volumes of more general solids these basic
formulas can be revisited to lay a foundation for determining the volume of solids of revolution.
The term cross section can be used to describe a typical portion of some object or body. In politics we often hear statements like 'a cross section of voters' or pollsters will base election
predictions on 'a cross section of a population' that was interviewed. In the physical sciences, the term cross section is used to describe a slice perpendicular to an axis of an object. In this
context a cross section can be used to represent a typical slice of an object.
For a solid whose cross sections are the same shape and size, the volume of the solid is computed as the product of the area of the cross section times the height of the object. The height can be
a vertical distance or a horizontal distance as illustrated in Figures 1 and 2. Note that the height is measured along the axis of the object (perpendicular to the cross section).
A general cylinder is a solid formed by translating a cross section S along a line or axis that is perpendicular to S. Several cylinders are shown in Figures 3a-d.
[Figures 3a, 3b, 3c, and 3d]
In Figure 3a a metal angle iron is translated to form an 'edge protector' for a workbench. (Click here for pictures of angle irons.) In Figure 3b an ellipse is translated to form an 'elliptical
can'. In Figure 3c a triangle is translated to form a prism. In Figure 3d, a slice of bread is translated to form a 'perfect loaf'. See Figure 4 for a realistic loaf; ignore a few slices at each
end to visualize the corresponding general cylinder.
The volume of a general cylinder is the area of the cross section S that is translated times the distance through which the cross section is translated. If A(S) is the area of the cross section
of a general cylinder which is translated a distance h, then the volume of the resulting solid is given by

V = A(S) * h
For more general solids in which the cross section is the same shape, but not the same size, we use integral calculus to determine the volume. Let A(x) represent the cross-sectional area, which we assume varies continuously with x in [a, b]. We partition [a, b] into subintervals with endpoints a = x_0 < x_1 < x_2 < ... < x_n = b. For each subinterval [x_i, x_(i+1)] we arbitrarily choose a value t_i. The volume of the portion of the solid over the interval [x_i, x_(i+1)] is approximated by the volume of the cylinder with cross-sectional area A(t_i) and thickness Δx_i = x_(i+1) - x_i, that is, by A(t_i)Δx_i. Hence the volume of the entire solid is approximated by a sum of volumes of cylinders given by

V ≈ Σ[i=0 to n-1] A(t_i) Δx_i
This approximating sum of volumes of cylinders is a Riemann sum, and hence, as we take a limit with n, the number of subintervals, becoming arbitrarily large and the maximum length of a subinterval getting arbitrarily small, we have that the volume of the solid is obtained by integrating A(x) from a to b:

V = ∫[a,b] A(x) dx
The animation in Figure 5 shows some cross sections of a solid whose base is in the xy-plane between the x-axis and the given curve over the interval [0, 9].
Figure 5.
The cross sections are semicircles with radii
hence we have that the area of a cross section is
It follows that the volume of this solid is given by
Note: In the animation in Figure 5 we really displayed only wire frame schematics of the shape of a typical cross section. True cross sections have thickness. The pictures are to illustrate how
cross sections would be drawn, rather than showing actual cross sections. This is a graphical convenience in order to generate the animation in a reasonable way.
A small gallery of demos for illustrating the generation of solids with a cross section of fixed shape is available by clicking on SECTION-METHOD-GALLERY. These animations can be used by
instructors in a classroom setting or by students to aid in acquiring a visualization background relating to the generation of solids with cross section of a fixed shape. The demos provide a
variety of animations for some common examples.
Classroom Activities:
□ An informative discussion about using props as teaching aids is in Carol Critchlow's paper 'A prop is worth ten thousand words', The Mathematics Teacher, Vol. 92, No. 1, January 1999. Several
suggestions for motivating volumes of solids with fixed shape cross sections are discussed.
□ For a description of a good hands-on project involving volumes of solids with fixed shape cross sections see Theresa Offerman's paper 'Foam Activities', The Mathematics Teacher, Vol. 92, No. 5, May 1999. The discussion focuses on constructing physical models for such surfaces using florist's foam, which is lightweight, easily shaped, and inexpensive. Accompanying this paper
is a set of worksheets that can be used to provide guidance on how to construct a model.
This demo was constructed by Dr. David R. Hill, Temple University for Demos with Positive Impact. | {"url":"http://www.mathdemos.org/mathdemos/sectionmethod/sectionmethod.html","timestamp":"2014-04-21T02:02:30Z","content_type":null,"content_length":"15739","record_id":"<urn:uuid:48642019-e634-4e5b-b7aa-48e43c0deb5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: Software for Calculating Geometric Transformations
Replies: 3 Last Post: Sep 6, 2011 4:52 AM
Re: Software for Calculating Geometric Transformations
Posted: Sep 5, 2011 11:53 AM
"PB" <p@b.com> wrote in message news:j41ko6$dp9$1@news.datemas.de...
>I am currently doing a course on Computer Graphics Algorithms. This
>involves a lot of matrix transformations, e.g. rotating co-ordinates,
>translating, reflecting etc.
> I am solving the problems on paper using a calculator, but I need some
> software which will help me verify the solution (the calculations are very
> error prone with so many matrix multiplications)
> i.e. I am looking for software where if I input "Reflect point (4,5) on
> line y = 3x + 12",
> it will give me the reflected co-ordinates. Does anyone know of any free
> software which will help me with this?
If your operating system is MS Windows you could write a VBScript program
which would run via Windows Script Host. Just write your script and save it
with a .vbs extension and double-click it to run the script.
Also Java is free but has a much steeper learning curve.
Here is a very simple VBScript example:
'Script Name: SquareRoot-2.vbs
'Author: Jerry Ford
'Created: 11/22/02
'Description: This script demonstrates how to solve square root
'calculations using VBScript's Built-in Sqr() function
'Initialization Section
Option Explicit
Dim UserInput
UserInput = InputBox ("Type a number", "Square Root Calculator")
MsgBox "The square root of " & UserInput & " is " & Sqr(UserInput)
Date Subject Author
9/5/11 Software for Calculating Geometric Transformations PB
9/5/11 Re: Software for Calculating Geometric Transformations Jasen Betts
9/5/11 Re: Software for Calculating Geometric Transformations Charles Hottel
9/6/11 Re: Software for Calculating Geometric Transformations cor | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2293523&messageID=7562588","timestamp":"2014-04-16T08:28:25Z","content_type":null,"content_length":"21171","record_id":"<urn:uuid:1ceba942-4a41-4ebb-855a-5962f485adb4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
transformation of an exponential function
I am not sure what the transformations are in the following... do I follow normal BEDMAS operations? Do I multiply -3 by 5?
y = 2 - 3(5^(x+4)) — that is, x+4 is the exponent
Could you please post the original question word-for-word, because I'm having trouble understanding what the problem is asking.
So I have been given the following chart, and need to calculate what the new points will be:

    y = 5^x      y = -3(5^x)      y = 2 - 3(5^(x+4))
    (-1, 1/5)    ?                ?
    (3, 125)     ?                ?
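One way to see where the points go — assuming the intended functions are y = 5^x, y = -3(5^x) and y = 2 - 3(5^(x+4)) — is to code the two point-mappings, e.g. in Python:

    # A point (x, y) on y = 5**x maps to:
    #   (x, -3*y)        on y = -3*(5**x)         (vertical stretch by 3, reflection)
    #   (x - 4, 2 - 3*y) on y = 2 - 3*(5**(x+4))  (additionally: left 4, up 2)
    def transform(point):
        x, y = point
        return (x, -3 * y), (x - 4, 2 - 3 * y)

    for p in [(-1, 1/5), (3, 125)]:
        print(p, '->', transform(p))
    # (-1, 0.2) -> ((-1, -0.6), (-5, 1.4))    (up to float rounding)
    # (3, 125)  -> ((3, -375), (-1, -373))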
Nov 2010 | {"url":"http://mathhelpforum.com/algebra/165089-transformation-exponential-function.html","timestamp":"2014-04-18T04:07:19Z","content_type":null,"content_length":"34561","record_id":"<urn:uuid:a243deed-6c7a-44c3-9210-b214963a033a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sarah's cracking algorithm
January 1999
Sarah Flannery, a student at Scoil Mhuire Gan Smal in Blarney, Co. Cork, was awarded an Intel Fellows Achievement Award for her project in the Chemical, Physical and Mathematical Category at the 1998
Esat Telecom Young Scientist and Technology awards.
Sarah's project for the exhibition was entitled "Cryptography - The Science of Secrecy". Working with matrices and using her PC at home for experiments, Sarah developed and tested a new cryptography
algorithm for encrypting data. While the work has been around for a little while, media attention has recently skyrocketed!
The initial idea for the algorithm was developed while Sarah was on a placement at Baltimore-Zergo in Dublin, and she has called it the Cayley-Purser algorithm, named after Arthur Cayley, a 19th-century Cambridge mathematician, and Michael Purser, a cryptographer who inspired her.
Sarah's new algorithm is a competitor to the popular RSA algorithm. Where RSA uses exponentiation to encode and decode a message, Cayley-Purser uses matrix multiplication. This means that while RSA
grows as a cubic with the length of the encryption "key", Cayley-Purser grows quadratically. For a typical key length of 1024 bits, this makes Cayley-Purser around 75 times faster.
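To get a feel for where the difference comes from, compare the two primitive operations in a toy Python sketch (the numbers are stand-ins; this is not the actual Cayley-Purser scheme, whose details are not given here):

    n = 2**1024 + 643              # stand-in 1024-bit modulus, not a real key
    m, d = 123456789, 2**1023 + 1  # stand-in message and full-size exponent

    rsa_style = pow(m, d, n)       # repeated squaring: ~1024 multiplications mod n

    A = [[1, 2], [3, 4]]           # stand-in 2x2 matrices
    B = [[5, 6], [7, 8]]
    cp_style = [[sum(A[i][k] * B[k][j] for k in range(2)) % n
                 for j in range(2)] for i in range(2)]   # 8 multiplications mod n

Each k-bit modular multiplication costs on the order of k^2 bit operations, so an exponentiation with a k-bit exponent is cubic in k, while a fixed number of matrix multiplications stays quadratic.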
One downside of Sarah's algorithm is that the encrypted messages it produces are much longer than those produced by RSA. More significantly, however, there is still some possibility that
Cayley-Purser might have some "security holes", making it too easy to "crack the code". Now that Sarah and her algorithm are receiving so much attention, other researchers will begin to explore the
security properties of Cayley-Purser, and see how it stacks up against the dominant RSA.
Some Internet discussion of Sarah's work.
The homepage of RSA. | {"url":"http://plus.maths.org/content/sarahs-cracking-algorithm","timestamp":"2014-04-17T01:01:11Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:82b7bcd4-f1ca-4c0f-8abd-ec560d2d4eca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unnatural postscript hacks
Tim Vaughan and I wrote the following postscript programs, which revolve around doing unusual amounts of processing on printers. I've got to give Tim the credit for introducing me to this fantastic
idea (cheers Tim!).
I don't think it's generally appreciated, but Postscript is a complete programming language in its own right, letting you do pretty much any piece of computation on a postscript printer. As I
understand it, the language is very similar to Forth - both are heavily stack-oriented. Think of programming an old HP calculator and you'll get the idea. For the record, none of the programs below
send any raster graphics to the printer :)
The Programs
A simple raytracer
Ever wanted to benchmark your postscript printer? Well here's a perfect opportunity to do just that! psRay is (mostly) the product of a happy weekend's hacking and shows that you really can write
pretty much arbitrary programs in postscript. PsRay basically implements a ray-tracer in postscript. I have to admit that the inspiration for this partly came from the docs for the excellent free
raytracer povray - the povray team were perverse enough to write a raytracer in their scene description language. It's really a pretty basic proof-of-concept (or perhaps proof-of-craziness?) and
there's plenty of things which could be added like colours and other geometry types, but I'm inclined to keep it simple and clean.
1. Expect older printers to get bogged down processing this one - accidentally sending this program to the wrong (older) printer in the physics department once resulted in a five hour wait (!)
2. You may also find that the printer complains, or just seems to forget the job altogether. I suspect this is a problem with postscript noncompliant hardware - the effect is inconsistent
between printers - but I'm happy to learn about any bugs.
I was recently disappointed to learn that I'm not the first to implement this piece of craziness... Apparently someone got there more than 10 years before me - what is ever new on the net? I've
decided to take comfort from the fact that they just translated theirs, while mine was written from scratch without looking up any references :) Of course this begs the interesting question: how
many independent implementations of postscript raytracers are out there? Let me know if you find any others!
After putting the above up, Evan Danaher has kindly sent me a very nice example of a postscript raytracer. The example is both incredibly obfuscated, and incredibly much faster than my version
above. It was apparently written by a guy named Takashi Hayakawa, which may be found here. Evan also modified it into a colour version which is also quite cool.
1D cellular automata
Everyone loves cellular automata, so here's Tim's great implementation of a one dimensional version. The best thing about this program is that it represents an open-ended time evolution. This
means that a simple change to the code yields an infinite number of pages of output. (OK, well at least in principle. In practice I guess it's until the paper runs out, the sysadmin discovers
your evil ways, or you feel too guilty about the number of dead trees you're wasting.)
An affine IFS fractal generator
An implementation of Barnsley's famous iterated function systems; these are one of my favourite ways of generating fractals because of their extreme versatility. IFS were popularised by Barnsley
in his fantastic book "Fractals Everywhere": Given a picture (= set in R^2), the collage theorem allows us to find a fractal map with an attractor which looks very similar to the original set.
Fascinating stuff. Of course there's plenty of pretty pictures which you can create as well by randomly choosing a set of affine maps.
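If you'd like to roll your own before opening the program above, here's a host-side sketch (in Python, written for illustration — not one of the programs linked here) that plays the 'chaos game' for the classic Barnsley fern IFS and dumps the points as bare-bones PostScript:

    import random

    # The four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f)
    # of the classic Barnsley fern, with their selection probabilities.
    MAPS = [  # (a, b, c, d, e, f, probability)
        ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
    ]
    WEIGHTS = [m[-1] for m in MAPS]

    x = y = 0.0
    with open('fern.ps', 'w') as out:
        out.write('%!PS\n')
        for _ in range(20000):
            a, b, c, d, e, f, _p = random.choices(MAPS, weights=WEIGHTS)[0]
            x, y = a * x + b * y + e, c * x + d * y + f
            # scale/translate onto the page and draw a tiny filled dot
            out.write('%.2f %.2f 0.3 0 360 arc fill\n' % (50 * x + 300, 50 * y + 100))
        out.write('showpage\n')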
Bubbly Mandelbrot - Tim's original!
Here's the image which used to reside on Tim's wall and started this whole thing off... Points in the Mandelbrot set are rendered with circles.
The Julia set
Well, if the Mandelbrot set is hanging around, we'd better have a Julia set as well. This program uses inverse Julia iteration to converge to the Julia set, and plots the points as it goes. For
those who are interested there's a deterministic recursive algorithm or a Monte Carlo version though both produce visually the same output. This program brings up some interesting artifacts due
to the limited floating point precision specified in the postscript spec. (If I recall rightly we only have 6 or so digits of precision to play with.)
The Henon attractor
Here's Tim's version of the Henon attractor. I'm afraid I don't know much about this one, but Wikipedia tells me it's a simplified model of a Poincare section of the Lorenz attractor. More
interesting, it depends on some parameters which may be adjusted to change the set all the way from the disconnected Cantor set at one extreme to a smooth curve at another. Wow.
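For reference, the map itself is a two-liner to iterate — a Python sketch (the rendering is left out):

    # Henon map x' = 1 - a*x^2 + y, y' = b*x, with the classical values a = 1.4, b = 0.3.
    a, b = 1.4, 0.3
    x, y = 0.0, 0.0
    orbit = []
    for i in range(10000):
        x, y = 1 - a * x * x + y, b * x
        if i > 100:                # skip the transient before the orbit settles
            orbit.append((x, y))
    # 'orbit' now samples the attractor; emit the points as PostScript dots as above.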
Instructions for M$ users...
For those of you who are unlucky enough to be burdened with the use of M$ windo$e, you have my pity. Oh, and if you don't own a postscript printer, you should download and install gsview and
ghostscript if you want to look at the pictures above. Then just open up the files in gsview... Of course, the enlightened among us will just use gv for a preview ;) | {"url":"http://www.physics.uq.edu.au/people/foster/postscript.html","timestamp":"2014-04-16T16:41:21Z","content_type":null,"content_length":"8140","record_id":"<urn:uuid:cb49754b-49d2-4411-9ade-bbb951a27cf4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
More complicated questions about maxima and minima, and some closures of NP, Theoret
Results 1 - 10 of 58
, 1993
"... Abduction is an important form of nonmonotonic reasoning allowing one to find explanations for certain symptoms or manifestations. When the application domain is described by a logical theory,
we speak about logic-based abduction. Candidates for abductive explanations are usually subjected to minima ..."
Cited by 163 (26 self)
Add to MetaCart
Abduction is an important form of nonmonotonic reasoning allowing one to find explanations for certain symptoms or manifestations. When the application domain is described by a logical theory, we
speak about logic-based abduction. Candidates for abductive explanations are usually subjected to minimality criteria such as subset-minimality, minimal cardinality, minimal weight, or minimality
under prioritization of individual hypotheses. This paper presents a comprehensive complexity analysis of relevant decision and search problems related to abduction on propositional theories. Our
results indicate that abduction is harder than deduction. In particular, we show that with the most basic forms of abduction the relevant decision problems are complete for complexity classes at the
second level of the polynomial hierarchy, while the use of prioritization raises the complexity to the third level in certain cases.
- ARTIFICIAL INTELLIGENCE , 1998
"... In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an
ordering on the program rules is used to express preferences. We show how this ordering can be used to de ..."
Cited by 132 (17 self)
Add to MetaCart
In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an
ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We
define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at
least one answer set. Adding priorities
- Theoretical Computer Science , 1993
"... Circumscription and the closed world assumption with its variants are well-known nonmonotonic techniques for reasoning with incomplete knowledge. Their complexity in the propositional case has
been studied in detail for fragments of propositional logic. One open problem is whether the deduction prob ..."
Cited by 99 (22 self)
Add to MetaCart
Circumscription and the closed world assumption with its variants are well-known nonmonotonic techniques for reasoning with incomplete knowledge. Their complexity in the propositional case has been
studied in detail for fragments of propositional logic. One open problem is whether the deduction problem for arbitrary propositional theories under the extended closed world assumption or under
circumscription is $\Pi^P_2$-complete, i.e., complete for a class of the second level of the polynomial hierarchy. We answer this question by proving these problems $\Pi^P_2$-complete, and we show
how this result applies to other variants of closed world reasoning.
- Journal of the ACM , 1997
"... Abstract. In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters ’ preferences becomes a Condorcet winner—a candidate who
beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower b ..."
Cited by 54 (13 self)
Add to MetaCart
Abstract. In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters ’ preferences becomes a Condorcet winner—a candidate who beats all
other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound—NP-hardness—on the computational complexity of determining the election winner in Carroll’s
system. We provide a stronger lower bound and an upper bound that matches our lower bound. In particular, determining the winner in Carroll's system is complete for parallel access to NP, that is, it is complete for $\Theta_2^p$, for which it becomes the most natural complete problem known.
, 2002
"... We show that polynomial time truth-table reducibility via Boolean circuits to SAT is the same as logspace truth-table reducibility via Boolean formulas to SAT and the same as logspace Turing
reducibility to SAT . In addition, we prove that a constant number of rounds of parallel queries to SAT i ..."
Cited by 51 (2 self)
Add to MetaCart
We show that polynomial time truth-table reducibility via Boolean circuits to SAT is the same as logspace truth-table reducibility via Boolean formulas to SAT and the same as logspace Turing
reducibility to SAT . In addition, we prove that a constant number of rounds of parallel queries to SAT is equivalent to one round of parallel queries.
, 1998
"... We prove that the Minimum Equivalent DNF problem is \Sigma p 2 -complete, resolving a conjecture due to Stockmeyer. The proof involves as an intermediate step a variant of a related problem in
logic minimization, namely, that of finding the shortest implicant of a Boolean function. We also obtain ..."
Cited by 42 (4 self)
Add to MetaCart
We prove that the Minimum Equivalent DNF problem is $\Sigma_2^p$-complete, resolving a conjecture due to Stockmeyer. The proof involves as an intermediate step a variant of a related problem in logic minimization, namely, that of finding the shortest implicant of a Boolean function. We also obtain certain results concerning the complexity of the Shortest Implicant problem that may be of independent interest. When the input is a formula, the Shortest Implicant problem is $\Sigma_2^p$-complete, and $\Sigma_2^p$-hard to approximate to within an $n^{1/2-\epsilon}$ factor. When the input is a circuit, approximation is $\Sigma_2^p$-hard to within an $n^{1-\epsilon}$ factor. However, when the input is a DNF formula, the Shortest Implicant problem cannot be $\Sigma_2^p$-complete unless $\Sigma_2^p = \mathrm{NP}^{\mathrm{NP}[\log^2 n]}$. 1. Introduction: Two-level (DNF) logic minimization is a central practical problem in logic synthesis and also one of the more natural problems in the polynomial hierarchy....
- Information and Computation , 1994
"... : We investigate here NP optimization problems from a logical definability standpoint. We show that the class of optimization problems whose optimum is definable using first-order formulae
coincides with the class of polynomially bounded NP optimization problems on finite structures. After this, we ..."
Cited by 40 (2 self)
Add to MetaCart
: We investigate here NP optimization problems from a logical definability standpoint. We show that the class of optimization problems whose optimum is definable using first-order formulae coincides
with the class of polynomially bounded NP optimization problems on finite structures. After this, we analyze the relative expressive power of various classes of optimization problems that arise in
this framework. Some of our results show that logical definability has different implications for NP maximization problems than it has for NP minimization problems, in terms of both expressive power
and approximation properties. To appear in Information and Computation. (Research partially supported by NSF Grants CCR-8905038 and CCR-9108631; e-mail addresses: kolaitis@cse.ucsc.edu, thakur@cse.ucsc.edu; supersedes Technical Report UCSC-CRL-90-48.) 1. Introduction and Summary of Results: It is well known that optimization problems had a major influence on the development of the theory of NP-completeness...
, 1996
"... If a new piece of information contradicts our previously held beliefs, we have to revise our beliefs. This problem of belief revision arises in a number of areas in Computer Science and
Artificial Intelligence, e.g., in updating logical database, in hypothetical reasoning, and in machine learning. M ..."
Cited by 38 (0 self)
Add to MetaCart
If a new piece of information contradicts our previously held beliefs, we have to revise our beliefs. This problem of belief revision arises in a number of areas in Computer Science and Artificial
Intelligence, e.g., in updating logical database, in hypothetical reasoning, and in machine learning. Most of the research in this area is influenced by work in philosophical logic, in particular by
Gardenfors and his colleagues, who developed the theory of belief revision. Here we will focus on the computational aspects of this theory, surveying results that address the issue of the
computational complexity of belief revision.
, 2002
"... Temporal logic. Logical formalisms for reasoning about time and the timing of events appear in several fields: physics, philosophy, linguistics, etc. Not surprisingly, they also appear in
computer science, a field where logic is ubiquitous. Here temporal logics are used in automated reasoning, in pl ..."
Cited by 32 (0 self)
Add to MetaCart
Temporal logic. Logical formalisms for reasoning about time and the timing of events appear in several fields: physics, philosophy, linguistics, etc. Not surprisingly, they also appear in computer
science, a field where logic is ubiquitous. Here temporal logics are used in automated reasoning, in planning, in semantics of programming languages, in artificial intelligence, etc. There is one
area of computer science where temporal logic has been unusually successful: the specification and verification of programs and systems, an area we shall just call programming for simplicity. In
today's curricula, thousands of programmers first learn about temporal logic in a course on model checking!
- In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI-2005 , 2005
"... and complexity ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=240551","timestamp":"2014-04-20T13:44:24Z","content_type":null,"content_length":"36472","record_id":"<urn:uuid:ccbde2d8-f4e8-4439-9850-08a903f61f57>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: RE: matrix of significance stars
From "Adam Seth Litwin" <aslitwin@MIT.EDU>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: RE: matrix of significance stars
Date Fri, 28 Jul 2006 13:29:57 -0400
Thanks, Roger. I had another thought.
If estimation commands create a vector of p-values (one for each independent
variable), then I could just use those to generate zero, one, two, or three
stars for each independent variable.
That would be easy...except that I don't see an e(p) matrix (not the scalar
for the entire model) when I type -ereturn list- or -return list- after
estimating a model. Surely, the p-values must be accessible, no? But how?
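(The star-assignment logic itself is language-independent; here is a sketch of it in Python rather than Stata, with the conventional cutoffs, just to fix ideas:

    from scipy import stats

    def stars(b, se, df, cutoffs=(0.05, 0.01, 0.001)):
        # Map a coefficient and its standard error to significance stars.
        p = 2 * stats.t.sf(abs(b / se), df)    # two-sided p-value
        return '*' * sum(p < c for c in cutoffs)

    print(stars(1.20, 0.40, df=100))           # p is about 0.003, so '**'

The open question below is only where Stata keeps the p-values.)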
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Newson, Roger B
Sent: Friday, July 28, 2006 12:47 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: matrix of significance stars
The Stata matrix language (documented online by typing -whelp matrix-)
supports only numeric matrices, not string matrices. (This is in
contrast to the matrix language Mata, used in low-level programming,
which does support string matrices.) Therefore, we can only produce a
string variable of stars (as in -parmest-), and not a Stata string
matrix of stars.
You might like to write a Mata program to produce a Mata string matrix
of stars. However, I don't know what you would want to do with such a
matrix, once it has been produced.
I hope this helps.
Roger Newson
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: r.newson@imperial.ac.uk
Opinions expressed are those of the author, not of the institution.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Adam Seth
Sent: 28 July 2006 14:15
To: statalist@hsphsun2.harvard.edu
Subject: st: matrix of significance stars
Hello, everyone.
I am trying to figure out the easiest way to build a matrix of
stars (or other symbols) after either an estimation command or, better,
after building a table of estimates. I figured there would be something
like e(stars), especially since the estimates table can display them so
readily. But, short of -parmest-, which makes me build a whole new
dataset, I'm not sure how to proceed.
Thank you for your help. adam
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-07/msg00837.html","timestamp":"2014-04-18T21:39:23Z","content_type":null,"content_length":"9191","record_id":"<urn:uuid:616df0f1-9732-406e-95dc-568da5be5df5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
Latest 3i-infotech Aptitude Question SOLUTION: A certain number of men can finish a piece of work in 10 days. If, however, there were 10 fewer men, it would take 10 days more for the work to be finished.
How many men were there originally? (a) 110 men (b) 130 men (c) 100 men (d) none of these | {"url":"http://www.m4maths.com/2016-In-simple-interest-what-sum-amounts-of-Rs-1120---in-4-years-and-Rs-1200---in-5-years.html","timestamp":"2014-04-18T00:53:25Z","content_type":null,"content_length":"93826","record_id":"<urn:uuid:eb6f3279-533e-4a53-8067-a465ceaa3b3d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thom space
From Encyclopedia of Mathematics
A topological space associated with a vector (or sphere) bundle or spherical fibration.
Let $\xi$ be a vector (or sphere) bundle over a base $B$. The Thom space $T\xi$ of $\xi$ is the mapping cone of the projection $S(\xi) \to B$ of the associated sphere bundle; equivalently, $T\xi$ is the quotient $D(\xi)/S(\xi)$ of the disc bundle of $\xi$ by its sphere bundle.
The role of the Thom space is to allow one to reduce a series of geometric problems to problems in homotopic topology, and hence to algebraic problems. Thus, the problem of computing a bordism group reduces to the problem of computing a homotopy group of a Thom space (cf. [1], [2], and also Cobordism). The problem of classifying smooth manifolds reduces to the study of the homotopy properties of the Thom space of the normal bundle (cf. [3]). The problem of realizing cycles by submanifolds (cf. Steenrod problem) reduces to the study of the cohomology of Thom spaces (cf. Transversal mapping; Tubular neighbourhood).
The construction of Thom spaces is natural on the category of bundles: any morphism of (vector) bundles induces a mapping of the corresponding Thom spaces. For vector bundles $\xi$ over $X$ and $\eta$ over $Y$ one has $T(\xi \times \eta) = T\xi \wedge T\eta$ (cf. [4]). In particular, for the trivial one-dimensional bundle $\theta$ one has $T(\xi \oplus \theta) = S\,T\xi$, where $S$ is the suspension operator, so that the Thom spaces of the bundles $\xi \oplus \theta^n$, $n = 1, 2, \ldots$, form a spectrum (cf. Thom spectrum).
For a multiplicative generalized cohomology theory $E$ (cf. Generalized cohomology theories) there arises a pairing of $E^*(X)$ with the reduced theory $\widetilde{E}^*(T\xi)$, making $\widetilde{E}^*(T\xi)$ a module over the ring $E^*(X)$; for an $E$-orientable bundle this leads to the Thom isomorphism (cf. Thom isomorphism).
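Explicitly, for an $n$-dimensional $E$-oriented vector bundle $\xi$ over $X$, the standard statement reads: multiplication by the Thom class $u_\xi \in \widetilde{E}^n(T\xi)$ gives an isomorphism

$$\varphi \colon E^k(X) \stackrel{\cong}{\longrightarrow} \widetilde{E}^{\,k+n}(T\xi), \qquad \varphi(x) = x \smile u_\xi.$$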
The following Atiyah duality theorem is important and often used (cf. [4], [5]): if $M$ is a closed smooth manifold and $\nu$ is the normal bundle of an embedding of $M$ in a Euclidean space of sufficiently high dimension, then the Thom space $T\nu$ is Spanier-Whitehead dual to the disjoint union $M^+$ of $M$ and a point.
[1] R. Thom, "Quelques propriétés globales des variétés différentiables" Comm. Math. Helv. , 28 (1954) pp. 17–86
[2] R.E. Stong, "Notes on cobordism theory" , Princeton Univ. Press (1968)
[3] W.B. Browder, "Surgery on simply-connected manifolds" , Springer (1972)
[4] D. Husemoller, "Fibre bundles" , McGraw-Hill (1966)
[5] M. Atiyah, "Thom complexes" Proc. London Math. Soc. , 11 (1961) pp. 291–310
[a1] J. Dieudonné, "A history of algebraic and differential topology 1900–1960" , Birkhäuser (1989)
How to Cite This Entry:
Thom space. Yu.B. Rudyak (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Thom_space&oldid=17782
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Thom_space","timestamp":"2014-04-21T04:38:53Z","content_type":null,"content_length":"25523","record_id":"<urn:uuid:3d2e00ec-d8e8-4d29-b37e-fca5e94f2854>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math summer assignment questions. Please help! - WyzAnt Answers
Adding & Subtracting w/ Different denominators:
1. 3/(x-5) + 7/(5-x)
2. (30-4x)/(x^2-9) + 7/(x+3)
3. 5x/(5x^2-125) + (5x-5)/(x^2-6x+5)
4. 2/(x-2) - 10/(x^2+x-6)
5. 8/(6x-18) - 14/(6-2x)
6. 2/(x^2-4) - 5/(x^2-3x-10)
7. (x+1)/(3x^2-2x-1) + (x-1)/(3x^2+4x+1)
8. 2/a - 3/(a+1) + 5/(a-1)
Perform the indicated operations:
1. (2z^2-3z+6)/(z^2-1) - (z^2-5z+9)/(z^2-1)
2. 3/(6x^2-4x) - (x-2)/(9x-6)
3. (2x+1)/(6x^2+3x+1) + (2x-1)/(6x^2-x-1)
4. (ab)^2/(a+b)^2 * (a+b)^3/(ab)^3
5. 8a/(2a^2+4a+2) - (3a-3)/(a^2-1)
6. (3a^2-2a-16)/(2a^2+3a-2) * (6a+16)/(9a^2-64)
7. (a-3)/(a^3+8) - 2/(a+2) - (a-3)/(a^2-2a+4)
2 Answers
2) (30-4x)/(x^2-9) + 7/(x+3)
(30-4x)/((x-3)(x+3)) + 7/(x+3) (I factored x^2-9)
(30-4x)/((x-3)(x+3)) + 7(x-3)/((x+3)(x-3))
(30-4x + 7(x-3))/((x+3)(x-3))
The numerator simplifies: 30 - 4x + 7x - 21 = 3x + 9 = 3(x+3), so the sum reduces to 3/(x-3).
4) (ab)^2/(a+b)^2 * (a+b)^3/(ab)^3
(ab)^2 goes into (ab)^2 1 time, (ab)^2 goes into (ab)^3 (ab) times
so we have so far 1/(a+b)^2 * (a+b)^3/(ab)
now divide the numerator and the denominator by (a+b)^2
1/1 * (a+b)/(ab)
answer: (a+b)/(ab)
5) I'll factor the denominators for you.
6) I'll factor all the terms for you and you finish the problem!
[(3a-8)(a+2)]/[(2a-1)(a+2)] * [2(3a+8)]/[(3a-8)(3a+8)]
Simplify. Look at the common factors in the terms.
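If you want to check answers like these yourself, a computer algebra system will do it; here is a quick sketch with SymPy in Python:

    from sympy import symbols, simplify, factor

    x = symbols('x')

    # Problem 1: since 5 - x = -(x - 5), the whole sum collapses.
    print(simplify(3/(x - 5) + 7/(5 - x)))   # -4/(x - 5)

    # Problem 2: finish the numerator from the answer above.
    print(factor(30 - 4*x + 7*(x - 3)))      # 3*(x + 3), so the sum is 3/(x - 3)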
Hi Page,
Unfortunately I'm kind of pressed for time this morning; however, adding and subtracting with different denominators is really a simple process once you get used to the concept. In order to add two different denominators, you multiply each side of the problem by the opposite side's denominator over itself — the same polynomial in both numerator and denominator, so you are really multiplying by 1.
To make sense of that, since it is worded poorly, let's look at problem 1.
3/(x-5) + 7/(5-x)
In order to add these two, the next operation you'll need to do is cross multiply, which looks like this:
[3(5-x)]/[(x-5)(5-x)] + [7(x-5)]/[(5-x)(x-5)]
The trick is to get a common denominator, and this is done through the multiplication.
From there, technically this answer is correct, but you will need to simplify by distributing the top and then adding the two fractions together. Make sure you leave the denominator, typically, as is, so that you can see if it won't cancel nicely. (Here, since 5-x = -(x-5), everything reduces to -4/(x-5).)
sorry I can't actually do a lot of the problems for you, but if hope you have a better understanding now! Good luck and hopefully I can get back to you on these later this weekend! | {"url":"http://www.wyzant.com/resources/answers/15485/math_summer_assignment_questions_please_help","timestamp":"2014-04-20T02:12:18Z","content_type":null,"content_length":"42076","record_id":"<urn:uuid:8948b2a6-aabd-43c3-a65d-381996aada22>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Concerning charts on a manifold
Actually, using the fact that n is the Lebesgue covering dimension, an n-manifold has, in each component, a refinement of each cover in which each point lies in at most n+1 charts.
To give a proof of the previous claim that a manifold M covered by a union of (at least two nonempty) pairwise-disjoint open sets must be disconnected (I think that disjointness of the closures is not necessary):
Let {V_i : i in I} cover M, with V_j ∩ V_k = ∅ for j ≠ k.
Then any V_k is open in M, but also closed in it, since its complement in M is the union of the remaining open sets: M − V_k = ∪_{i ≠ k} V_i. | {"url":"http://www.physicsforums.com/showthread.php?p=3397495","timestamp":"2014-04-21T04:44:52Z","content_type":null,"content_length":"35699","record_id":"<urn:uuid:6f60fd30-0315-4a17-8a53-9182d3448f1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving ptolemy right: The environment abstraction framework for model checking concurrent systems
Results 1 - 10 of 20
"... Abstract. We present new algorithms for automatically verifying properties of programs with an unbounded number of threads. Our algorithms are based on a new abstract domain whose elements
represent thread-quantified invariants: i.e., invariants satified by all threads. We exploit existing abstracti ..."
Cited by 21 (3 self)
Add to MetaCart
Abstract. We present new algorithms for automatically verifying properties of programs with an unbounded number of threads. Our algorithms are based on a new abstract domain whose elements represent
thread-quantified invariants: i.e., invariants satisfied by all threads. We exploit existing abstractions to represent the invariants. Thus, our technique lifts existing abstractions by wrapping
universal quantification around elements of the base abstract domain. Such abstractions are effective because they are thread-modular: e.g., they can capture correlations between the local variables
of the same thread as well as correlations between the local variables of a thread and global variables, but forget correlations between the states of distinct threads. (The exact nature of the
abstraction, of course, depends on the base abstraction lifted in this style.) We present techniques for computing sound transformers for the new abstraction by using transformers of the base
abstract domain. We illustrate our technique in this paper by instantiating it to the Boolean Heap abstraction, producing a Quantified Boolean Heap abstraction. We have implemented an instantiation
of our technique with Canonical Abstraction as the base abstraction and used it to successfully verify linearizability of data-structures in the presence of an unbounded number of threads. 1
"... Abstract. We describe Deskcheck, a parametric static analyzer that is able to establish properties of programs that manipulate dynamically allocated memory, arrays, and integers. Deskcheck can
verify quantified invariants over mixed abstract domains, e.g., heap and numeric domains. These domains nee ..."
Cited by 8 (2 self)
Add to MetaCart
Abstract. We describe Deskcheck, a parametric static analyzer that is able to establish properties of programs that manipulate dynamically allocated memory, arrays, and integers. Deskcheck can verify
quantified invariants over mixed abstract domains, e.g., heap and numeric domains. These domains need only minor extensions to work with our domain combination framework. The technique used for
managing the communication between domains is reminiscent of the Nelson-Oppen technique for combining decision procedures, in that the two domains share a common predicate language to exchange shared
facts. However, whereas the Nelson-Oppen technique is limited to a common predicate language of shared equalities, the technique described in this paper uses a common predicate language in which
shared facts can be quantified predicates expressed in first-order logic with transitive closure. We explain how we used Deskcheck to establish memory safety of the
"... Abstract—A message flow is a sequence of messages sent among processors during the execution of a protocol, usually illustrated with something like a message sequence chart. Protocol designers
use message flows to describe and reason about their protocols. We show how to derive high-quality invarian ..."
Cited by 6 (4 self)
Add to MetaCart
Abstract—A message flow is a sequence of messages sent among processors during the execution of a protocol, usually illustrated with something like a message sequence chart. Protocol designers use
message flows to describe and reason about their protocols. We show how to derive high-quality invariants from message flows and use these invariants to accelerate a state-of-the-art method for
parameterized protocol verification called the CMP method. The CMP method works by iteratively strengthening and abstracting a protocol. The labor-intensive portion of the method is finding the
protocol invariants needed for each iteration. We provide a new analysis of the CMP method proving it works with any sound abstraction procedure. This facilitates the use of a new abstraction
procedure tailored to our protocol invariants in the CMP method. Our experience is that message-flow derived invariants get to the heart of protocol correctness in the sense that only couple of
additional invariants are needed for the CMP method to converge. I.
"... Verifying correctness properties of parameterized systems is a long-standing problem. The challenge lies in the lack of guarantee that the property is satisfied for all instances of the
parameterized system. Existing work on addressing this challenge aims to reduce this problem to checking the prope ..."
Cited by 5 (1 self)
Add to MetaCart
Verifying correctness properties of parameterized systems is a long-standing problem. The challenge lies in the lack of guarantee that the property is satisfied for all instances of the parameterized
system. Existing work on addressing this challenge aims to reduce this problem to checking the properties on smaller systems with a bound on the parameter referred to as the cut-off. A property
satisfied on the system with the cut-off ensures that it is satisfied for systems with any larger parameter. The major problem with these techniques is that they only work for certain classes of
systems with a specific communication topology such as ring topology, thus leaving other interesting classes of systems unverified. We contribute an automated technique for finding the cut-off of the
parameterized system that works for systems defined with any topology. Given the specification and the topology of the system, our technique is able to automatically generate the cut-off specific to
this system. We prove the soundness of our technique and demonstrate its effectiveness and practicality by applying it to several canonical examples where in some cases, our technique obtains smaller
cut-off values than those presented in the existing literature.
- IN: APLAS 2009 , 2009
"... We present a new technique for speeding up static analysis of (shared memory) concurrent programs. We focus on analyses that compute thread correlations: such analyses infer invariants that
capture correlations between the local states of different threads (as well as the global state). Such invari ..."
Cited by 3 (0 self)
Add to MetaCart
We present a new technique for speeding up static analysis of (shared memory) concurrent programs. We focus on analyses that compute thread correlations: such analyses infer invariants that capture
correlations between the local states of different threads (as well as the global state). Such invariants are required for verifying many natural properties of concurrent programs. Tracking
correlations between different thread states, however, is very expensive. A significant factor that makes such analysis expensive is the cost of applying abstract transformers. In this paper, we
introduce a technique that exploits the notion of footprints and memoization to compute individual abstract transformers more efficiently. We have implemented this technique in our concurrent shape
analysis framework. We have used this implementation to prove properties of fine-grained concurrent programs with a shared, mutable, heap in the presence of an unbounded number of objects and
threads. The properties we verified include memory safety, data structure invariants, partial correctness, and linearizability. Our empirical evaluation shows that our new technique reduces the
analysis time significantly (e.g., by a factor of 35 in one case).
"... Abstract. Verifying that a parameterized system satisfies certain desired properties amounts to verifying an infinite family of the system instances. This problem is undecidable in general, and
as such a number of sound and incomplete techniques have been proposed to address it. Existing techniques ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. Verifying that a parameterized system satisfies certain desired properties amounts to verifying an infinite family of the system instances. This problem is undecidable in general, and as
such a number of sound and incomplete techniques have been proposed to address it. Existing techniques typically focus on parameterized systems with a single parameter, (i.e., on systems where the
number of processes of exactly one type is dependent on the parameter); however, many systems in practice are multi-parameterized, where multiple parameters are used to specify the number of
different types of processes in the system. In this work, we present an automatic verification technique for multi-parameterized systems, prove its soundness and show that it can be applied to
systems irrespective of their communication topology. We present a prototype realization of our technique in our tool Golok, and demonstrate its practical applicability using a number of
multi-parameterized systems. 1
"... A key problem in verification of multi-agent systems by model checking concerns the fact that the state-space of the system grows exponentially with the number of agents present. This often
makes practical model checking unfeasible whenever the system contains more than a few agents. In this paper w ..."
Cited by 2 (2 self)
Add to MetaCart
A key problem in verification of multi-agent systems by model checking concerns the fact that the state-space of the system grows exponentially with the number of agents present. This often makes
practical model checking unfeasible whenever the system contains more than a few agents. In this paper we put forward a technique to establish a cutoff result, thereby showing that systems with an
arbitrary number of agents can be verified by checking a single system consisting of a number of agents equal to the cutoff of the system. While this problem is undecidable in general, we here define
a class of parameterised interpreted systems and a parameterised temporal-epistemic logic for which the result can be shown. We exemplify the theoretical results on a robotic example and present an
implementation of the technique as an extension of mcmas, an open-source model checker for multi-agent systems.
, 2009
"... Abstract. We present a new analysis for proving properties of finegrained concurrent programs with a shared, mutable, heap in the presence of an unbounded number of objects and threads. The
properties we address include memory safety, data structure invariants, partial correctness, and linearizabili ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. We present a new analysis for proving properties of finegrained concurrent programs with a shared, mutable, heap in the presence of an unbounded number of objects and threads. The
properties we address include memory safety, data structure invariants, partial correctness, and linearizability. Our techniques enable successful verification of programs that were not be handled by
previous concurrent shape analysis algorithms. We present our techniques in an abstract framework we call thread-correlation analysis. Thread-correlation analysis infers invariants that capture the
correlations between the local states of different threads and the global state (content of the heap). However, inferring such invariants is non-trivial, since it requires reasoning about a quadratic
number of interactions between threads. We present techniques for computing the required abstract transformers efficiently. In our experiments, these techniques were able to reduce the time of the analysis by a factor of 35.
"... Abstract. We consider verification of safety properties for parameterized distributed protocols. Such a protocol consists of an arbitrary number of (infinite-state) processes that communicate
asynchronously over FIFO channels. The aim is to perform parameterized verification, i.e., showing correctne ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. We consider verification of safety properties for parameterized distributed protocols. Such a protocol consists of an arbitrary number of (infinite-state) processes that communicate
asynchronously over FIFO channels. The aim is to perform parameterized verification, i.e., showing correctness regardless of the number of processes inside the system. We consider two non-trivial
case studies: the distributed Lamport and Ricart-Agrawala mutual exclusion protocols. We adapt the method of monotonic abstraction that considers an over-approximation of the system, in which the
behavior is monotonic with respect to a given pre-order on the set of configurations. We report on an implementation which is able to fully automatically verify mutual exclusion for both protocols. 1
"... Abstract. Many real component-based systems, so called Control-User systems, are composed of a stable part (control component) and a number of dynamic components of the same type (user
components). Models of these systems are parametrised by the number of user components and thus potentially infinit ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. Many real component-based systems, so called Control-User systems, are composed of a stable part (control component) and a number of dynamic components of the same type (user components).
Models of these systems are parametrised by the number of user components and thus potentially infinite. Model checking techniques can be used to verify only specific instances of the systems. This
paper presents an algorithmic technique for verification of safety interaction properties of Control-User systems. The core of our verification method is a computation of a cutoff. If the system is
proved to be correct for every number of user components lower than the cutoff then it is correct for any number of users. We present an on-the-fly model checking algorithm which integrates
computation of a cutoff with the verification itself. Symmetry reduction can be applied during the verification to tackle the state explosion of the model. Applying the algorithm we verify models of
several previously published component-based systems. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=5452247","timestamp":"2014-04-18T06:32:45Z","content_type":null,"content_length":"40522","record_id":"<urn:uuid:d46ec533-60e5-4da7-8304-c43491bdba4c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
What advanced Area of Mathematics can be delved into with only basic Calculus and Linear Algebra
Hello Mathoverflow Community,
I would really appreciate some advice on this:
All I know is basic calculus and basic linear algebra; I want to start learning more advanced material on my own while taking more advanced calculus/linear algebra courses.
Is there any area of mathematics which I can delve into with only this much knowledge (e.g., topology, number theory, etc.), or should I instead fully focus on my courses for now?
Thank you very much,
Thank you so much for all of your comments
Yes, I am a freshman in university, and by basic I meant Calculus I, II, and (now) III, and I'm in a Linear Algebra I course. I find myself really good at calculus; I pick up new topics really fast.
However, I'm still improving in Linear Algebra.
I have picked up a couple of books on proofs, and I seem to be doing well with them. However, I exposed myself to an "Elementary Number Theory" book and I felt like a bit of background was missing (especially in understanding advanced proofs).
Thank you once again for your amazing advice and comments; it really means a lot to me to get such advice at this stage.
You should fully focus on your courses right now. If you find that you have spare time that you want to spend doing mathematics, you can pick up books and learn subjects in the order in which you
would learn them in your course. Basically, the reason things are taught in the order they are taught in is because many very clever and knowledgeable people thought about the best order to teach
things in and the result is the modern university curriculum. You will not do yourself a favour by skipping, say, introductory abstract algebra and trying to learn class field theory instead. –
Alex B. Dec 16 '10 at 5:03
One more thing: you shouldn't stress too much about getting to the forefront of modern research as quickly as possible. That would only slow you down in the long run. Just learn at a pace at
which you enjoy the maths. Not only will that make it easier for you to make an informed decision about what you actually want to do, you will also get to the frontiers much more quickly than might
seem to you at the moment. In particular, if you do decide to read Artin's Algebra, don't try to swallow it. Do all the exercises and make sure you feel comfortable with the concepts. – Alex B.
Dec 16 '10 at 5:10
First, I agree with Thierry. Second, you can start working on open problems in, say, combinatorial probability right away (Take a graph on $n$ vertices. Do $100n$ steps randomly. Show that the
probability to visit all vertices is exponentially small in $n$ for large $n$. Nobody knows how to do it, and you don't need any advanced knowledge to start tinkering with it). Third, make sure
that, while you are getting your education, you do research in addition to regular studies, not instead of them. Fourth, talk to your professors about research options (works only if you pass all
exams with A's) – fedja Dec 16 '10 at 5:37
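To get a feel for fedja's problem before attacking it, you can simulate; a rough Python sketch (the cycle graph just stands in for 'a graph on $n$ vertices'):

    import random

    def covers(n, steps):
        # One random walk of `steps` steps on the n-cycle; True if every vertex is visited.
        pos, seen = 0, {0}
        for _ in range(steps):
            pos = (pos + random.choice((-1, 1))) % n
            seen.add(pos)
        return len(seen) == n

    n, trials = 30, 2000
    print(sum(covers(n, 100 * n) for _ in range(trials)) / trials)
    # estimated probability of covering within 100n steps; increase n to watch it fall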
@Alex: I'm very uncomfortable with that appeal to authority. Just because the OP's university does things in some particular way doesn't mean that there aren't also very smart people who do things
another way. For instance, if the OP is at a school with no sort of honors math / math-for-math-majors specific course, then your advice would probably be detrimental to the OP's mathematical
development. – Harry Gindi Dec 16 '10 at 6:56
I also disagree with Alex. Many people, including myself, learn better by pushing ahead to see what lies ahead of us, and then going back to fill in the earlier material when we understand why it
matters. You shouldn't push so far ahead that you lose track of your current courses but, if you are solving almost all the problems on your problem sets, and you are excited about getting a
glimpse of higher math, go for it! – David Speyer Dec 16 '10 at 15:33
10 Answers
Stillwell's Naive Lie theory was essentially written as an answer to this question. I quote from the introduction:
It seems to have been decided that undergraduate mathematics today rests on two foundations: calculus and linear algebra. These may not be the best foundations for, say, number theory
or combinatorics, but they serve quite well for undergraduate analysis and several varieties of undergraduate algebra and geometry. The really perfect sequel to calculus and linear
algebra, however, would be a blend of the two — a subject in which calculus throws light on linear algebra and vice versa. Look no further! This perfect blend of calculus and linear
algebra is Lie theory (named to honor the Norwegian mathematician Sophus Lie — pronounced "Lee").
I'm no expert here, but it would seem to me that calculus and linear algebra are an excellent foundation for combinatorics, especially at the advanced undergraduate level. I took exactly
one course on combinatorics, from the great Laszlo Babai, and the tools we used were indeed calculus (e.g. knowledge of asymptotics of functions and the ability to optimize certain
constructions) and linear algebra (in very clever ways as in the book on the subject by Babai and Frankl). And I don't view the fact that you could probably get away with even less than
this as detracting from my assertion. – Pete L. Clark Dec 16 '10 at 8:29
To put the assertion of my previous comment more positively: it seems to me that the OP could now study combinatorics, if he so chose. – Pete L. Clark Dec 16 '10 at 8:30
This is indeed an excellent suggestion, Qiaochu. – Georges Elencwajg Dec 16 '10 at 8:32
@Pete: I think the operative word in that sentence is best. I certainly agree that calculus and linear algebra are quite useful in combinatorics and that the OP should feel free to study
combinatorics. – Qiaochu Yuan Dec 16 '10 at 8:44
Great recommendation. I never knew about this book, and it appears to be exactly the book I wish I had when I was a graduate student. – Deane Yang Dec 16 '10 at 16:34
Dear James,
To a large extent, the answer to this question will depend on how successful you are at working on your own without the infrastructure of a course/lecturer/problem sets/etc. to guide you.
Given this, if you don't know yet whether you work well by yourself, there's only one way to find out: try it! You may find that you are good at working by yourself, and, if so, it doesn't
really matter what your background is: you can fill it in by reading more books. On the other hand, you may find that it's hard to make progress without the usual structures that a course
provides, and that's fine; many successful mathematicians were not all that independent when they were undergraduates.
One book that you can read which doesn't require much background at all is Hardy and Wright's classic text on number theory. It does not suit everyone's taste, but if you are not yet sure
where your taste lies, you can take a look and see if you like it.
One thing that you didn't address in your post is the question of how comfortable you are with reading and writing proofs. If you are not comfortable with this aspect of mathematics, then my
suggestion of Hardy and Wright won't be terribly appropriate, and neither will many of the others. If you are comfortable with proofs, then in some sense there is no limit on what you can do
by yourself, since (at least in principle) you can pick up any textbook and try to learn what is in it. On the other hand, if you find that you aren't (yet) comfortable with reading and
understanding formal proofs by yourself, then it will be harder to go very far by yourself, and it might be better to focus on your formal course work for now. (And, if your ambition is to
pursue pure mathematical research, you should try to take courses that introduce you to reading and writing proofs as soon as you can.)
Whatever your situation is, you should always be sure not to neglect your formal coursework (even if the work you are doing on your own turns out to be more exciting). Excellence in formal
coursework is more or less a requirement for going on in graduate school, which is in turn a requirement for becoming a research mathematician.
I think if you like Taylor series, then Herb Wilf's generatingfunctionology would be a good choice. It shows how you can use Taylor series to solve counting problems in combinatorics. You
can download the second edition from Wilf's homepage.
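For a taste of the flavor (a toy example, not one lifted from Wilf): the Taylor coefficients of x/(1 - x - x²) are the Fibonacci numbers, which sympy will confirm:

```python
import sympy as sp

x = sp.symbols('x')
f = x / (1 - x - x**2)          # ordinary generating function of the Fibonacci numbers
# Its Taylor coefficients are 1, 1, 2, 3, 5, 8, 13, ...
print(sp.series(f, x, 0, 8))    # x + x**2 + 2*x**3 + 3*x**4 + 5*x**5 + 8*x**6 + 13*x**7 + O(x**8)
```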
You could also just try looking at textbooks for the undergraduate math major courses, such as abstract algebra and real analysis. Michael Artin's Algebra gives a fairly broad introduction to the subject of abstract algebra. For real analysis, many people swear by Rudin's Principles of Mathematical Analysis. There are many other texts that cover the same material.
I will say: many areas and not a lot. Ha!
Let me explain: on the one hand, linear algebra and calculus are enough to consider a lot of non-trivial problems and describe basic issues in many areas. On the other hand, the various
areas of mathematics tend to interact intensely with each other, which is what makes math so cool. So it's going to be difficult to direct you to a specific area, since chances are that a
reference that is advanced enough will not be shy about using much more advanced notions (check out the math articles on wikipedia to get an idea of what I mean; even innocuous sounding
ones can get pretty intense).
I do want to encourage you to give in to your curiosity: but instead of picking a specific subject, you would be much better off picking up specific references that are written more
specifically for your level. There are many of those, look for general math books, e.g. from the AMS and MAA. "Proofs from THE BOOK" might be a bit intense, but roughly at the right level.
Since the various areas of math tend to riff off each other as I mentioned, the last thing you want to do is get specialized too early anyway, so generalist books are better for you now.
Since you mention books from the AMS and MAA, I'll point out that the Student Mathematical Library series (STML) from the AMS has many books targeted at the level of undergraduates who are ready to move beyond calculus and linear algebra and are looking for something suitable for self-study. There are other book series as well, some of which are similarly targeted, so the OP may find something that tickles his fancy browsing somewhere in there. – Vaughn Climenhaga Dec 16 '10 at 18:01
I think a nice and interesting topic that seems doable with basic calculus and linear algebra would be some kind of introduction in the theory of knots and surfaces. In particular, I have in
mind the book "Knots and Surfaces. A guide to discovering mathematics" by Gilbert and Porter, Oxford Univ. Press, 1995. I think it would not be too sophisticated for you, it will introduce
you to and get you thinking about various important objects in mathematics, and it may inspire you for your later studies. Have a look at it. If it turns out to be not that well doable for you, you could always take a second look at it in a year or so.
Other books along these lines: Stillwell's Geometry of Surfaces, Adams' The Knot Book, Weeks' The Shape of Space (no rigorous definitions, but lots of interesting pictures and exercises).
– Qiaochu Yuan Dec 16 '10 at 21:27
@James, OP of this fine question:
I've edited this answer in light of your response. Thanks for getting back to us with the details of your mathematical education to this point. As you can see from one of my comments, I was a
little concerned that you might have forgotten us! In any event, my follow-up is presented in the paragraph after this next one, which I'm leaving in as part of my original answer to this question.
My original response was:
If you want to get "better" answers--by which I mean answers more precisely tailored to your individual level of mathematical development--I think it would help if you edited your question (since you can't make comments until you have 50 reputation points) so as to specify exactly what you mean by "basic". It sounds to me like you have been exposed to single-variable calculus and linear algebra through maybe determinants. To offer a few hints as to what I'm fishing for here, perhaps you could tell us if you have studied: a.) infinite series; b.) partial derivatives and multiple integrals; c.) eigenvalues and eigenvectors; d.) characteristic polynomials of matrices; e.) the Hamilton-Cayley theorem; f.) vector calculus--gradient, divergence and curl; g.) linear ordinary differential equations. If you do that, I'll try to answer your question. (You can find my email address on my user profile in case I forget to check back.) Meanwhile, Qiaochu Yuan's answer looks fascinating to me, as does the problem fedja pitched.
And my addenda are:
First of all, it sounds to me like you have encountered, or are about to encounter, almost everything I mentioned in your course work. Let's see, you've had a full year of calculus, if I understand you, and you are in the first half of your second year. So if your courses are anything like mine were, you have probably seen items (a.) and (b.) on my list--you are probably just getting into partial derivatives etc. right about now. I would guess you've scratched the surface of item (f.), and probably have been exposed to eigenvalues and eigenvectors (item (c.)), and perhaps the characteristic polynomial (item (d.)). I'd bet that items (e.) and (g.) are just up the road in your course work. That being said, I think there are a few really good books you
could probably tackle without too much difficulty. First of all, you might check out the book Differential Equations, Dynamical Systems, and an Introduction to Chaos by Hirsch, Smale and Devaney. This is an introductory text on differential equations which includes some very nice explanations of some fairly advanced topics; it should be pretty accessible to a person with your background. If you are interested in abstract algebra, you might have a look at Emil Artin's little book called Galois Theory; it covers some central material on groups and fields, right from the ground up. Incidentally, Hirsch, Smale and Devaney explains most of the linear algebra needed as you go along, so anything you haven't seen will be covered. If you like topology, and are ready for a challenge, you might look into John Milnor's Topology from the Differentiable Viewpoint. Finally, Barrett O'Neill's Elementary Differential Geometry covers the basics of this field,
and as I recall only requires knowledge of calculus at your level, plus some linear algebra. All these books are good introductions to topics of great interest to many mathematicians at the
present time.
Don't forget to try the problems--math is like music; you've got to practice.
Good luck with it! Let us know how it goes!
Perhaps if you are analytically inclined then analysis on fractals is a good place to look. A good portion of the work done is with second finite difference equations leading towards a limit definition of a "second derivative" and uses some linear algebra such as inverting small matrices. Strichartz's Differential Equations on Fractals is a good place to start, especially the first few chapters where spectral decimation is discussed. As an aside, a significant number of papers in this area have come out of REU programs.
In general, focusing on your courses is important but that should still leave you with some time to think about other topics as well. This kind of curiosity will help you see what's out there and give you a sensible way to choose a specialty when the time comes.
Best of luck.
A linear algebra point of view can be useful for some topics normally addressed in a second-year calculus course: understanding Jacobians, and doing some things with systems of differential equations that require eigenvectors, etc.
Then in statistics, suppose you want to understand why the sum of squares of residuals in a simple linear regression problem has a scalar multiple of a chi-square distribution with $n-2$ degrees of freedom, where $n$ is the number of data points, and why it's independent of the estimate of the slope. That all becomes clear if you know how a real symmetric matrix can be diagonalized by an orthogonal matrix. Or suppose you want to understand why every non-negative definite symmetric real matrix can be realized as the variance of some random vector. Same idea.
Another small comment on statistics. Anyone with a pseudo-random number generator and some bits of commonplace software can draw a (pseudo-)random sample of 200 observations from a
bivariate normal distribution with correlation (for example) 0.8. The correlation in the sample is on average about 0.8, but from one sample to the next varies with a standard deviation
of about 0.025. So if you draw such a sample and get 0.825 or 0.775 as the correlation, that's unsurprising. What if you want to sample from the conditional distribution given the
correlation, so that..... – Michael Hardy Dec 16 '10 at 18:16
.....you always get exactly 0.8, while the 200 points continue to vary randomly from one sample of 200 to the next? I wrote a little program that does that. I did it by knowing about
spectral decompositions of 2-by-2 matrices. The theory of the multivariate normal distribution, the F-distribution, Student's t-distribution, the chi-square distribution, the Wishart
distribution, etc., requires linear algebra to be understood. If you've got linear algebra and first-year calculus, you can do quite a lot of that sort of thing. – Michael Hardy Dec 16
'10 at 18:20
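For the curious, here is one way Hardy's "little program" could go; this is a minimal numpy reconstruction of mine (not his code): whiten the sample with the inverse square root of its empirical covariance, then color it with the square root of the target covariance, taking both square roots via spectral decomposition.

```python
import numpy as np

def sample_with_exact_correlation(n=200, rho=0.8, seed=0):
    # Draw n points, then force the SAMPLE correlation to be exactly rho.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, 2))
    x -= x.mean(axis=0)                        # center the sample
    emp = (x.T @ x) / n                        # empirical 2x2 covariance
    w, v = np.linalg.eigh(emp)                 # spectral decomposition
    whiten = v @ np.diag(w ** -0.5) @ v.T      # emp^(-1/2)
    target = np.array([[1.0, rho], [rho, 1.0]])
    wt, vt = np.linalg.eigh(target)
    color = vt @ np.diag(np.sqrt(wt)) @ vt.T   # target^(1/2)
    return x @ whiten @ color                  # sample covariance is now exactly target

pts = sample_with_exact_correlation()
print(np.corrcoef(pts, rowvar=False)[0, 1])    # 0.8 up to rounding error
```

The points still vary randomly from one seed to the next, but the sample correlation is pinned at rho, exactly the behavior described in the comments above.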
I would highly recommend trying out mathematical logic and maybe also some introductory set theory. Logic is more or less self-contained and learning how to write up formal proofs is
essential in any higher level of mathematics that you encounter. The nice thing about working in logic is that it trains you to formally prove that which is often intuitively clear. The
same goes for proofs in finite set theory. But with set theory, you can also quickly work up to some results that are often initially counterintuitive involving the infinite.
Perhaps someone else can recommend some references here since my pre-college/undergrad knowledge in these areas came from a variety of sources including oral presentations and course notes.
I quite liked Thomas Forster's Logic, induction and sets — it has the right mix of philosophy and mathematics. It's also somewhat non-standard since he emphasises the notion of recursive
types, which gives it a bit of a computer-science sort of flavour. I also advise some caution — a formal proof in the sense of logic, is quite different to a formal proof in the sense of
mathematics! (Indeed, it's more akin to the so-called two-column proofs from high school, if I understand the descriptions correctly.) – Zhen Lin Dec 20 '10 at 14:07
@Zhen, there are definitely two-column proofs in logic, but there is so much more. To name a few examples, you have the Compactness Theorem, Gödel's Completeness theorem, and Gödel's
Incompleteness theorems. You also have all of the machinery involved in the proofs of the latter two results. Also, all of our proofs are theoretically supposed to be able to be
transformed into a two-column proof to verify correctness, but I don't see that happening in practice anytime soon. – Jason Dec 20 '10 at 19:16
@Jason I'm quite aware of that — I was simply making the amusing observation that the object "formal proof" studied in mathematical logic is, oxymoronically, precisely not the kind of
"formal proof" we mean in the rest of mathematics (including in logic itself)! – Zhen Lin Dec 21 '10 at 7:42
@Zhen: Yes, we definitely would not want to translate Wiles's proof of Fermat's Last Theorem into a "formal proof". :) Also, my comment was intended to clarify to the OP the value of
learning logic. I want to apologize for the fact that my wording suggested you weren't aware of the theorems in logic not proved with the two-column proof. – Jason Dec 21 '10 at 9:22
Pressley's Elementary Differential Geometry requires little more than some multivariable calculus and linear algebra. It treats curves and surfaces in $\mathbb{R}^{3}$. However, the author is careful to point out that while many of the results generalise to higher dimensions, the methods used in the book do not always do so. This is done in part to make the subject accessible. It might be worth a look to get a taste of differential geometry without the machinery developed in more advanced courses on topology, smooth manifolds and the like.
North Waltham Math Tutor
Find a North Waltham Math Tutor
...I have particular expertise with standardized tests such as the SAT, ACT, SSAT and ISEE. I also tutor math and writing for middle school and high school students. I was trained by and spent 5
years working for one of the major test prep companies.
26 Subjects: including ACT Math, probability, linear algebra, algebra 1
...I have worked with students who are taking the GED specifically. As an undergraduate I read extensively in philosophy, literature, and sociology. I have also worked in an undergraduate tutorial
office for a year, and tutored high school students in English Language Arts (ELA). I have a BA in philosophy, and do well on standardized tests.
29 Subjects: including geometry, trigonometry, statistics, literature
...Very accessible outside of class and willing to address any question you have about the subject matter.” As a student of philosophy, first as an undergraduate major in philosophy at Reed
College, and now as a PhD student in philosophy at Harvard University, I have been required to take courses in ...
24 Subjects: including logic, English, reading, writing
I am a former high school math teacher with 4 years of tutoring experience. I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are
my preference.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...It is just one of the many secondary level math courses that I currently tutor. My secondary educational experience was traditional, which provided a thorough understanding of proper grammar,
sentence structure, paragraph composition, and essay architecture. In college and beyond, the focus is more on effectiveness: knowing your audience and tailoring the style to achieve the desired effect.
44 Subjects: including econometrics, algebra 1, algebra 2, calculus | {"url":"http://www.purplemath.com/north_waltham_ma_math_tutors.php","timestamp":"2014-04-18T22:07:50Z","content_type":null,"content_length":"24112","record_id":"<urn:uuid:7a687251-1638-4b94-ad77-d133d1f2b157>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Video Library
Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation
spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all
available On-Demand from this Video Library and on Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant
bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.
I review a class of nonlocally modified gravity models which were proposed to explain the current phase of cosmic acceleration without dark energy. Among the topics considered are deriving causal and
conserved field equations, adjusting the model to make it support a given expansion history, why these models do not require an elaborate screening mechanism to evade solar system tests, degrees of
freedom and kinetic stability, and the negative verdict of structure formation.
Augustine of Hippo declared he knew what time is until someone asked him. After 16 centuries we still largely ignore the true essence of time, but we made definite progress in studying its
properties. The most striking, and somewhat intuitively (and tragically) obvious one is the irreversibility of its flow. And yet, our fundamental theories are time-reversal invariant, they do not
distinguish between past and future. This is usually accounted for by assuming an immensely special initial condition of the Universe, dressed with statistical arguments. | {"url":"https://www.perimeterinstitute.ca/video-library?title=&page=16","timestamp":"2014-04-19T05:42:25Z","content_type":null,"content_length":"58652","record_id":"<urn:uuid:e2922afb-cf58-4f71-9af6-1853f654557d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
Waves on water
November 26th 2009, 02:50 AM #1
Please correct me if I am wrong anywhere.
I get the impression from what I have read that, for a simple moving wave train on water,
X = horizontal rest coordinate of a bit of the water surface,
x = horizontal displacement of a bit of the water surface,
y= vertical displacement of a bit of the water surface,
t = time
a = amplitude/2
b = constant
c = 2*pi/wavelength
f = constant
speed of waves varies as sqrt(wavelength)
x = a*sin(b+c*X-v*t), y = a*cos(b+c*X-v*t), v=f*sqrt(c) :: eqns (1)
This leads to a trochoid (a curtate cycloid) with the sharper curves on top.
If a is too big compared to 1/c, the curve becomes a prolate cycloid with loops on top, but water cannot go through itself, so the waves develop foam crests (sometimes called "white horses").
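A reader's note on checking the loop condition numerically: the horizontal position of a surface bit is X + x, whose derivative in X is 1 + a*c*cos(b + c*X - v*t), so the profile can fold back (and foam would form) exactly when a*c > 1. A quick sketch with t frozen at 0:

```python
import numpy as np

a, c, b = 0.4, 2.0, 0.0                  # try a*c < 1 versus a*c > 1
X = np.linspace(0.0, 4 * np.pi, 2000)    # rest coordinates of surface bits
surface_x = X + a * np.sin(b + c * X)    # X + x from equations (1), at t = 0
surface_y = a * np.cos(b + c * X)        # plot surface_x against surface_y to see the profile
single_valued = np.all(np.diff(surface_x) > 0)
print("a*c =", a * c, "| profile single-valued:", single_valued)
```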
I have been unsuccessfully trying to turn equations (1) into a differential equation. Please, is there a known differential equation, or set of differential equations, for waves moving on the (2-dimensional) surface of 3-dimensional water, one that also explains the usual mixed disorderly wave patterns seen on water in nature?
Last edited by Anthony Appleyard; November 26th 2009 at 07:18 AM.
Sarah's cracking algorithm
January 1999
Sarah Flannery, a student at Scoil Mhuire Gan Smal in Blarney, Co. Cork, was awarded an Intel Fellows Achievement Award for her project in the Chemical, Physical and Mathematical Category at the 1998
Esat Telecom Young Scientist and Technology awards.
Sarah's project for the exhibition was entitled "Cryptography - The Science of Secrecy". Working with matrices and using her PC at home for experiments, Sarah developed and tested a new cryptography
algorithm for encrypting data. While the work has been around for a little while, media attention has recently skyrocketed!
The initial idea for the algorithm was developed while Sarah was on a placement at Baltimore-Zergo in Dublin, and she has called it the Cayley-Purser algorithm, named after Arthur Cayley, a
19th-century Cambridge mathematician, and Michael Purser, a cryptographer who inspired her.
Sarah's new algorithm is a competitor to the popular RSA algorithm. Where RSA uses exponentiation to encode and decode a message, Cayley-Purser uses matrix multiplication. This means that while RSA
grows as a cubic with the length of the encryption "key", Cayley-Purser grows quadratically. For a typical key length of 1024 bits, this makes Cayley-Purser around 75 times faster.
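For a rough feel for the difference in operation counts, here is an illustrative timing of mine (an assumed setup, not Flannery's actual scheme): square-and-multiply exponentiation with a 1024-bit exponent costs on the order of 1,500 big-number multiplications, while a single 2x2 matrix product mod n costs eight.

```python
import time

n = (1 << 1024) - 159                    # an arbitrary 1024-bit modulus
x = 0x123456789ABCDEF % n
e = (1 << 1024) - 1                      # worst-case 1024-bit exponent

t0 = time.perf_counter()
pow(x, e, n)                             # square-and-multiply exponentiation
t1 = time.perf_counter()

A = [[x, x + 1], [x + 2, x + 3]]         # full-size 1024-bit entries
B = [[x + 4, x + 5], [x + 6, x + 7]]
C = [[(A[i][0] * B[0][j] + A[i][1] * B[1][j]) % n for j in range(2)]
     for i in range(2)]                  # eight multiplications mod n in total
t2 = time.perf_counter()

print(f"modular exponentiation: {t1 - t0:.4f}s, 2x2 matrix product: {t2 - t1:.6f}s")
```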
One downside of Sarah's algorithm is that the encrypted messages it produces are much longer than those produced by RSA. More significantly, however, there is still some possibility that
Cayley-Purser might have some "security holes", making it too easy to "crack the code". Now that Sarah and her algorithm are receiving so much attention, other researchers will begin to explore the
security properties of Cayley-Purser, and see how it stacks up against the dominant RSA.
Some Internet discussion of Sarah's work.
The homepage of RSA. | {"url":"http://plus.maths.org/content/sarahs-cracking-algorithm","timestamp":"2014-04-17T01:01:11Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:82b7bcd4-f1ca-4c0f-8abd-ec560d2d4eca>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hint to a math problem posted earlier
I remember making an earlier post for a math problem to be solved algebraically, and I think it went unsolved. So here's one hint to the problem:
a + b + c = 1
Prove that 1/3^a + 1/3^b + 1/3^c >= 3a/3^a + 3b/3^b + 3c/3^c
Since 1 = a + b + c....this is the same as
(a + b + c)/3^a + (a + b + c)/3^b + (a + b + c)/3^c >= 3a/3^a + 3b/3^b + 3c/3^c....
subtract the right side from the left to get...
(c -2a + b)/3^a + (a - 2b + c)/3^b + (b + a - 2c)/3^c >= 0
...which becomes...
c/3^a - a/3^a + b/3^a - a/3^a + c/3^b - b/3^b + a/3^b - b/3^b + a/3^c - c/3^c + b/3^c - c/3^c >= 0
....which you can then.....
...perhaps I'll post the next step...I'll wait a little bit more to see if anyone can solve it from here.
I believe one can use calculus to figure that one out also. Hey, thermodude, didn't you make the Physics Olympiad? Good job! I'm hoping on doing the math ones this year.
Well, I made it as a semifinalist this year for Physics Olympiad, but I didn't get to the training camp. I'm definitely going to try out again for it next year...and hopefully I'll get in when I do.
Yeah, Tongos, you should definitely do the math olympiad next year. Ur excellent at math, and most importantly you love it...which definitely makes an excellent combination.
k....back to the problem....just group it so that
it becomes...
(c - a)(1/3^a - 1/3^c) + (b - a)(1/3^a - 1/3^b) + (b - c)(1/3^c - 1/3^b) >= 0
Now think....a + b + c = 1
Apparently, it looks like interest in this math problem is rather low. Perhaps I should post the answer.
Ah shucks...no wonder no one could solve it....the final step has nothing to do with a + b + c = 1.
Think....what if a > b > c?
(c - a)(1/3^a - 1/3^c) + (b - a)(1/3^a - 1/3^b) + (b - c)(1/3^c - 1/3^b) >= 0
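One way to finish: since 3^(-t) is strictly decreasing in t, each grouped term above is a product of two factors that always share a sign, hence is non-negative:

```latex
\[
(c-a)\left(3^{-a}-3^{-c}\right)\ \ge\ 0,\qquad
(b-a)\left(3^{-a}-3^{-b}\right)\ \ge\ 0,\qquad
(b-c)\left(3^{-c}-3^{-b}\right)\ \ge\ 0.
\]
```

For example, if c > a then 3^(-a) > 3^(-c), so both factors in the first term are positive; if c < a, both are negative. Summing the three non-negative terms gives the required inequality, and (as noted above) this final step never uses a + b + c = 1.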
Approximating discrete probability distributions with bayesian networks
, 2001
"... Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted,
with some causal interpretation given to the graph in a network and some standard interpretation of probabi ..."
Cited by 11 (7 self)
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with
some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current
foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. One standard approach is to interpret a Bayesian network
objectively: the graph in a Bayesian network represents causality in the world and the specified probabilities are objective, empirical probabilities. Such an interpretation founders when the
Bayesian network independence assumption (often called the causal Markov condition) fails to hold. In §2 I catalogue the occasions when the independence assumption fails, and show that such failures
are pervasive. Next, in §3, I show that even where the independence assumption does hold objectively, an agent’s causal knowledge is unlikely to satisfy the assumption with respect to her subjective
probabilities, and that slight differences between an agent’s subjective Bayesian network and an objective Bayesian network can lead to large differences between probability distributions determined
by these networks. To overcome these difficulties I put forward logical Bayesian foundations in §5. I show that if the graph and probability specification in a Bayesian network are thought of as an
agent’s background knowledge, then the agent is most rational if she adopts the probability distribution determined by the
- Proceedings of the Eleventh International Workshop on Principles of Diagnosis (DX-00 , 2000
"... This paper addresses the foundations of diagnostic reasoning, in particular the viability of a probabilistic approach. One might be reluctant to adopt such an approach for one of two reasons:
one may suppose that the probabilistic approach is inappropriate or that it is impractical to implement. I s ..."
Cited by 4 (1 self)
This paper addresses the foundations of diagnostic reasoning, in particular the viability of a probabilistic approach. One might be reluctant to adopt such an approach for one of two reasons: one may
suppose that the probabilistic approach is inappropriate or that it is impractical to implement. I shall attempt to overcome any such doubts and to argue that on the contrary the probabilistic method
is extremely promising.
"... Abstract—Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer
interaction modeling. We focus in this paper on temporal models representable as excitatory networks where all c ..."
Cited by 4 (1 self)
Abstract—Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer interaction
modeling. We focus in this paper on temporal models representable as excitatory networks where all connections are stimulative, rather than inhibitive. Through this emphasis on excitatory networks,
we show how they can be learned by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual information relationships and which
can be summarized into a dynamic Bayesian network (DBN). To demonstrate the practical feasibility of our approach, we show how excitatory networks can be inferred from both mathematical models of
spiking neurons as well as real neuroscience datasets.
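To make the "high mutual information" criterion concrete, here is a toy plug-in estimator for two binary, time-binned event streams (an illustration of the general idea only, not code from the cited paper):

```python
import numpy as np

def mutual_information_bits(x, y):
    # Plug-in estimate of I(X;Y) in bits for two binary event streams.
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    mi = 0.0
    for a in (False, True):
        for b in (False, True):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(1)
src = rng.random(5000) < 0.1                       # source fires in ~10% of bins
dst = np.roll(src, 1) & (rng.random(5000) < 0.9)   # excites the target one bin later
print(mutual_information_bits(src[:-1], dst[1:]))  # lagged MI is well above 0
```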
- CoRR
"... Motivation: Several different threads of research have been proposed for modeling and mining temporal data. On the one hand, approaches such as dynamic Bayesian networks (DBNs) provide a formal
probabilistic basis to model relationships between time-indexed random variables but these models are intr ..."
Cited by 2 (1 self)
Motivation: Several different threads of research have been proposed for modeling and mining temporal data. On the one hand, approaches such as dynamic Bayesian networks (DBNs) provide a formal
probabilistic basis to model relationships between time-indexed random variables but these models are intractable to learn in the general case. On the other, algorithms such as frequent episode
mining are scalable to large datasets but do not exhibit the rigorous probabilistic interpretations that are the mainstay of the graphical models literature. Results: We present a unification of
these two seemingly diverse threads of research, by demonstrating how dynamic (discrete) Bayesian networks can be inferred from the results of frequent episode mining. This helps bridge the modeling
emphasis of the former with the counting emphasis of the latter. First, we show how, under reasonable assumptions on data characteristics and on influences of random variables, the optimal DBN
structure can be computed using a greedy, local, algorithm. Next, we connect the optimality of the DBN structure with the notion of fixed-delay episodes and their counts of distinct occurrences.
Finally, to demonstrate the practical feasibility of our approach, we focus on a specific (but broadly applicable) class of networks, called excitatory networks, and show how the search for the
optimal DBN structure can be conducted using just information from frequent episodes. Applications to datasets gathered from mathematical models of spiking neurons, as well as to real neuroscience datasets, are presented.
"... According to different typologies of activity and priority, risks can assume diverse meanings and it can be assessed in different ways. In general risk is measured in terms of a probability
combination of an event (frequency) and its consequence (impact). To estimate the frequency and the impact (se ..."
According to different typologies of activity and priority, risks can assume diverse meanings and it can be assessed in different ways. In general risk is measured in terms of a probability
combination of an event (frequency) and its consequence (impact). To estimate the frequency and the impact (severity) historical data or expert opinions (either qualitative or quantitative data) are
used. In the case of enterprise risk assessment the considered risks are, for instance, strategic, operational, legal and of image, which many times are difficult to be quantified. So in most cases
only expert data, gathered in general by scorecard approaches, are available for risk analysis. The Bayesian Network is a useful tool to integrate different information and in particular to study the
risk’s joint distribution by using data collected from experts. In this paper we want to show a possible approach for building a Bayesian networks in the particular case in which only prior
probabilities of node states and marginal correlations between nodes are available, and when the variables have only two states.
"... Abstract. Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer
interaction modeling. In this paper we introduce the notion of excitatory networks which are essentially tempor ..."
Abstract. Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer
interaction modeling. In this paper we introduce the notion of excitatory networks which are essentially temporal models where all connections are stimulative, rather than inhibitive. The emphasis on
excitatory connections facilitates learning of network models by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual
information relationships and that such relationships can be summarized into a dynamic Bayesian network (DBN). This leads to an algorithm that is significantly faster than state-of-theart methods for
inferring DBNs, while simultaneously providing theoretical guarantees on network optimality. We demonstrate the advantages of our approach through an application in neuroscience, where we show how
strong excitatory networks can be efficiently inferred from both mathematical models of spiking neurons and several real neuroscience datasets. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=473451","timestamp":"2014-04-20T08:00:32Z","content_type":null,"content_length":"28360","record_id":"<urn:uuid:f64b02e8-5d1b-44e3-a887-0e9db45d3163>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conroe Precalculus Tutor
Find a Conroe Precalculus Tutor
...I spent a year under rigorous tutelage with three experienced preachers in Lufkin, Tx. Afterwards, I spent five years preaching full time in Gladewater, TX and conducting missionary work
overseas. I've taught the Bible in public and private settings on thousands of occasions.
41 Subjects: including precalculus, chemistry, English, calculus
...In many ways it is more important than the upper level math courses. Here you learn the basic math that you will need for many of the courses you will take later. A good foundation in
Pre-algebra sets the stage for Algebra 1.
54 Subjects: including precalculus, reading, chemistry, calculus
...I try as much as possible to work in the comfort of your own home at a schedule convenient to you. I operate my business with the highest ethical standards and have consented to a background
check if you would like one.I have taught flute and clarinet lessons since the mid-'80s with many student...
35 Subjects: including precalculus, chemistry, physics, calculus
...I have worked as a Math Co-Teacher and Special Ed Case Manager for 6 years. Therefore I have a vast amount of experience working with children who have been diagnosed with ADD/ADHD. Several of
my students whom I have worked with through my position as a Special Ed Case Manager and Teacher have been diagnosed with Aspergers Syndrome.
32 Subjects: including precalculus, reading, ASVAB, piano
...Tutoring is a full time thing for me. This is what I do and the only thing I do. I would love to have you as a student on YOUR time not MINE.
24 Subjects: including precalculus, chemistry, calculus, physics
Nearby Cities With precalculus Tutor
Beach, TX precalculus Tutors
Cut And Shoot, TX precalculus Tutors
Houston precalculus Tutors
Humble precalculus Tutors
Jersey Village, TX precalculus Tutors
Jersey Vlg, TX precalculus Tutors
Katy precalculus Tutors
Kingwood, TX precalculus Tutors
Oak Ridge N, TX precalculus Tutors
Panorama Village, TX precalculus Tutors
Shenandoah, TX precalculus Tutors
Spring precalculus Tutors
The Woodlands, TX precalculus Tutors
Tomball precalculus Tutors
Willis, TX precalculus Tutors | {"url":"http://www.purplemath.com/conroe_precalculus_tutors.php","timestamp":"2014-04-21T07:39:33Z","content_type":null,"content_length":"23756","record_id":"<urn:uuid:d2c645f7-0867-412b-b757-ff05cec5f4e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Takoma Park Algebra Tutor
Find a Takoma Park Algebra Tutor
...I have taught elementary, middle and high school levels, as well as college level courses. With my experience and expertise I use a variety of teaching methods and will tailor my methodology to
meet your child's needs. I guarantee that your child will comprehend difficult concepts, be motivated to learn and will have grade improvement.
14 Subjects: including algebra 2, algebra 1, English, reading
...In addition to this, I believe that every child can achieve their true potential when given adequate resources and proper environment that nurtures their curiosity to learn. In my classroom, I
try to incorporate technology such as Khanacdemy, Youtube, and interactive education games such as jeop...
6 Subjects: including algebra 1, algebra 2, elementary math, GED
...I like to get feedback from my students often in order to improve their experience with tutoring continuously and to be able to cater to their specific needs. I have gotten tremendous
satisfaction from seeing my students' grades improve and from hearing positive feedback from them (including con...
40 Subjects: including algebra 1, chemistry, algebra 2, calculus
I have a passion for Math. I have been teaching and tutoring part time for the last 25 years. While in college, I was on the Putnam Intercollegiate Math Competition team for 3 consecutive years,
and won several math competitions.
37 Subjects: including algebra 1, algebra 2, calculus, physics
...I was always good at math in school, have always done math problems in my head, and did well on all standardized multiple choice problems. I was less good at the kind of material in the verbal
section, but since then my training in linguistics and wide reading have improved my skills in this area. The SAT reading test has multiple choice questions.
22 Subjects: including algebra 1, Spanish, English, reading | {"url":"http://www.purplemath.com/takoma_park_algebra_tutors.php","timestamp":"2014-04-19T23:18:44Z","content_type":null,"content_length":"24082","record_id":"<urn:uuid:6e3a4300-6311-4e9b-9889-b403f5c229c5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Quantum States of Shor's Algorithm
Given a number N and an element X of the multiplicative group mod N (i.e., X is a positive number less than N and relatively prime to N), the problem is to determine the order R of X mod N. (That is,
R is the minimum number such that X^R mod N is 1.)
Shor's algorithm uses two registers. We describe the state of the algorithm at any time by associating a complex number called an amplitude to each possible pair of values the two registers might
take. (This is analagous to describing the state of a probabilistic algorithm by associating a real number - the probability - with each possible deterministic state.) The amplitude is the "arrow"
(as Feynman calls it in his book QED) associated with the corresponding event.
The first rule regarding amplitudes is that, if the system were to be fully observed at any time, then the probability of finding a particular outcome (i.e., a particular pair of register values)
would be equal to the squared length of its current amplitude. The second rule ("adding arrows") is that, if there are several alternative ways for an outcome (again, a pair of register values) to
arise, then the amplitude for the event is the sum of the amplitudes of the alternatives. Since amplitudes can cancel, this is the mechanism for destructive interference, which is in turn what allows
the Fourier transform to be computed.
1. In the initial state, each pair of values (A, X^A mod N), where A ranges from 0 to Q-1, has amplitude 1/sqrt(Q), where Q is the smallest power of two larger than N^2.
The quantum computation that achieves this is analogous to a probabilistic computation that chooses a random A for the first register and then computes the corresponding value X^A mod N for the
second register.
2. Observe the second register. (We differ here from Shor for pedagogical purposes.) This partially "collapses" the state of the computation so that the only values A of the first register
associated with non-zero amplitudes are those such that X^A mod N agree with the observed value. That is, the sequence of amplitudes as A ranges from 0 to Q-1 is non-zero only on every Rth value.
For instance, if R was 4, the resulting sequence might be 0,S,0,0,0,S,0,0,0,S,0,0,0,S,0,0,..., where the sequence is Q numbers long and S is approximately 1/sqrt(Q/R).
3. Perform a discrete Fourier transform on the aforementioned sequence of amplitudes. This is the key step. The Fourier transform of such a sequence with period R and Q elements is essentially a
similar sequence, but with period Q/R and starting at the first element. That is, the non-zero amplitudes are now associated with those values of the first register that are (very close to)
multiples of Q/R. See here for elaboration.
4. Observe the first register. We will see a value C that is (very close to) one of the first R multiples of Q/R and each such multiple will occur with roughly equal probability. Compute C/Q, which
will be very close to one of the first R multiples of 1/R, say D/R. Next, compute the ratio D/R (using the fact that among all fractions expressible as ratios with denominator N or less, it is
the one closest to C/Q).
At this point, we know the ratio D/R - we don't yet know D or R. However, we can easily compute relatively prime A and B such that A/B = D/R. If D is relatively prime to R, then B will be R.
Since each non-negative integer smaller than R is roughly equally likely to be D, and at least a fraction proportional to 1/log(log(R)) of these values are relatively prime to R (for any R), we
find R with probability proportional to at least 1/log(log(R)).
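The classical post-processing in step 4 is easy to experiment with; the sketch below (mine, not from these notes) uses Python's Fraction to perform the continued-fraction step:

```python
from fractions import Fraction

def recover_order(C, Q, N):
    # C/Q is very close to D/R with R <= N; limit_denominator finds the
    # closest fraction with denominator at most N, whose denominator is R
    # whenever D happens to be relatively prime to R.
    return Fraction(C, Q).limit_denominator(N).denominator

# Toy numbers: for N = 15 and X = 7 the order is R = 4, and Q = 256 is the
# smallest power of two above N^2 = 225.  Measurements land near k*Q/R:
N, Q = 15, 256
print(recover_order(192, Q, N))   # D = 3, coprime to 4: recovers R = 4
print(recover_order(64, Q, N))    # D = 1: also recovers R = 4
print(recover_order(128, Q, N))   # D = 2 shares a factor with R: yields 2
```

The third call illustrates the failure mode mentioned above: when D shares a factor with R, the reduced denominator B divides R but is smaller than it.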
Neal Young <Neal.Young@dartmouth.edu> Last modified: Tue May 21 11:47:38 1996 | {"url":"http://www.cs.ucr.edu/~neal/1996/cosc185-S96/shor/high-level.html","timestamp":"2014-04-16T04:11:17Z","content_type":null,"content_length":"5145","record_id":"<urn:uuid:40cccd50-2a56-4914-b29d-2b8572bd6684>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angle Between Line and Plane
The angle between a line, r, and a plane, π, is the angle between r and its orthogonal projection onto π, r'.
The angle between a line and a plane is equal to the complementary acute angle that forms between the direction vector of the line and the normal vector of the plane.
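In the usual notation, with $\vec{u}$ the direction vector of $r$ and $\vec{n}$ the normal vector of $\pi$, the standard formula is:

```latex
\[
\sin\alpha \;=\; \frac{\lvert \vec{u}\cdot\vec{n}\rvert}{\lvert\vec{u}\rvert\,\lvert\vec{n}\rvert}
\]
```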
If the line, r, and the plane, π, are perpendicular, the direction vector of the line and the normal vector of the plane have the same direction and therefore their components are proportional:
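In coordinates, with $\vec{u}=(u_1,u_2,u_3)$ and $\vec{n}=(A,B,C)$, the standard proportionality condition reads:

```latex
\[
r \perp \pi \iff \frac{u_1}{A} = \frac{u_2}{B} = \frac{u_3}{C}.
\]
```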
1. Determine the angle between the line
2. Determine the angle between the line
3. Determine the angle between the line and the plane: | {"url":"http://www.vitutor.com/geometry/distance/line_plane.html","timestamp":"2014-04-21T12:09:44Z","content_type":null,"content_length":"17848","record_id":"<urn:uuid:cbf50fed-df14-4c28-8cc5-8ae5bb821826>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Steel mill tackles power quality and power factor problems
North Star Steel confronted low power factor (PF) and harmonics problems prior to installing a new ladle furnace by conducting a computerized analytical study using harmonic and power factor
software. This study included field measurements of harmonics and PF on various feeders in the system. The Beaumont, Tex. company determined the characteristics of existing equipment and conductors.
The objective
The objective was to bring the overall plant PF to a value above 0.90 lagging, which would eliminate utility penalties.
At the time of the study, the plant had two existing scrap metal furnaces, and the electrical system included corrective PF capacitors as components of existing harmonic filter banks. A rolling mill
connected to a 13.8 kV system having no PF correction was also in operation. (See Fig. 1 for a simplified one-line diagram.)
[Figure 1 ILLUSTRATION OMITTED]
Developing computer simulation models
A steady-state harmonic model of the plant was developed based on obtained information on existing loads, conductors, and field measurements. Testing was done at various locations within the facility
to determine harmonic voltages and currents resulting from existing scrap metal furnace and rolling mill operation. Measurements at the primaries of the furnace transformers, the main utility
entrance busses, the existing harmonic filter banks, the main feeders serving the 13.8kV bus, and the primary feeders serving the rolling mill busses were made. Testing was carried out during
concurrent operation of the scrap metal furnaces (arc load) and the rolling mill (drive load). To perform the harmonic analysis, including the future ladle furnace, typical data obtained from
harmonic field measurements of similar ladle furnace installations were used for simulation purposes.
Harmonic field measurements for 13.8kV systems
Power factor correction requirements at the steel mill were determined through field measurements via testing and computer analysis with the help of power factor and harmonic software. Using portable
harmonic monitoring instruments, on-site measurements were performed at the main switchgear feeder circuit breaker locations on the 13.8kV system for the rolling mill loads over several days.
Existing current and voltage transformers were used as signal sources at all measurement points. Power flow and power factor data also were gathered to aid in the system analysis.
The voltage THD measurement was in the range of 1.51% to 8.17% of the fundament voltage. The voltage range is due to the load level of the mill stands (a machine that "mills" or rolls the steel into
various shapes or different thickness). When the mill stands are working hard, there is high current flow and high THD. When the machines are idling between loads, the result is low current flow and
low THD. The current THD was in the range of 3.45% to 9.86% of the fundament current. The following total power flows into the 13.8kV bus were measured at the typical heavy loading periods.
[MW.sub.measured] = 23.3MW
[Mvar.sub.measured] =19.9Mvar
[MVA.sub.measured] = 30.6MVA
[PF.sub.measured] = 0.76 lagging
The mill stand motor drives are the largest harmonic producing load on the plant's electrical system. Connected through step-down transformers, these drives are 6-pulse units generating predominantly
a 5th harmonic current during operation.
From the standpoint of the power supply, the rolling mill is divided into two sections. (See Fig. 1.) The section served by Supply Circuit No.1 contains most of the 6pulse drives. The current THD on
this supply circuit ranged from 27.28% to 34.16% of the fundamental current. [See Fig.2 (on page 50) for a typical current waveform for the 6-pulse drives.] This waveform contains 5th, 7th, 11th, and
13th harmonics. The current THD measurements taken on Supply Circuit No. 2 ranged from 3.69% to 7.76% of the fundamental current.
[Figure 2 ILLUSTRATION OMITTED]
Harmonic field measurements for 34.5 kV system
For this system, the voltage THD was normally in the range of 0.53% to 6.3% of the fundamental voltage. During times of erratic furnace arcing conditions, voltage THD values escalated, with a high of
19.18% being recorded on test instruments. The current THD on the two main feeders were in the range of 3.14% to 16.08% of the fundamental current. When both scrap metal furnaces were in operation,
higher current THD was reported. The scrap metal furnaces are a low PF load, typically in the range of 0.65 to 0.85 lagging. With two harmonic filter banks already operating, the overall PF was in
the range of 0.90 lagging to 0.90 leading.
Power factor study
Analysis of the power measurements taken at the site using test instruments indicated that reactive compensation was needed at the main 13.8kV bus to correct the PF from an average value of 0.76
lagging to 0.95 lagging. This reflected an initial low PF for the plant. The study also determined that existing harmonic filter banks on the 34.5kV system would provide adequate compensation for the
additional reactive load of the new ladle furnace. This is because the 34.5kV system, at times, operated at 0.90 leading PF. Thus, the existing capacitors of the harmonic filters had the capability
to correct the low PF from the new load as well as serve the existing load.
Field measurements using high-grade test instruments indicated the 13.8kV system load was approximately 23.3 MW with a 0.76 lagging PF, with an uncorrected reactive load of 19.9Mvar. To improve the 13.8kV system PF to 0.95 lagging, the reactive load had to be reduced to 7.2Mvar. An additional capacitor bank of 12.7Mvar would provide the reactive compensation.
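The arithmetic behind these figures is the standard requirement formula Qc = P(tan φ1 − tan φ2); here is a quick check (my sketch, not the study's software):

```python
import math

P = 23.3                                  # MW, measured real power
q_now = P * math.tan(math.acos(0.76))     # ~19.9 Mvar, matching the text
q_at_095 = P * math.tan(math.acos(0.95))  # ~7.7 Mvar allowed at 0.95 lagging
print(f"uncorrected reactive load: {q_now:.1f} Mvar")
print(f"minimum bank size: {q_now - q_at_095:.1f} Mvar")  # ~12.3; 12.7 was chosen
```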
Analysis shows need of harmonic filters
A harmonic computer program was used to perform a steady-state analysis of the facility's electrical system for each frequency at which a harmonic source was present. The program calculates the
harmonic voltages and currents throughout the system. In the harmonic simulations, the PF correction capacitors were connected to check for system resonance. The utility impedances were varied to
simulate changes to the short circuit MVA.
A harmonic computer simulation of the 13.8kV power system, with 12.7Mvar reactive compensation added to the main bus, was made and revealed a parallel resonant peak close to the 4.7th harmonic, as
shown in Fig. 3.
[Figure 3 ILLUSTRATION OMITTED]
A computerized harmonic analysis was done to determine the effect of the 12.7Mvar capacitor bank on the 13.8kV system. The calculated voltage THD more than tripled compared with the measured values,
and the current THD reached values as high as 62% due to the parallel resonant peak. Corrective action had to be taken.
The parallel resonant condition on the 13.8kV system was determined by harmonic computer simulation. The peak near the 5th harmonic would cause harmonic amplification and indicated the need for a
filter at this frequency. A 4.7th harmonic was selected for the tuned frequency filter to allow for inexact tolerances in the filter components and to reduce filter duty. The following equations were
used to calculate the correct size of an in-line reactor to convert the 12.7Mvar capacitor bank into a 4.7th harmonic filter:
h² = X_C / X_L  →  X_L = X_C / h²
X_C = (kV_L-L)² / Mvar_3-phase
Where: h = Tuned Harmonic; X_C = Capacitive Reactance of Filter; X_L = Inductive Reactance of Filter; kV_L-L = Rated Voltage of Capacitor Bank; Mvar_3-phase = Rated Mvar of Capacitor Bank
Calculating an X_C of 14.995 ohm for a 12.7Mvar capacitor bank applied at 13.8kV results in a required X_L of 0.679 ohm for a 4.7th harmonic filter.
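The same arithmetic as a small function, handy for checking other candidate filters (illustrative only; the names are mine):

```python
def filter_reactances(kv_ll, mvar_3phase, h):
    # Apply the two equations above: Xc from the bank rating, then the
    # series reactor Xl that tunes the bank to the h-th harmonic.
    xc = kv_ll ** 2 / mvar_3phase
    xl = xc / h ** 2
    return xc, xl

xc, xl = filter_reactances(13.8, 12.7, 4.7)
print(f"Xc = {xc:.3f} ohm, Xl = {xl:.3f} ohm")   # 14.995 and 0.679, as above
```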
An electrical model of the plant's distribution system with a capacitor bank installed was made, and an analysis was carried out to determine various electrical parameters of the steel mill's
electrical system; the system did not perform as desired. Then, while modeling the electrical system, the capacitor bank was changed into a 4.7 harmonic filter bank by adding reactors and resistors.
Results showed a decrease in voltage and current THD from the values obtained during the testing of the electrical system prior to the installation of the filter bank. Maximum voltage THD was 2.63%,
below the 5% limit recommended by IEEE, and maximum current THD was 3.83%, well below previously measured values and below recommended IEEE limits.
Capacitors used with reactors as part of a filter will experience a steady-state voltage above the nominal line-to-neutral voltage of the electrical system. The excess voltage is a function of the
harmonic number at which the filter is tuned. When installing capacitors in a harmonic filter, make sure the voltages and currents are within the following limits established by IEEE/ANSI standards
regarding capacitor nameplate rating: rms voltage less than 110%; peak voltage less than 120%; rms current less than 180%; and reactive power output limited to 135%. Harmonic analysis verified that
the highest values of V_rms, V_peak, and I_rms across and through the filter bank capacitors were all below these standard ratings.
A facility's electrical operation often can be improved by obtaining information on the harmonics and the power factor being experienced using test instruments and by taking corrective measures based
on studies, via computer analysis. The studies will determine what will be needed to reduce voltage THD on the system, reduce current THD injected into the utility system, increase bus voltage, improve power factor, and eliminate power factor penalties. For the steel mill, the computer model created for the harmonic analysis included data to anticipate the effect of the new ladle furnace
installation. The filter bank for harmonic suppression and PF correction was designed to handle future loads as well as correct existing problems. Since nonlinear loads are typical in industrial
facilities today, properly specified and installed harmonic suppression filters are an effective way to reduce harmonics and improve plant power factor.
Beware of harmonics
When loads are linear (such as conventional induction motors), the voltage and current are essentially sine waves, and a form of power factor called displacement power factor (DPF) is present. The
ratio of useful working current to total current in the line, or the ratio of real power to apparent power, equals DPF. The use of certain AC equipment, primarily induction motors, requires power
lines to carry more current than is actually needed to do a specific job. If properly applied, capacitors will supply a sufficient amount of leading current to cancel out the lagging current required
by the inductive load. Capacitors thus reduce the total current flowing through the electrical system, improving system efficiency.
Today however, many electrical systems also have harmonic currents on their lines. Harmonics are caused by non-linear or pulsed loads (such as electronic power supplies), and their current causes the
apparent power to exceed the active power by a substantial amount. In these situations, the form of power factor present is called distortion power factor. The sum of the displacement and distortion
power factors is the total power factor (TPF).
For linear loads, you can make measurements to determine displacement power factor with a number of handheld instruments. These instruments can measure kilowatts (kW) and kilo-volt-amperes (kVA), and
some can directly read power factor (PF). When harmonics are present, meters with true rms capability must be used to accurately account for the total cur rent (the current at the fundamental 60 Hz
and the harmonic currents) to determine TPF. Also, it's advisable to read the true rms value of the voltage, since harmonic currents may cause voltage waveform distortion in some systems.
Harmonic currents in an electrical system have frequencies that are multiples of the fundamental 60 Hz current. Thus a 5th harmonic has a frequency of 300 Hz (for a 50 Hz system a 5th harmonic would
be 250 Hz). When the total current, including all the harmonic currents on the line, is used in determining the apparent power (kVA) and the active power (kW), then the TPF is equal to the rms values
of kW divided by kVA.
Note that harmonics do not usually show up in kW; this is the reason harmonics tend to reduce TPF. In an electrical system, the harmonic currents caused by nonlinear loads may cause TPF to be low
(.60 to .70) while the DPF could be relatively high (.90 to .95). Because of the abundance of non-linear loads now being placed on lines, you should look at PF as being the total power factor.
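To make the relationship concrete (an editorial addition using the standard approximation, which neglects voltage distortion): if THD-I denotes the total harmonic distortion of the current, then

$$\mathrm{TPF} = \frac{kW}{kVA} \approx \mathrm{DPF} \times \frac{1}{\sqrt{1 + \mathrm{THD}_I^2}}$$

For example, a DPF of 0.95 combined with 80% current THD gives a TPF of about 0.95/1.28, or roughly 0.74, which is the kind of gap described above.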
Be careful when adding pure capacitance. In the past, when the electrical system had a low PF, the usual procedure was to add "pure" capacitance (installing capacitors without reactors, which are
used to avoid tuned circuits and to reduce harmonics). That was done when all, or almost all, of the loads had a 60 Hz sinusoidal signature. Today, adding "pure" capacitance to correct a low PF
situation may cause problems due to harmonics in the electrical system. Because the impedance of capacitors decreases as the system frequency increases, and as harmonic currents are multiples of the
fundamental 60 Hz current, capacitors become "sinks" that attract high-frequency currents, causing possible overheating and early failure. The cure is to install filters (a combination of capacitors,
reactors, and resistors with specific design criteria) that trap the harmonics. These will improve power factor the same way as a capacitor does alone but, with the use of reactors and resistors,
will reduce the flow of harmonic currents in the facility's power system as well.
Adding "pure capacitance" may create other serious problems when harmonics are present. Capacitance and inductance in an electrical system can form a "tuned" circuit where the current is resonating
at a specific frequency. This is the frequency at which the capacitive reactance equals the inductive reactance. If this circuit is exposed to an "exciting" harmonic (which is at, or close to, the
resonating frequency and having sufficient amplitude), the current in the circuit will oscillate, causing a circulating current that could be much greater than normal. The resultant circulating
current can also produce extreme voltage distortion across all circuit elements. The high current flow can blow fuses, damage components, and reflect an excessively high harmonic level back into the
entire facility. Harmonic currents negatively impact motors and transformers as well because the currents can cause overheating.
When such resonance is a possibility, there are a number of system modifications that you should consider to lessen or effectively eliminate the problem. An obvious one is to shift the resonant
frequency so that it does not coincide with an exciting harmonic. This can be done by changing the capacitance (by adding or removing capacitors from the capacitor bank), or by relocating the
capacitor bank to change the amount of source inductance that is in parallel with the capacitance. Sometimes, if an existing capacitor bank is able to withstand the additional duty associated with
serving as a harmonic filter, the capacitor bank can be modified into a filter bank by installing appropriate rated reactors and resistors. If the resonant frequency cannot be changed, filters can be
added to reduce the most troublesome harmonics. However, you must make sure that the additional filters do not cause resonance at a lower frequency.
Installing adjustable capacitor banks and harmonic filters. Some facilities install capacitor banks that are designed to correct low PF at different kVA load levels to match a facility's electrical
system operation. The total capacitor bank installation might be split into four steps of PF correction, with the first step turned on at 25% of plant load, the second step at 50%, and so on.
Automatic controls are supplied with the adjustable bank to determine when the switching of the steps will occur. Each step must have an independent switching device.
When switching capacitors to add or reduce the units connected to a bus, it's important for you to recognize that changes in capacitance on the electrical system introduce the possibility of causing
undesirable resonance. As such, this PF correction technique must be studied carefully. Such a study usually includes testing to verify the existing harmonics in the system and the existing power
factor, an investigation of the electrical system to determine the ratings and characteristics of all the equipment and all the conductors, and then an engineering or computer analysis using harmonic
analysis software.
To mitigate harmonics that are present in a facility's electrical system, harmonic filters can be designed in a similar fashion to that of a capacitor bank. First, you must test for harmonics in the
electrical system to determine their order and magnitude. And, resonant conditions must be checked using harmonic analysis software before the filter bank is designed. Such a design usually
incorporates different combinations of filters (each filter to dissipate specific harmonics at their respective frequencies) that must be placed in and out of service in the proper sequence to avoid
problems. The normal sequence begins with the lower order filters first, and then adding the higher order filters. The sequence is reversed when filters are removed from service. This sequencing is
necessary to prevent parallel resonant conditions (which can amplify lower frequency harmonics) that can be caused by the higher frequency filters.
If capacitors are going to be added to a system seeing known non-linear loads, an engineering or computer study should be completed prior to the application of the capacitors. At times, the
application of PF capacitors can result in a tuned circuit that can cause substantial harm to an electrical system. Engineering or computer studies (the latter using harmonic software) can indicate
if filter banks are necessary for avoiding such problems and ensure proper ratings of the bank. These studies consist of doing a harmonic analysis of a facility's electrical system. To conduct such
an analysis, an investigation of the existing electrical system is necessary (determination and characteristics of all loads and conductors will be needed), and information must be obtained on all
new loads being planned. Prior testing of a facility's electrical system via suitable instrumentation to verify existing harmonics, PF, 24 hr load profiles, etc., would be very helpful in planning
low PF and harmonic remedies. You also should establish a budget for such testing.
Determining harmonic sources. Harmonics produced by scrap metal and ladle furnaces vary due to the changing arc length over the total heat period. The amount of harmonic generation is determined by
the furnace type. Scrap metal furnaces predominantly generate a 3rd harmonic voltage and produce a very erratic voltage total harmonic distortion (THD) on the bus. Ladle furnaces predominantly
generate 3rd and 5th harmonic voltages and produce a more consistent THD. Not only are "odd" harmonics (3rd, 5th, 7th, etc.) produced, "even" harmonics (2nd, 4th, 6th, etc.) are also present in power
systems that include furnace operation, because erratic arcing behavior yields unequal current conduction on the positive and negative half cycles. Typical upper limits for the harmonic
components of the arc voltage for both scrap metal and ladle furnaces are given in the accompanying table.
A harmonic computer analysis of arc furnace operation includes a harmonic voltage source that describes the harmonics produced by the arc furnace. An arc furnace load can be represented as a harmonic
voltage source in series with some lead impedance, in this case, the impedance of the secondary cables and electrodes in the arc furnace. The lead impedance is not a negligible component and should
be included in the simulation models. The harmonic study should also include the operation of the motor drive systems, if they are served by the same electrical source. The harmonic spectra
produced by 6- and 12-pulse drives are related to the pulse number of the drive.
Energizing of scrap metal and ladle furnace transformers is another harmonic current producing source. The dynamic inrush current waveform associated with transformer energizing operation includes
even and odd harmonics that decay with time until the transformer magnetizing current reaches a steady-state condition. Along with the fundamental current, the most predominant harmonics during
transformer energization are the 2nd, 3rd, 4th, and 5th. These harmonics normally do not cause problems unless the system is sharply resonant at one of the predominant harmonic frequencies produced
by the inrush current. Then, the transformer energization will excite the system, causing voltage distortion that will affect the energizing transformer's inrush current, producing more harmonic
currents and causing further distortion. This interaction can produce high values of rms and peak voltages that can degrade or damage equipment and lead to premature equipment failures.
Effects of capacitors. Capacitors can withstand a reasonable amount of harmonics. However, they may detrimentally interact with harmonic producing loads in a steel plant, like arc furnaces or rolling
mills. The result can be magnification of the voltage and current distortion. The increase in the distortion is usually due to a parallel resonant condition. This condition exists when the electrical
source's impedance (inductive reactance) is equal to the capacitor bank's impedance (capacitive reactance) at a common frequency. This condition produces a tuned circuit, causing an extreme voltage
at that frequency.
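A rough screening calculation for this condition (an editorial sketch; the input figures below are illustrative, not the mill's actual data) estimates the parallel resonant harmonic order as the square root of the bus short-circuit MVA divided by the capacitor bank Mvar:

import math

mva_short_circuit = 250.0   # utility short-circuit MVA at the bus (assumed)
mvar_capacitors = 10.0      # installed capacitor bank Mvar (assumed)

h = math.sqrt(mva_short_circuit / mvar_capacitors)
print(f"parallel resonance near harmonic order {h:.1f}")   # ~5.0, i.e. the 5th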
A good indication of excessive harmonics at a capacitor bank is an increase in the number of blown fuses and eventual failure of capacitors. When fuses blow in a capacitor bank, the parallel resonant
frequency will shift. The system will de-tune itself, shifting the parallel resonant point to a higher frequency, perhaps resulting in a stable operating condition. When blown fuses are replaced,
problems could reoccur, returning the system to the original parallel resonant frequency that caused the initial fuse operation.
A system that has a sharp parallel resonant peak at the 5th harmonic will produce approximately 360V of 5th harmonic voltage per ampere of 5th harmonic current injected into the system. Since this
harmonic current resulting from the operation of the arc furnace is significant, the resonant current could amplify the voltage at the capacitor bank, leading to capacitor failures, fuse operations,
or arrester operations.
[Table omitted: typical harmonic arc voltages produced by scrap metal and ladle furnaces.]
Steel mills, because of arc furnace operation and rolling mill loads, are particularly plagued with harmonics and PF problems. These loads operate at PFs that may result in penalties and lower bus
voltages. The nonlinear characteristics of furnace arcs and rolling mill drives can generate significant harmonic currents that are passed into a plant's electrical system and onto utility lines.
Harmonic currents are produced at scrap metal or ladle furnaces when the harmonic voltages from the arc are impressed across the electrodes, and the load current passes through the impedance of the
leads and furnace transformer. These harmonic currents are injected back into the electrical system, and usually do not cause problems unless the system is sharply resonant at one of the predominant
harmonic frequencies. Then, the harmonic current can excite the resonant circuit, producing high values of rms and peak voltages. This can cause equipment damage or degradation, eventually leading to
equipment failure. Severe voltage distortion can disturb electronic power supplies, such as drive systems, and may interfere with control systems.
[Figure ILLUSTRATION OMITTED]
Capacitor. A device for storing an electric charge. It is made of tightly wrapped layers of very thin metallic plates, separated by a dielectric material. When line voltage is applied at the
terminals, capacitive reactance is introduced into the circuit. A shunt capacitor produces a leading current. Capacitance (C) is measured in farads, or more usually, microfarads (μF).
Filter. A combination of capacitors, inductors, and resistors that are rated and configured in such a way as to reduce harmonic current at a certain frequency while exhibiting minimal impedance to
the fundamental 60 Hz current. A filter acts to shunt harmonic currents, dissipating them, while at the same time being able to provide reactive power at the fundamental frequency to correct low
power factor.
Reactor (inductor). A coil (with or without an iron core) that stores energy when AC current flows through it and provides inductive reactance in a circuit. A shunt reactor produces a lagging
current. Inductance (L) is measured in henrys, or more usually, millihenrys (mH).
RMS (or root-mean-square). The square root of the mean of the sum of the squares of instantaneous amplitudes of a wave form; the effective value of an AC current that produces heat equal to a DC
current of the same value. The rms value of a cycling function, when harmonics are present, is a more accurate value of the function compared with an average value of that function. | {"url":"http://ecmweb.com/content/steel-mill-tackles-power-quality-and-power-factor-problems?page=1","timestamp":"2014-04-18T11:41:19Z","content_type":null,"content_length":"149763","record_id":"<urn:uuid:7823862b-a3fe-46bd-b876-7475c44cf242>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
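In symbols (an editorial addition consistent with the definition above):

$$V_{\mathrm{rms}} = \sqrt{\frac{1}{T}\int_0^T v(t)^2\,dt} = \sqrt{V_1^2 + V_2^2 + V_3^2 + \cdots}$$

where $V_h$ is the rms magnitude of the h-th harmonic component. The right-hand form shows why a true-rms meter, rather than an averaging meter, is needed to read distorted waveforms correctly.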
A disappearing number
Issue 49
December 2008
One morning early in 1913, he found, among the letters on his breakfast table, a large untidy envelope decorated with Indian stamps. When he opened it, he found sheets of paper by no means fresh, on
which, in a non-English holograph, were line after line of symbols. Hardy glanced at them without enthusiasm. He was by this time, at the age of thirty-six, a world famous mathematician: and world
famous mathematicians, he had already discovered, are unusually exposed to cranks. He was accustomed to receiving manuscripts from strangers, proving the prophetic wisdom of the Great Pyramid, the
revelations of the Elders of Zion, or the cryptograms that Bacon has inserted in the plays of the so-called Shakespeare.
So Hardy felt, more than anything, bored. He glanced at the letter, written in halting English, signed by an unknown Indian, asking him to give an opinion of these mathematical discoveries. The
script appeared to consist of theorems, most of them wild or fantastic looking, one or two already well-known, laid out as though they were original. There were no proofs of any kind. Hardy was not
only bored, but irritated. It seemed like a curious kind of fraud. He put the manuscript aside, and went on with his day's routine...
That particular day, though, while the timetable wasn't altered, internally things were not going according to plan. At the back of his mind, getting in the way of his complete pleasure in his game,
the Indian manuscript nagged away. Wild theorems. Theorems such as he had never seen before, nor imagined. A fraud of genius? A question was forming itself in his mind. As it was Hardy's mind, the
question was forming itself with epigrammatic clarity: is a fraud of genius more probable than an unknown mathematician of genius? Clearly the answer was no. Back in his rooms in Trinity, he had
another look at the script. He sent word to Littlewood that they must have a discussion after hall...
Apparently it did not take them long. Before midnight they knew, and knew for certain. The writer of these manuscripts was a man of genius.
"A Mathematician's Apology" by G. H. Hardy, from the Foreword by C. P. Snow
An image of Ramanujan projected over David Annen as Hardy in A disappearing number. Photo: Robbie Jack.
So C. P. Snow described, in his foreword to G. H. Hardy's A Mathematician's Apology, the day Hardy received his first letter from Srinivasa Ramanujan, the unknown Indian mathematician with whom he
would go on to form one of the most remarkable collaborations in the history of mathematics. It was this scene, and the poetic descriptions of mathematics by Hardy in his book, that inspired Simon
McBurney (Artistic Director of Complicite) to create a show based on Ramanujan and Hardy's story, and the mathematics they created. But rather than a straight portrayal of the lives and relationship
of Hardy and Ramanujan, Complicite created a play (most recently performed at the Barbican in London) intertwining the past and the present, with mathematical ideas and concepts woven into the story.
At the heart of the play are two relationships: one set in the past, the mathematical collaboration between G.H. Hardy and Srinivasa Ramanujan, and the other set in modern day, a love story between a
mathematician, Ruth, and her husband, Al. The depiction of Hardy and Ramanujan's extraordinary mathematical collaboration shows them as romantic heroes on a quest for truth, with all the passion,
hardships and joy that such a quest should involve. In the scenes of the mathematicians at work, Complicite manage to convey the incredible passion and creativity of mathematical research. At one
point, while Ramanujan and Hardy are shown working together at a blackboard, musical patterns created by Nitin Sawhney repeat over and over again, building to a climax that seems to evoke the frenzy
of creativity and climax of discovery mathematicians experience in their research work.
Hardy and Ramanujan weren't just mathematical collaborators, they were also great friends. Indeed, in his book Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work, Hardy said, "my
association with him was the one romantic incident in my life". And despite the reserved nature of Hardy's character, this personal relationship comes across strongly in the play. Unusually for
portrayals of mathematicians (who often appear in popular culture as cold, insane or friendless), the play rejects the common stereotypes of mathematicians. Instead, all the characters are portrayed
as human beings, experiencing the passions, desires, fears and frailties that we all do.
The second relationship also depicts a quest, but one that is more familiar: the personal quest for love, marriage and family. If the depiction of Hardy and Ramanujan is of two extraordinary heroes,
the depiction of the mathematician Ruth and her husband Al is one we are much more likely to identify with. Ruth's character shows great passion for her mathematical work, but it is the very personal
story of her relationship with Al, from meeting in a maths lecture, to marriage and trying for children, that forms the heart of the play.
Archimedes will be remembered when Aeschylus is forgotten, because languages die and mathematical ideas do not. "Immortality" may be a silly word, but probably a mathematician has the best chance of
whatever it may mean.
G. H. Hardy, "A Mathematician's Apology" p81
The play opens with Ruth giving a maths lecture (the same lecture where she meets Al for the first time) explaining the maths contained in Ramanujan's very first letter to Hardy. Among many results
in this letter was the following remarkable equation:

$$1 + 2 + 3 + 4 + \cdots = -\frac{1}{12}$$

At first sight this appears to be mathematical crazy talk, but as Hardy found on closer inspection, this equation begins to reveal Ramanujan's genius.

The series $1 + 2 + 3 + 4 + \cdots$ is a divergent series: the sum gets larger and larger the more terms you add. In fact you could say that

$$1 + 2 + 3 + 4 + \cdots = \infty$$

Assigning a finite value to this infinite series does not seem to make sense at all. But Hardy recognised that when Ramanujan wrote $1 + 2 + 3 + 4 + \cdots = -1/12$, he was really describing the value at $s = -1$ of Euler's zeta function

$$\zeta(s) = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \cdots$$

Now this was not strictly correct. Euler's zeta function is only defined when the number $s$ is greater than 1, as it is only for these values of $s$ that the series converges to a finite number.

However, there is another function, known as the Riemann zeta function, which agrees with Euler's zeta function for all $s > 1$ but is also defined for (almost) all other values of $s$. The formulas used to express the Riemann zeta function for all values of $s$ include a functional equation, that is, an equation that describes the value of the Riemann zeta function at the point $1 - s$ in terms of its value at $s$:

$$\zeta(1 - s) = 2(2\pi)^{-s}\cos\left(\frac{\pi s}{2}\right)\Gamma(s)\,\zeta(s)$$

Setting $s = 2$ gives the value of the Riemann zeta function at the point $-1$:

$$\zeta(-1) = 2(2\pi)^{-2}\cos(\pi)\,\Gamma(2)\,\zeta(2)$$

Most of the terms in the functional equation are familiar, perhaps apart from the Gamma function $\Gamma(s)$, which for a whole number $s$ equals $(s-1)!$, the factorial of $s - 1$; in particular $\Gamma(2) = 1! = 1$. All that remains, then, is the value of the Riemann zeta function at the point $2$:

$$\zeta(2) = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots$$

The problem of finding the value this series converges to is known as the Basel problem. Euler found the solution in 1735, showing that

$$\zeta(2) = \frac{\pi^2}{6}$$

(You can read a remarkable proof of this in the Plus article An infinite series of surprises.)

So we can now calculate the value of the Riemann zeta function at $-1$:

$$\zeta(-1) = 2(2\pi)^{-2}\cos(\pi)\,\Gamma(2)\,\frac{\pi^2}{6} = \frac{2}{4\pi^2}\cdot(-1)\cdot\frac{\pi^2}{6} = -\frac{1}{12}$$
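As a quick editorial sanity check of this value, mpmath's zeta implements the analytic continuation being described:

from mpmath import mp, zeta

mp.dps = 25
print(zeta(-1))   # -0.08333333... = -1/12
print(zeta(2))    # 1.6449340... = pi^2/6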
This is exactly the answer Ramanujan gave in his letter to Hardy. This surprising result showed Hardy that Ramanujan had managed to derive the Riemann zeta function and its functional equation
correctly himself. In fact Ramanujan, almost entirely self-taught, had reinvented many other areas of Western mathematics, on his own. He was a mathematical genius with an intuitive connection to
mathematics. And Hardy displayed his own mathematical genius by seeing through Ramanujan's idiosyncratic notations to uncover the genius beneath.
Saskia Reeves as Ruth in A disappearing number. Photo: Tristram Kenton.
In the opening scene of A disappearing number the character of Ruth demonstrates this proof to the audience, immediately establishing a parallel between the mathematics lecture theatre and the
theatre we were sitting in watching the play. Communicating maths is a performance, and in doing so lecturers weave for their students a mathematical story. And here we were, as an audience, suddenly
finding ourselves in a sea of mathematics. The audience laughed nervously as the maths got more and more complex. You could feel the frisson of fear: were we really expected to understand what was
going on? In a way it felt like the company was playing with the fear of maths that many people (sadly) feel. Comfortable in the safe surrounds of a theatre, they were suddenly thrown into the
mathematical deep end. For a few minutes you could feel the audience's growing concern that they should be following the mathematical reasoning on the board. The tension was shortly broken when
another character stepped on stage and the play moved on, but it was an exciting introduction. It's particularly exciting to see "real" mathematics literally on stage.
A mathematician is working with his own mathematical reality. 317 is prime, not because we think so, or because our minds are shaped in one way rather than another, but because it is so, because
mathematical reality is built that way.
G. H. Hardy, "A Mathematician's Apology" pp 129-130
One of the most intriguing elements in the play is the use of mathematical concepts to express human emotions and the human condition. In the post-show talk, Simon McBurney explained that the play
explores the connection between mathematics and real life. But rather than focussing on the direct connection we more usually talk about in Plus — how maths is used to describe the reality of our
physical world — the play explores how mathematical concepts can also describe emotional experiences. One character uses infinity to describe the place they'll go when they die. Partitions of numbers
are enacted on stage by partitioning people or objects, echoing the physical separations of characters in the play at that time. And the physical decomposition of dead bodies is compared to the
decomposition of numbers into their prime factors, prime numbers being the bones of mathematics.
One particularly lovely example is the use of the convergent infinite series

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = 1$$

as a metaphor for love, for marriage and for having children.

To see how this series converges we look at the partial sums, the sum of the series up to the $n$th term:

$$S_n = \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n}$$

As this is a finite sum, we can manipulate it as we would any algebraic equation. For example

$$2S_n = 1 + \frac{1}{2} + \cdots + \frac{1}{2^{n-1}} = 1 + S_n - \frac{1}{2^n},$$

which gives

$$S_n = 1 - \frac{1}{2^n}$$

With this new expression of $S_n$ it is easy to see that as $n$ grows, $S_n$ gets closer and closer to 1: the series converges to 1.
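An editorial numerical illustration of the convergence just derived (plain Python, nothing assumed beyond the formula above):

for n in (1, 2, 5, 10, 20):
    s_n = sum(1 / 2**k for k in range(1, n + 1))
    print(n, s_n, 1 - 1 / 2**n)   # the last two columns agree and approach 1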
A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas... The mathematician's patterns, like the
painter's or the poet's, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way... It may be very hard to define mathematical beauty, but that is just as
true of beauty of any kind — we may not know quite what we mean by a beautiful poem, but that does not prevent us from recognizing one when we read it.
G. H. Hardy, "A Mathematician's Apology" pp 84-5
The content of the play was obviously mathematical, but even the structure of the play seemed to have a mathematical nature. Victoria Gould (who is the subject of this issue's career interview), a
member of Complicite who is also a mathematician said: "We wanted to somehow embody the maths in the show, within the music and the rhythms and the space. We wanted it to be a mathematical experience
as well as a theatrical one." The use of repetitions of the same scenes throughout the play, with each repetition giving more meaning, felt very much like the seeking of patterns that is an important
element of mathematics. The play doesn't have a linear narrative. Instead it moves forwards and backwards through time and space, sometimes showing memories and past events superimposed on the
present. Counting is used to evoke a feeling of moving forwards or backwards through time, and characters from different moments in history are positioned along a diagonal of the stage.
Shane Shambhu as Ramanujan and David Annen as Hardy in A disappearing number. Photo: Tristram Kenton.
It isn't surprising that it took a very unusual process to create such an unusual play. On and off over two years, the company played with the ideas and concepts to develop the show. They worked in a
well equipped studio, the actors working with musicians, including Nitin Sawhney who provided the original music for the show, visual artists, sound designers, dancers, and mathematicians including
Marcus du Sautoy. They explored the personal stories, as well as the mathematical concepts, and the images and emotions these all provoked.
The way that McBurney and Gould described this development process was very reminiscent of mathematical research. "What you get is an enormous amount of complete dross, but once in a while you know
that something beautiful has happened," explained Gould. "When something is really right, and beautiful, you know. When Simon [McBurney] makes theatre, he is looking for that moment, when all the
audience will come together." McBurney described mathematicians as being at the edge of what is knowable, and making the same kind of creative leaps of imagination as an artist. He recalled Hardy's
own description of mathematical research: that mathematicians know when they are going in the right direction when what they are doing is beautiful.
The resulting play is a wonderful account of mathematical lives, real and imagined, and of creating mathematics. It puts some very sophisticated mathematical concepts onstage, bringing mathematics to
an audience who otherwise might never have experienced it. But it also brings new perspectives to those with mathematical experience, including using mathematics as metaphor for the human condition.
For me the most inspiring and wonderful thing was seeing mathematics authentically portrayed in a dramatic setting. Mathematicians as real people, with real emotions. The mathematics was accurately
conveyed, as was the mathematical passion and creativity that is vital to mathematical research. As someone who has always felt very passionate about maths, and that the creative moments of making
mathematics are as exciting as any other creative discipline, it was fabulous to see this passion enacted on stage. Through the play, Complicite portrayed mathematicians that we could all identify
with. If their vision of mathematicians can enter the popular imagination, maybe some day young people might be just as likely to daydream of becoming a mathematical hero, as any other.
Further reading
• You can find more about A disappearing number, including interviews and film clips on Complicite's web site
• Victoria Gould talks to Plus about developing the play in this issue's career interview
• And you can read more about Ramanujan and Hardy on the MacTutor History of Mathematics archive, or in Hardy's book A Mathematician's Apology (1940).
Rachel Thomas
is co-editor of Plus.
Experimental Mathematics and Integer Relations
by Jonathan M. Borwein
The emergence of powerful mathematical computing environments, growing availability of correspondingly powerful (multi-processor) computers and the pervasive presence of the Internet allow
mathematicians to proceed heuristically and quasi-inductively. Its exciting consequences have been studied over the past ten years at the Centre for Experimental and Constructive Mathematics of
Simon Fraser University, Canada. They include several recent results connected with the Riemann Zeta Function.
Mathematicians increasingly use symbolic and numeric computation, visualisation tools, simulation and data mining. This is both problematic and challenging. For example, we mathematicians care more
about the reliability of our literature than other sciences. These new developments, however, have led to the role of proof in mathematics now being under siege.
Ten years ago I founded the Centre for Experimental and Constructive Mathematics (CECM) and wrote: At CECM we are interested in developing methods for exploiting mathematical computation as a tool
in the development of mathematical intuition, in hypotheses building, in the generation of symbolically assisted proofs, and in the construction of a flexible computer environment in which
researchers and research students can undertake such research. That is, in doing Experimental Mathematics. At present, the mathematical universe is unfolding much as anticipated.
Many of my favourite examples originate in between mathematical physics and number theory/analysis/knot theory and involve the ubiquitous Zeta Function, of Riemann hypothesis fame. They rely on the
use of Integer Relations Algorithms: A vector (x[1], x[2], ..., x[n]) of real numbers possesses an integer relation if there are integers a[i], not all zero, with
0 = a[1] x[1] + a[2] x[2] + ... + a[n] x[n].
The goal is to find the a[i] if such exist, and if not found, to obtain lower bounds on the size of possible a[i].
For n = 2, Euclid's algorithm gives the solution. The first general algorithm was found in 1977 by Ferguson and Forcade, followed by Lenstra, Lenstra and Lovász (LLL) in 1982, implemented in Maple
and Mathematica, and my favourite PSLQ (Partial Sums using matrix LQ decomposition) in 1991, with a parallel version in 1999. Recently, J. Dongarra and F. Sullivan ranked Integer Relation Detection
among the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century. (Some others: Monte Carlo, Simplex, FFT.)
A CECM interface allows one to find relations and explore the underlying algorithms. Three examples are presented in the Boxes: 1. Algebraicness, 2. Riemann Zeta Function, 3. Binomial Sums.
1. Algebraicness

An immediate, very useful application is to determine if a number a is algebraic. We compute a to sufficiently high precision (up to n^2 digits) and apply LLL to the vector (1, a, a^2, ..., a^(n-1)). Solution integers a[i] are the coefficients of a polynomial likely satisfied by a. If no relation is found, firm exclusion bounds are obtained.
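A hedged modern illustration of this test (editorial addition; mpmath.pslq is a real PSLQ implementation, while the example number a and the working precision are chosen purely for demonstration):

from mpmath import mp, sqrt, pslq

mp.dps = 50                       # work to 50 significant digits
a = sqrt(2) + sqrt(3)             # suspected algebraic number
vec = [a**k for k in range(5)]    # (1, a, a^2, a^3, a^4)
print(pslq(vec, maxcoeff=10**6))
# -> [1, 0, -10, 0, 1] (up to sign), i.e. 1 - 10a^2 + a^4 = 0,
#    so a is a root of x^4 - 10x^2 + 1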
2. Riemann Zeta Function
The Riemann Zeta Function is defined, for s > 1, by

$$\zeta(s) = \sum_{n \ge 1} \frac{1}{n^s}$$

Thanks to Apéry (1976) it is known that

$$\zeta(2) = 3\sum_{k \ge 1}\frac{1}{k^2\binom{2k}{k}}, \qquad \zeta(3) = \frac{5}{2}\sum_{k \ge 1}\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}, \qquad \zeta(4) = \frac{36}{17}\sum_{k \ge 1}\frac{1}{k^4\binom{2k}{k}}$$

These suggest that

$$Z_5 := \zeta(5)\bigg/\sum_{k \ge 1}\frac{(-1)^{k-1}}{k^5\binom{2k}{k}}$$

is a simple rational or algebraic number.

Sadly, or happily, PSLQ tells us that if Z_5 satisfies a polynomial of degree ≤ 25, the Euclidean norm of its coefficients exceeds 2 x 10^37. And the order and norm can be extended.
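As an editorial numerical check of the Apéry-type sums above (mpmath assumed; truncating at 100 terms is ample, since the terms shrink roughly like 4^-k):

from mpmath import mp, mpf, binomial, zeta

mp.dps = 30
s = sum((-1)**(k - 1) / (k**3 * binomial(2*k, k)) for k in range(1, 100))
print(mpf(5)/2 * s)   # 1.20205690315959...
print(zeta(3))        # agrees: Apery's constant zeta(3)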
Results in Box 3 were obtained via Gosper's (Wilf-Zeilberger type) telescoping algorithm (see also Paule's contribution in this issue). Identity (3) was recently proved by Almkvist and Granville
(Experimental Math, 1999), thus finishing the proof of (2) and perhaps shedding light on the irrationality of ζ(5). Note that ζ(2N+1) is not proven irrational for N > 1, though it is now known that at least one of ζ(5), ζ(7), ζ(9), ζ(11) is irrational.
Many other similar triumphs, and an equal number of failures are described in the references given below, in fields as diverse as quantum field theory, nuclear magnetic resonancing, probability,
complexity, algorithm design, and coding theory:
• The role of proof in mathematics: J.M. Borwein and P.B. Borwein, Computing in Science & Engineering, Vol.3, 48-53 (2001): http://www.cecm.sfu.ca/preprints, and http://www.cecm.sfu.ca/personal/
• Experimental Mathematics:
D.H. Bailey and J.M. Borwein in: Mathematics Unlimited 2001 and Beyond, B. Engquist and W. Schmid (Eds.), Springer-Verlag, Vol.1, 51-66 (2000).
• Top 10 algorithms:
Computing in Science & Engineering, Vol.2, 22-23 (2000): http://www.cecm.sfu.ca/personal/jborwein/algorithms.html
• Integer Relations: http://www.cecm.sfu.ca/projects/IntegerRelations/
Please contact:
Jonathan M. Borwein,
Simon Fraser University, Canada
Tel: +604 291 3070/5617
E-mail: jborwein@cecm.sfu.ca | {"url":"http://www.ercim.eu/publication/Ercim_News/enw50/borwein.html","timestamp":"2014-04-20T13:54:34Z","content_type":null,"content_length":"9692","record_id":"<urn:uuid:b96b265f-3997-4ebf-95cb-f737e0a80cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 2008 [00623]
Re: notation using # with exponents and &
• To: mathgroup at smc.vnet.net
• Subject: [mg93160] Re: notation using # with exponents and &
• From: AES <siegman at stanford.edu>
• Date: Wed, 29 Oct 2008 05:49:56 -0500 (EST)
• Organization: Stanford University
• References: <ge6nik$ll4$1@smc.vnet.net>
In article <ge6nik$ll4$1 at smc.vnet.net>,
Bill Rowe <readnews at sbcglobal.net> wrote:
> I don't see that what Mathematica defines as a pure function is
> inconsistent with the Wikipedia definition. For example,
> f = #^2&
> Always returns the same value for the same argument and has no
> I/O side effect, meeting both requirements for Wikipedia's
> definition of a pure function.
I guess I get confused, or led astray, by exactly how the words in the
Wiki definition are to be interpreted. For example
f = #^x &
seems to me to be a function with an argument (the "#"), and a -- what
shall we call it? -- a "parameter" (the "x"); and this construct returns
_different_ values depending on how the value of x is pre-set, or
changed, before calling it.
In other words, it does _not_ always return the same value for the same argument.
I can see #1^#2 & as evidently a pure function. But if the function
definition also contains parameters that can be varied, is it still a
pure function? And if so, what does "pure" really mean? | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Oct/msg00623.html","timestamp":"2014-04-18T21:14:06Z","content_type":null,"content_length":"26327","record_id":"<urn:uuid:5045453a-dfa6-4d4a-b0a5-df255af78d75>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
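An editorial aside: the behaviour described here is exactly that of a free (non-argument) variable, the same issue a closure over a global raises in any language. A Python stand-in for f = #^x & (illustrative, not Mathematica):

x = 3
f = lambda t: t ** x    # x is free, not an argument
print(f(2))             # 8
x = 4
print(f(2))             # 16 -- same argument, different result

So by the Wikipedia definition such an f is not pure unless the free variable is held fixed.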
how to prove?
June 7th 2009, 06:22 AM
how to prove?
I realize that this is a quite simple proof. I can reason the answer but cannot prove it.
problem Statement:
Say c is a real number and has the property Q if x <= y whenever x, y in real numbers with cx <= cy. Which real numbers have property Q?
June 7th 2009, 06:41 AM
I frankly don’t see what there is to prove here.
The number property Q is clearly membership in the positive real numbers.
Are you to prove this: $x \leqslant y\;\& \,c > 0\, \Rightarrow \,cx \leqslant cy$ ?
June 7th 2009, 06:49 AM
The problem statement is exactly as I have written it. Perhaps it is not a proof and that simply the answer would suffice. My other rationale is that we need to use a certain property that we
know is true, i.e. (x-y)^2 > 0 for x != y. If you, Plato, cannot formalize a proof for this, then I am going to quit feeling bad about it. My only problem is that I would expect the prof to say
"prove it".
June 7th 2009, 07:04 AM
Once again, as I said above, I do not see what there is to formalize other than what I posted.
BTW: I think the statement is very poorly written.
June 7th 2009, 09:41 AM
Property Q says: whenever $cx \leq cy$, we must have $x \leq y$. So for each $c$ we check whether $cx \leq cy \Rightarrow x \leq y$ holds for all $x,y \in \mathbb{R}$.
Case 1: $c\in \mathbb{R}^+$
$cx \leq cy \Rightarrow x \leq y$, since dividing both sides by $c>0$ preserves the inequality. So every positive $c$ has property Q.
Case 2: $c=0$
Here $cx \leq cy$ reads $0 \leq 0$, which is true for every pair $x,y$; take $x=1$, $y=0$ and the conclusion $x \leq y$ fails. So $c=0$ does not have property Q.
Case 3: $c\in \mathbb{R}^-$
Take $x=1$, $y=0$. Then $cx = c \leq 0 = cy$, yet $x \leq y$ is false. So no negative $c$ has property Q.
Thus the numbers with property Q are exactly the positive reals, as Plato observed.
[SciPy-dev] Generic polynomials class (was Re: Volunteer for Scipy Project)
Charles R Harris charlesr.harris@gmail....
Fri Oct 16 21:24:21 CDT 2009
On Fri, Oct 16, 2009 at 1:58 PM, Anne Archibald
> 2009/10/15 Fernando Perez <fperez.net@gmail.com>:
> > On Thu, Oct 15, 2009 at 6:36 PM, Charles R Harris
> > <charlesr.harris@gmail.com> wrote:
> >>
> >> How did you build the analysis objects? Part of what we have been
> discussing
> >> it the easiest way to expose a consistent interface while reusing the
> >> maximum amount of code.
> >
> > See
> >
> > http://github.com/fperez/nitime/blob/master/nitime/timeseries.py
> >
> > starting around line 504. The analyzers store a reference to a time
> > series object and then expose properties (that are replaced in-place
> > by a static attribute once computed, as explained in the paper) that
> > make the library calls. All the actual numerics are in
> >
> > http://github.com/fperez/nitime/blob/master/nitime/algorithms.py
> >
> > I'm not saying this will exactly solve your current design issues, but
> > it may provide some useful ideas to think about.
> Well, I think I'm convinced of the necessity for a low-level
> procedural interface, so I've set up a draft one in git. There's some
> question of just how bare-bones it should really be - for example
> should it include an add function, since that's just using the
> coefficient-extension function then adding? What about multiplying,
> which is the same thing, though this is not obvious?
It should definitely include add. For my personal use the following tend to
be most important:
Stuff I use:
- evaluate
- differentiate
- integrate
- leastsquares
- zeros
Stuff I don't use much but should be there:
- add
- subtract
- multiply
- divide
Stuff for completeness:
- powers
- weight function
- one
- zero
- X
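For context, a hedged sketch of how this wish-list reads against the numpy.polynomial package that eventually grew out of this discussion (the modern API shown here did not exist when this mail was written):

import numpy as np
from numpy.polynomial import Chebyshev

x = np.linspace(-1, 1, 200)
y = np.cos(3 * x) + 0.01 * np.random.randn(200)

c = Chebyshev.fit(x, y, deg=8)   # leastsquares
print(c(0.5))                    # evaluate
print(c.deriv()(0.5))            # differentiate
print(c.integ()(0.5))            # integrate
print(c.roots())                 # zeros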
> As it stands I've only included implementations of functions with
> nontrivial algorithms.
Include them all, even the trivial ones. It may look silly, but more often
than not it will save hassles down the line. Remember, you aren't doing this
for you and *your* classes, you are doing it for someone else and *their*
classes. Why should they have to worry about how you do addition? If nothing
else the meaning of the inputs will be documented.
> This has the advantage that, realistically,
> these functions will not be shared between different polynomial
> representations. If I start including more straight-forward functions,
> I will need to figure out how to share the implementation of "add"
> between different representations if I want to avoid it being
> boilerplate code for all representations.
Write a polyutils module and define:
my_utterly_trivial_addition_function_that_bores_me = polyutils.add
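An editorial sketch of that shared-utility idea (the names echo what later shipped in numpy.polynomial, but this code is illustrative, not that implementation):

import numpy as np

def polyadd(c1, c2):
    """Add two coefficient arrays, zero-extending the shorter one."""
    c1 = np.atleast_1d(np.asarray(c1))
    c2 = np.atleast_1d(np.asarray(c2))
    if c1.size < c2.size:
        c1, c2 = c2, c1
    out = c1.astype(np.result_type(c1, c2), copy=True)
    out[:c2.size] += c2
    return out

print(polyadd([1, 2], [3, 4, 5]))   # -> [4 6 5]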
> If we're implementing the heavy lifting in a procedural interface,
> then the object implementations will just be plumbing (as I see your
> analyzer objects are). So on the one hand there's not much code to be
> shared, but on the other it's all boilerplate that would benefit from
> code sharing.
This is where the polynomial code differs from the nipy code: they have
analyser objects with different interfaces that use the same basic functions
on different datasets. We want to have a common "analyser" interface that
uses different basic function.
> I think, though, that the next important decision has nothing direct
> to do with code sharing; we need to settle on what polynomial objects
> will look like to users.
> Some things are clear:
> * Polynomial objects will be immutable.
> * Polynomial objects will support + - * / % ** divmod and __call__
> * Polynomial objects will be able to report their roots
> * Polynomial objects will be able to return polynomials representing
> their derivatives and antiderivatives
> * Some polynomial objects will have extra features, like the ability
> to trim small coefficients from Chebyshev polynomials
> * We need some method of converting from one representation to another
Since the Chebyshev class can operate with objects, evaluating the Chebyshev
series for the "X" polynomial object does the conversion. This works for
all the graded polynomial classes and might be made to work with a
Lagrange basis. Hopefully, that is something that isn't used a whole lot
anyway. Evaluation for numeric arguments needs to be reasonably
efficient; evaluation for objects less so.
> * We should have convenience objects zero, one, and X, to build
> polynomials.
> * There should be a least-squares polynomial fitting function.
Maybe a standard interval also?
> * Some polynomials will have functions that generate them (e.g.
> Chebyshev series for a function)
Isn't this a special case of fitting?
> * We need at least power basis, Chebyshev, and Lagrange polynomials.
> * Ultimately the polynomial class should be able to replace
> KroghInterpolator and BarycentricInterpolator and be stably returned
> from the orthogonal polynomial routines
We probably want trigonometric polynomial class(es) also. I use barycentric
forms of these for the complex remez algorithm. They are just streamlined
versions of a lagrange basis with the sample points on the unit circle in
the complex plane or the two-sheeted Riemann surface equivalent.
> Less clear:
> * Should polynomials store a degree attribute or should it be inferred
> from the size of their coefficient array? (Sometimes it's handy to
> work with degree-inflated polynomials, but deflating their degree is a
> nontrivial operation.)
I choose the latter because it is more dynamic. The coefficients are public
attributes and if some fool wants to assign a different array to them I
figure thing should still work. Computing it is unlikely to be a performance
bottleneck in any case.
* Should we try to accommodate complex variables? Complex coefficients?
Complex coefficients, definitely. I might not use them, but no doubt someone
will want them. If there are stability problems, that is part of the game
and maybe the user can offer improvements. Caveat emptor.
> * Should we try to accommodate vector values?
If you mean evaluate at a vector of points, then yes, it is the only way to
overcome function calling overhead if you need a ton of values.
> * Should polynomials have a method to evaluate derivatives and
> antiderivatives directly without constructing the
> derivative/antiderivative object first?
I think that is unnecessary.
> * Do we need Lagrange-Hermite (some derivatives specified as well as
> values) polynomials? (KroghInterpolator implements these)
Don't know
> * How should code that works for any polynomial type (e.g.
> multiplication by a Lagrange polynomial) identify that an object it
> has been given is some sort of polynomial? Callable and presence of a
> degree attribute? Subclass of Polynomial?
I don't think the classes need to interoperate, in fact I would discourage
it because it complicates everything for a need I don't see. But if there
are such functions, then deriving everything from a dummy Polynomial class
would be the best approach IMHO. We could do that anyway to leave the door
open to future changes.
class Polynomial: pass
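A hedged sketch of that dummy-base approach (editorial; numpy.polynomial.polynomial.polyadd is a real function, while the class wiring is an illustrative assumption, not scipy or numpy code):

import numpy as np
from numpy.polynomial import polynomial as P

class Polynomial:
    _add = None   # each subclass binds its basis-specific coefficient adder

    def __init__(self, coef):
        self.coef = np.atleast_1d(np.asarray(coef))

    def __add__(self, other):
        if type(other) is not type(self):
            return NotImplemented
        return type(self)(type(self)._add(self.coef, other.coef))

class PowerSeries(Polynomial):
    _add = staticmethod(P.polyadd)

print((PowerSeries([1, 2]) + PowerSeries([3, 4, 5])).coef)   # [4. 6. 5.]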
> * How should users implement their own polynomial types so they
> interoperate with scipy's built-in ones?
What do you mean by interoperate?
> Still debated:
> * I think we need a first-class object to represent the basis used to
> express a polynomial. Charles Harris disagrees; he wants to encode
> that information in the class of the polynomial plus an interval
> stored in the polynomial. Some different scheme would necessarily be
> used for Lagrange polynomials.
Things are encoded in 1) an array of coefficients, 2) an interval, 3) the base functions, and 4) the class name. In the Nipy model, 1) and 2) would be
base functions and 4) the class name. In the Nipy model, 1) and 2) would be
the data containers.
> * How should the zero polynomial be represented? I think its
> coefficient array should be np.zeros(0); Charles Harris prefers
> np.zeros(1).
You know where I stand ;) For a Lagrange basis of indeterminate degree I
think it should be an array of zeros of the appropriate dimension.
Trigonometric Identities
June 3rd 2011, 09:28 AM
Trigonometric Identities
Hi guys.
So, twenty hours of my life have passed. I'm never going to get them back, but I can prevent another twenty hours elapsing and that cold, hard feeling of my head hitting the desk one more time.
Here's why:
$2\csc 2x \cos 2x = \cot x - \tan x$
Obviously. So I have to show that. My initial thinking was to convert all the terms into sin and cos, and then using a couple of trigonometric identities left me with:
$2 \cdot \frac{1}{2\cos x \sin x} \left(\cos^2 x - \sin^2 x\right) = \cot x - \tan x$
Because of the sum on the right hand side of the equation, I'm not entirely sure what I should do with it. I've tried various different techniques but I've ultimately not been able to get an answer.
I've come close to $\cos^2 x + \sin^2 x = 1$, so that's what I've been aiming for.
Can anyone nudge me in the right direction? I've also tried converting all terms into tan variants but that has only succeeded in driving me into another brick wall.
Thanks (Headbang)
June 3rd 2011, 09:42 AM
$2\csc(2x)\cos(2x) =$
$\frac{2\cos(2x)}{\sin(2x)} =$
$\frac{2(\cos^2{x}-\sin^2{x})}{2\sin{x}\cos{x}} =$
break it up into two fractions and finish it
June 3rd 2011, 06:06 PM
I suggest altering the RHS if you have trouble going from sin/cos to tan
$\cot(x) - \tan(x)$
$\frac {\cos(x)}{\sin(x)} - \frac {\sin(x)}{\cos(x)}$
Now take the common denominator...
June 4th 2011, 05:01 AM
This is ridiculously more advanced than any trig I've done before and I'm afraid the scope of my notes just doesn't cover it.
I'm okay converting the right hand side of the equation, it's just that when I start multiplying and dividing across the two sides I don't know how to do it properly with the plus and minus signs
floating around.
By breaking the left hand side of the equation up into two fractions, I get:
$\frac{2\cos^2 x}{\sin x \cos x} - \frac{2\sin^2 x}{\sin x \cos x}$
Again I'm stumped. I've never had to do anything even remotely like this before. I can only hazard a guess that it ends up as cosx/sinx - sinx/cosx at some point.
June 4th 2011, 05:18 AM
I disagree with your result. Keeping the full denominator from the previous step gives
$\frac{2\cos^2 x}{2\sin x \cos x} - \frac{2\sin^2 x}{2\sin x \cos x}$
Now just cancel like terms.
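Completing the cancellation (an editorial addition, not part of the original thread):

$\frac{2\cos^2 x}{2\sin x \cos x} - \frac{2\sin^2 x}{2\sin x \cos x} = \frac{\cos x}{\sin x} - \frac{\sin x}{\cos x} = \cot x - \tan x$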
June 4th 2011, 07:01 AM
Now that I have it all written down, it seems so simple, but it was utter anguish getting there. Of course, as I do more of these problems I'll probably get a better feel for what I should and
should not be doing, but is there any tried and tested process that I should take into account when doing this?
For instance, my first thought was to express the equation in terms of sin and cosine, so as to standardise like-terms, and that did seem to work, but I suppose my knowledge of trigonometry
ultimately prevented me from seeing this through to the end without some help. I also think I seemed a bit preoccupied with flinging terms across the formula rather than focussing on its actual
Or is it all just horses for courses?
June 4th 2011, 07:17 AM
Now that I have it all written down, it seems so simple, but it was utter anguish getting there. Of course, as I do more of these problems I'll probably get a better feel for what I should and
should not be doing, but is there any tried and tested process that I should take into account when doing this?
For instance, my first thought was to express the equation in terms of sin and cosine, so as to standardise like-terms, and that did seem to work, but I suppose my knowledge of trigonometry
ultimately prevented me from seeing this through to the end without some help. I also think I seemed a bit preoccupied with flinging terms across the formula rather than focussing on its actual
Or is it all just horses for courses?
I don't think there is a set approach. Converting to sin and cos is usually a good start (not always - think of your common identities first, and check whether any of those will help), otherwise
it really is just a case of combining and separating fractions or adapting your identities as appropriate. If everything looks like a dead end, then you really just have to consider how to get
your stage to resemble the one that you are trying to reach. With practice, it will become far easier.(Wink)
June 4th 2011, 10:08 PM
Now that I have it all written down, it seems so simple, but it was utter anguish getting there. Of course, as I do more of these problems I'll probably get a better feel for what I should and
should not be doing, but is there any tried and tested process that I should take into account when doing this?
For instance, my first thought was to express the equation in terms of sin and cosine, so as to standardise like-terms, and that did seem to work, but I suppose my knowledge of trigonometry
ultimately prevented me from seeing this through to the end without some help. I also think I seemed a bit preoccupied with flinging terms across the formula rather than focussing on its actual
Or is it all just horses for courses?
I would suggest writing up on one sheet of paper all the trigonometric identities that you require; if you use a textbook, just extract the identities from it. Then practice a lot, and keep
referring to your sheet of identities; the more you use them, the easier it will be for you to remember them. You will get to a certain stage where it almost becomes second nature. Then all
this identity stuff will be easy!
Use Coordinate Filters
May 20, 2011
Point filters provide a method of locating a point in a drawing relative to another point without specifying the entire coordinate. Using a point filter, you can enter partial coordinates, and then
the program prompts you for the remaining coordinate information. To use xyz point filters, respond to the prompt for a coordinate with a filter in the following form:
.coordinate
where coordinate is one or more of the letters x, y, and z. The program then prompts you for the filtered coordinate(s). For example, if you type .xy, the program prompts you to select a point whose
xy coordinate you want, and then prompts you for the z coordinate. The filters .x, .y, .z, .xy, .xz, and .yz are all valid filters.
Using point filters in two dimensions
You can use point filters when you work in two dimensions to locate points in relation to existing entities. For example, to draw a circle centered in a rectangle, start the Circle command, and then
respond to the prompts as follows:
2Point/3Point/Ttr(tan tan radius)/Arc/Multiple/<Center of circle>: .y
Select Y of: mid
of: (select the left side of the rectangle)
Still need XZ of: mid
of: (select top of the rectangle)
Diameter/<Radius>: (specify radius of circle)
You can use point filters to center the circle by separately selecting the midpoints of two sides of the rectangle (A and B) and then specifying its radius.
Using point filters in three dimensions
You can use point filters when you work in three dimensional space to locate points in two dimensions and then specify the z coordinate as the elevation above the xy plane. For example, to begin
drawing a line from a point with a z coordinate 3 units above the center of a circle, insert the circle, and then start the Line command and respond to the prompts as follows:
ENTER to use last point/Follow/<Start of line>: .xy
Select XY of: cen
of: (select a point on the circle)
Still need Z of: 3 (locates the starting point 3 units above the center of the circle)
Length of line: (specify the length of the line)
You can use point filters to draw a line by first selecting a point in the xy plane (A), specifying the z coordinate (B), and then specifying the length of the line (C).
Commands Reference
BLIPMODE: Controls the display of marker blips
ID: Displays the coordinate of a location
LIST: Displays database information for selected objects
GRID: Displays a dot grid in the current viewport
SNAP: Restricts cursor movement to specified intervals
TABLET: Calibrates, configures, and turns on and off an attached digitizing tablet
UCS: Manages user coordinate systems
UCSICON: Manages defined user coordinate systems
UNITS: Controls coordinate and angle display formats and precision
System Variables Reference
BLIPMODE: Controls whether marker blips are visible
COORDS: Controls when coordinates are updated on the status line
LASTPOINT: Stores the last point entered, expressed as a UCS coordinate for the current space; referenced by the at symbol (@) during keyboard entry
ELEVATION: Stores the current elevation relative to the current UCS for the current viewport in the current space
TABMODE: Controls the use of the tablet
Palatine Calculus Tutor
Find a Palatine Calculus Tutor
...The Parabola. The Ellipse. The Hyperbola.
17 Subjects: including calculus, reading, geometry, statistics
...I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services. I look forward to helping you succeed in
mathematics. I have a teaching certificate in mathematics issued by the South Carolina State Department of Education.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...I tutor because I love working with children. I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the
Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred Heart.
20 Subjects: including calculus, chemistry, physics, geometry
...Rather, I find it much more efficient to aid learning with conversation, marching side by side and never to lead too far ahead. I do my best to pause and retrace our route so that when you
inevitably ask the same question multiple times, the solution is never explained twice in exactly the same...
7 Subjects: including calculus, physics, geometry, algebra 1
...The mathematical subjects include various mathematical disciplines, in particular Precalculus. The latter prepares students for a first course in Calculus, as well as introducing topics that
will be needed in other Mathematics courses. For these reasons, I focus on teaching main concepts of Precalculus in simple and clear way.
8 Subjects: including calculus, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/palatine_il_calculus_tutors.php","timestamp":"2014-04-16T04:16:43Z","content_type":null,"content_length":"23866","record_id":"<urn:uuid:1bc54570-07a6-45fd-883d-6ceff8eb2ce3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
15-859S / 21-801A: Analysis of Boolean Functions 2012
Meeting: Mondays and Wednesdays, 3pm-4:20pm, GHC 4101
First meeting: Monday, September 10
Instructor: Ryan O'Donnell
Office Hours: GHC 7213, by appointment
• Spring 2007 version of this course, with lecture notes
• Course bulletin/discussion board. To access it:
□ Go to https://piazza.com/cmu
□ In one of the class slots, search for "15-859S / 21-801A: Analysis of Boolean Functions".
□ Put in the access code aobf12 and click "Join as Student".
□ Click "Add Class".
□ It will then request your "cmu.edu email address". However you can actually use any email address.
□ Follow the subsequent instructions.
Problem sets
Lecture videos
Course description
Boolean functions, f : {0,1}^n → {0,1}, are perhaps the most basic object of study in theoretical computer science. They also arise in several other areas of mathematics, including combinatorics
(graph theory, extremal combinatorics, additive combinatorics), metric and Banach spaces, statistical physics, and mathematical social choice. In this course we will study Boolean functions via their
Fourier transform and other analytic methods. Highlights will include applications in property testing, social choice, learning theory, circuit complexity, pseudorandomness, constraint satisfaction
problems, additive combinatorics, hypercontractivity, Gaussian geometry, random graph theory, and probabilistic invariance principles.
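For orientation, here is the basic identity the course builds on (standard background, added as an aside here rather than quoted from the course page): identifying the cube {0,1}^n with {-1,1}^n, every f : {-1,1}^n → ℝ has a unique multilinear "Fourier" expansion

\[ f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i, \qquad \hat{f}(S) = \mathbf{E}_x\big[f(x)\,\chi_S(x)\big], \]

and Parseval's identity gives \( \sum_{S} \hat{f}(S)^2 = \mathbf{E}_x[f(x)^2] \), which equals 1 whenever f is ±1-valued.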
The main prerequisite is working knowledge of discrete probability. Also helpful will be undergraduate-level knowledge of linear algebra, asymptotic analysis, calculus, and computer science.
There will be about 6 problem sets, plus one writing project. The writing project will be given an interim grade and a final grade. The three components (homework, project interim grade, project
final grade) will be weighted equally. | {"url":"http://www.cs.cmu.edu/~odonnell/aobf12/","timestamp":"2014-04-18T03:28:33Z","content_type":null,"content_length":"13422","record_id":"<urn:uuid:3a7ad8c3-3e05-48c0-a091-843819f3f2b4>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teterboro Statistics Tutor
Find a Teterboro Statistics Tutor
...I received an A (maximum score) in my UK A level Physics class (similar to US AP classes) and then studied mathematics and physics at both Imperial College, London and Princeton University.
I've tutored both high school students and college students in physics and greatly enjoyed explaining and ...
40 Subjects: including statistics, chemistry, reading, physics
...I prefer to meet in Manhattan, anywhere between City College and NYU. In addition to the subjects listed elsewhere, I am also able to tutor: Proofs or Mathematical Reasoning, Set Theory, Modern
Analysis, Modern Algebra, Mathematical Logic/Advanced Logic/Computability/Modal Logic, and Game Theory. Besides math I can also tutor programming in the Python programming language.
32 Subjects: including statistics, physics, calculus, geometry
...I have experience teaching the GRE. Many students dread the SAT Reading section because they find it confusing. What many students don't realize is that SAT Reading requires a new type of
reading: They cannot approach SAT passages like they would a Harry Potter novel.
14 Subjects: including statistics, writing, GRE, algebra 1
...Not only am I able to provide help with education, but I can also provide guidance in education and career decisions. I attended Bergen County Academies for high school and excelled in the
Medical Academy. I completed my undergraduate education at Lehigh University, with double majors in Behavioral Neuroscience and Economics in 2.5 years.
30 Subjects: including statistics, reading, biology, trigonometry
...I'm also open to creating tutoring through new and creative means, such as book clubs or music, or in ways that will tap into personal intelligences in order to motivate the brain. I'm trained
in brain-based learning and enjoy differentiating based on the learner's strengths so that the learner ...
22 Subjects: including statistics, English, reading, algebra 1 | {"url":"http://www.purplemath.com/teterboro_statistics_tutors.php","timestamp":"2014-04-21T04:55:40Z","content_type":null,"content_length":"24164","record_id":"<urn:uuid:1cf961d6-242a-416b-abed-747377473f61>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tile formats for located and mobile systems
, 1999
"... We introduce the notion of cartesian closed double category to provide mobile calculi for communicating systems with specific semantic models: One dimension is dedicated to compose systems and
the other to compose their computations and their observations. Also, inspired by the connection between s ..."
Cited by 20 (12 self)
We introduce the notion of cartesian closed double category to provide mobile calculi for communicating systems with specific semantic models: One dimension is dedicated to compose systems and the
other to compose their computations and their observations. Also, inspired by the connection between simply typed λ-calculus and cartesian closed categories, we define a new typed framework, called
double λ-notation, which is able to express the abstraction/application and pairing/projection operations in all dimensions. In this development, we take the categorical presentation as a guide in
the interpretation of the formalism. A case study of the π-calculus, where the double λ-notation straightforwardly handles name passing and creation, concludes the presentation.
- MATHEMATICAL STRUCTURES IN COMPUTER SCIENCE, 2002
"... Tile systems offer a general paradigm for modular descriptions of concurrent systems, based on a set of rewriting rules with side-effects. Monoidal double categories are a natural semantic
framework for tile systems, because the mathematical structures describing system states and synchronizing acti ..."
Cited by 13 (9 self)
Tile systems offer a general paradigm for modular descriptions of concurrent systems, based on a set of rewriting rules with side-effects. Monoidal double categories are a natural semantic framework
for tile systems, because the mathematical structures describing system states and synchronizing actions (called configurations and observations, respectively, in our terminology) are monoidal
categories having the same objects (the interfaces of the system). In particular, configurations and observations based on net-process-like and term structures are usually described in terms of
symmetric monoidal and cartesian categories, where the auxiliary structures for the rearrangement of interfaces correspond to suitable natural transformations. In this paper we discuss the lifting of
these auxiliary structures to double categories. We notice that the internal construction of double categories produces a pathological asymmetric notion of natural transformation, which is fully
exploited in one dimension only (for example, for configurations or for observations, but not for both). Following Ehresmann (1963), we overcome this biased definition, introducing the notion of
generalized natural transformation between four double functors (rather than two). As a consequence, the concepts of symmetric monoidal and cartesian (with consistently chosen products) double
categories arise in a natural way from the corresponding ordinary versions, giving a very good relationship between the auxiliary structures of configurations and observations. Moreover, the
Kelly–Mac Lane coherence axioms can be lifted to our setting without effort, thanks to the characterization of two suitable diagonal categories that are always present in a double category. Then,
symmetric monoidal and cartesian double categories are shown to offer an adequate semantic setting for process and term tile systems.
- THEORY AND PRACTICE OF LOGIC PROGRAMMING, 2001
"... We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the
coordination mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we ..."
Cited by 8 (6 self)
We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the coordination
mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we have chosen for presenting our results is tile logic, which has the advantage of allowing a
uniform treatment of goals and observations and of applying abstract categorical tools for proving the results. As main contributions, we mention the finitary presentation of abstract unification,
and a concurrent and coordinated abstract semantics consistent with the most common semantics of logic programming. Moreover, the compositionality of the tile semantics is guaranteed by standard
results, as it reduces to checking that the tile systems associated to logic programs enjoy the tile decomposition property. An extension of the approach for handling constraint systems is also discussed.
, 2000
"... The sos formats ensuring that bisimilarity is a congruence often fail in the presence of structural axioms on the algebra of states. Dynamic bisimulation, introduced to characterize the coarsest
congruence for ccs which is also a (weak) bisimulation, reconciles the bisimilarity as congruence pro ..."
Cited by 8 (4 self)
The sos formats ensuring that bisimilarity is a congruence often fail in the presence of structural axioms on the algebra of states. Dynamic bisimulation, introduced to characterize the coarsest
congruence for ccs which is also a (weak) bisimulation, reconciles the bisimilarity as congruence property with such axioms and with the specification of open ended systems, where states can be
reconfigured at run-time, at the cost of an infinitary operation at the meta-level. We show that the compositional framework offered by tile logic is suitable to deal with structural axioms and open
ended systems specifications, allowing for a finitary presentation of context closure. Keywords: Bisimulation, sos formats, dynamic bisimulation, tile logic. Introduction: The semantics of dynamic
systems can be conveniently expressed via labelled transition systems (lts) whose states are terms over a certain algebra and whose labels describe some abstract behavioral information. Provided such
- Graph Transformations and Process Algebras for Modeling Distributed and Mobile Systems, number 04241 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum (IBFI),
Schloss Dagstuhl , 2005
"... Abstract. This paper is an informal summary of different encoding techniques from process calculi and distributed formalisms to graphic frameworks. The survey includes the use of solo diagrams,
term graphs, synchronized hyperedge replacement systems, bigraphs, tile models and interactive systems, al ..."
Cited by 1 (0 self)
Abstract. This paper is an informal summary of different encoding techniques from process calculi and distributed formalisms to graphic frameworks. The survey includes the use of solo diagrams, term
graphs, synchronized hyperedge replacement systems, bigraphs, tile models and interactive systems, all presented at the Dagstuhl Seminar 04241. The common theme of all techniques recalled here is
having a graphic presentation that, at the same time, gives both an intuitive visual rendering (of processes, states, etc.) and a rigorous mathematical framework.
, 2001
"... This report summarizes the activities in the fourth year of the ESPRIT Working Group APPLIGRAPH, covering the period from April 1, 2000, to March 31, 2001. The principal objective of this
Working Group is to promote applied graph transformation as a rule-based framework for the specification and devel ..."
Cited by 1 (0 self)
This report summarizes the activities in the fourth year of the ESPRIT Working Group APPLIGRAPH, covering the period from April 1, 2000, to March 31, 2001. The principal objective of this Working
Group is to promote applied graph transformation as a rule-based framework for the specification and development of systems, languages, and tools and to improve the awareness of its industrial
, 1999
"... 2-categories and double categories are respectively the natural semantic ground for rewriting logic (rl) and tile logic (tl). Since 2-categories can be regarded as a special case of double
categories, then rl can be easily embedded into tl, where also rewriting synchronization is considered. Since ..."
2-categories and double categories are respectively the natural semantic ground for rewriting logic (rl) and tile logic (tl). Since 2-categories can be regarded as a special case of double
categories, then rl can be easily embedded into tl, where also rewriting synchronization is considered. Since rl is the semantic basis of several existing languages, it is useful to map tl back into
rl to have an executable framework for tile specifications. We extend the results of a previous work of two of the authors, focusing on tile systems where the algebraic structures for configurations
and observations rely on some common auxiliary structure (e.g., for pairing, projecting, etc.). The new model theory required to relate the categorical models of the two logics is an extended version
of the theory of 2-categories, and is defined using partial membership equational logic. More concretely, this semantic mapping yields a rewriting logic implementation of tile logic, where a
meta-layer is requir...
"... Abstract. In the area of component-based software architectures, the term connector has been coined to denote an entity (e.g. the communication network, middleware or infrastructure) that
regulate the interaction of independent components. Hence, a rigorous mathematical foundation for connectors is ..."
Abstract. In the area of component-based software architectures, the term connector has been coined to denote an entity (e.g. the communication network, middleware or infrastructure) that regulate
the interaction of independent components. Hence, a rigorous mathematical foundation for connectors is crucial for the study of coordinated systems. In recent years, many different mathematical
frameworks have been proposed to specify, design, analyse, compare, prototype and implement connectors rigorously. In this paper, we overview the main features of three notable frameworks and discuss
their similarities, differences, mutual embedding and possible enhancements. First, we show that Sobocinski’s nets with boundaries are as expressive as Sifakis et al.’s BI(P), the BIP component
framework without priorities. Second, we provide a basic algebra of connectors for BI(P) by exploiting Montanari et al.’s tile model and a recent correspondence result with nets with boundaries.
Finally, we exploit the tile model as a unifying framework to compare BI(P) with other models of connectors and to propose suitable enhancements of BI(P). | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1098962","timestamp":"2014-04-19T00:17:28Z","content_type":null,"content_length":"33894","record_id":"<urn:uuid:f77b92f8-5487-4b53-89e9-bf2ef57b021c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 11
To find a third proportional to two given straight lines.
Let AB and AC be the two given straight lines, and let them be placed so as to contain any angle.
It is required to find a third proportional to AB and AC.
Produce them to the points D and E, and make BD equal to AC. Join BC, and draw DE through D parallel to it.
Then since BC is parallel to a side DE of the triangle ADE, therefore, proportionally, AB is to BD as AC is to CE.
But BD equals AC, therefore AB is to AC as AC is to CE.
Therefore a third proportional CE has been found to two given straight lines AB and AC.
If a and b are two magnitudes, then their third proportional is a magnitude c such that a : b = b : c. The third proportional is needed whenever a duplicate ratio is needed when the ratio itself is
known. The duplicate ratio for a : b is a : c.
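A quick numerical illustration (added here for concreteness; not part of the source text): taking a = 4 and b = 6, the third proportional is

c = b²/a = 36/4 = 9,

since 4 : 6 = 6 : 9 (both equal 2 : 3), and the duplicate ratio of 4 : 6 is then 4 : 9 = 16 : 36.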
Use of this proposition
This construction is used in propositions VI.19, VI.22, and a few propositions in Book X. | {"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookVI/propVI11.html","timestamp":"2014-04-16T19:15:40Z","content_type":null,"content_length":"3140","record_id":"<urn:uuid:a943e1f1-5412-480a-8875-301f3ee2ce5b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Chi-square test for Categorical Data Analysis
From Richard Williams <Richard.A.Williams.5@ND.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Chi-square test for Categorical Data Analysis
Date Tue, 18 Sep 2007 18:54:21 -0500
At 04:00 PM 9/18/2007, Maarten buis wrote:
> Second question - does the chi-square test convey any additional
> information other than the fact that the distributions are different?
> That is, since only the number of observations in each group is being
> tested, can I, for example, conclude that Group A is making
> significantly more income than Group B?
> No, the Chi-square test just says that these distributions are
> different. Again -intreg- will give you more meaningful output.
Actually, since it is a 2 by 2 table, you can do a z-test with a signed effect. See case II of
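For readers of the archive: the link after "case II" did not survive, but the test described is presumably the pooled two-sample z test for proportions. A minimal, self-contained sketch under that assumption (the counts are hypothetical):

#include <stdio.h>
#include <math.h>

/* Pooled two-sample z test for proportions:
   z = (p1 - p2) / sqrt( p*(1-p) * (1/n1 + 1/n2) ),  p = (x1+x2)/(n1+n2).
   The sign of z says which group has the larger share above the median. */
static double two_prop_z(double x1, double n1, double x2, double n2)
{
    double p1 = x1 / n1, p2 = x2 / n2;
    double p  = (x1 + x2) / (n1 + n2);          /* pooled proportion */
    return (p1 - p2) / sqrt(p * (1.0 - p) * (1.0 / n1 + 1.0 / n2));
}

int main(void)
{
    /* hypothetical data: 30 of 50 in group A above the median, 20 of 50 in B */
    double z = two_prop_z(30, 50, 20, 50);
    double pval = erfc(fabs(z) / sqrt(2.0));    /* two-sided p-value, C99 erfc */
    printf("z = %.3f, two-sided p = %.4f\n", z, pval);  /* z = 2.000, p = 0.0455 */
    return 0;
}

(Within Stata itself, the equivalent canned command is -prtest-.)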
But note: this would just say that one group had significantly more people above the median than the other group did. That isn't necessarily the same as one group making significantly more money. For
example, one group might have relatively few people above the median, but those people could be really really really rich. Or, there might be no significant differences in % above the median, but the
above average people in one population could be a lot richer than the above average people in the other population.
In the Stata reference manual for intreg, it gives an example along the lines of what Martin later suggests, i.e. log the endpoints of income.
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME: (574)289-5227
EMAIL: Richard.A.Williams.5@ND.Edu
WWW: http://www.nd.edu/~rwilliam
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-09/msg00593.html","timestamp":"2014-04-16T10:22:41Z","content_type":null,"content_length":"7385","record_id":"<urn:uuid:6a8345cb-c84d-4e06-8161-70f6e4b0e3db>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random Number Distributions
20 Random Number Distributions
This chapter describes functions for generating random variates and computing their probability distributions. Samples from the distributions described in this chapter can be obtained using any of
the random number generators in the library as an underlying source of randomness.
In the simplest cases a non-uniform distribution can be obtained analytically from the uniform distribution of a random number generator by applying an appropriate transformation. This method uses
one call to the random number generator. More complicated distributions are created by the acceptance-rejection method, which compares the desired distribution against a distribution which is similar
and known analytically. This usually requires several samples from the generator.
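As a concrete instance of the one-call transformation (a standard textbook fact, stated here as an aside rather than quoted from the manual): if U is uniform on (0,1), then

X = -mu * ln(U)

follows the exponential distribution with mean mu, since P(X > x) = P(U < exp(-x/mu)) = exp(-x/mu). This is exactly the kind of analytic transformation the first method refers to.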
The library also provides cumulative distribution functions and inverse cumulative distribution functions, sometimes referred to as quantile functions. The cumulative distribution functions and their
inverses are computed separately for the upper and lower tails of the distribution, allowing full accuracy to be retained for small results.
The functions for random variates and probability density functions described in this section are declared in gsl_randist.h. The corresponding cumulative distribution functions are declared in gsl_cdf.h.
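A minimal usage sketch (mine, not from the manual) tying the three families together — variate generation, CDF, and inverse CDF — for the Gaussian distribution:

#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_cdf.h>

int main(void)
{
    gsl_rng_env_setup();                        /* honor GSL_RNG_TYPE / GSL_RNG_SEED */
    gsl_rng *r = gsl_rng_alloc(gsl_rng_default);

    double sigma = 2.0;
    double x = gsl_ran_gaussian(r, sigma);      /* one Gaussian variate, mean 0 */
    double p = gsl_cdf_gaussian_P(x, sigma);    /* lower-tail probability P(X <= x) */
    double q = gsl_cdf_gaussian_Q(x, sigma);    /* upper-tail probability P(X > x)  */

    /* the quantile function (inverse CDF) recovers x from the lower-tail p */
    printf("x = %g  P = %g  Q = %g  Pinv(P) = %g\n",
           x, p, q, gsl_cdf_gaussian_Pinv(p, sigma));

    gsl_rng_free(r);
    return 0;
}

Compile with something like gcc demo.c -lgsl -lgslcblas -lm. Note that P + Q = 1 by construction; computing the two tails separately is what preserves full accuracy for small probabilities.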
Note that the discrete random variate functions always return a value of type unsigned int, and on most platforms this has a maximum value of 2^32-1 ≈ 4.29e9. They should only be called with a safe
range of parameters (where there is a negligible probability of a variate exceeding this limit) to prevent incorrect results due to overflow. | {"url":"http://www.gnu.org/software/gsl/manual/html_node/Random-Number-Distributions.html","timestamp":"2014-04-17T01:24:12Z","content_type":null,"content_length":"15031","record_id":"<urn:uuid:ed6b5df6-a72f-42b9-bfcd-0d28fb2724d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Structural Dynamics : Theory and Applications
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/structural-dynamics-theory-applications/bk/9780673980526","timestamp":"2014-04-16T11:47:55Z","content_type":null,"content_length":"72192","record_id":"<urn:uuid:3889aaae-0aac-45b1-a594-96f65ef8f9e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |