| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
Temperature and Entropy
I have been trying to understand temperature and entropy. I never had a good grasp of these concepts before. Finding a good explanation of them has been elusive.
I have gone back to basics to try and understand temperature and this is what I have got. Objects have this internal energy called heat. Heat is where mechanical energy goes during friction. When two
objects are touching (or even when they are just facing each other) sometimes this internal energy is transferred from one to the other. If no internal energy is transferred, the two objects are said to
be in thermodynamic equilibrium.
It turns out that if objects A and B are in thermodynamic equilibrium and objects B and C are in thermodynamic equilibrium, then objects A and C are in thermodynamic equilibrium. This means we can
design a calibrated object, called a thermometer, such that if my thermometer is in thermodynamic equilibrium with my object, and your identical thermometer is in thermodynamic equilibrium with your
object, and the two thermometers read the same value, then we know that our objects are in thermodynamic equilibrium with each other. Furthermore, we can mark our thermometer in such a way that if
your thermometer reads larger than my thermometer, then we will know that if we allow our objects to touch the internal energy will flow from your object to my object.
So far this does not tell us exactly how to mark our thermometer. Any monotone rearrangement of our marking would preserve this property. We choose a scale with the additional property that if your
object is at temperature T[1] and my object is at temperature T[2], with yours the hotter, then the maximum efficiency of a heat engine operating between these two objects is 1 - T[2]∕T[1].
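For example (numbers of my own choosing, not anything special): a heat engine run between boiling water and ice water can convert at most about 27% of the heat it draws into work.

print(1 - 273.15 / 373.15)   # Carnot limit between 373.15 K and 273.15 K, about 0.268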
It should be emphasized that the mercury-in-glass scale of temperature is not simply a linear transformation of the Kelvin temperature. […] It is apparent then that if the mercury-in-glass
temperature is to yield the correct thermodynamic temperature […] it must be constructed with slightly nonuniform subdivisions with the 50℃ mark being not at the half way position between the ice
point and the boiling point.
This is a good start for defining temperature, but it does leave a lot of questions. What is thermodynamic equilibrium? Why does heat only flow from hotter objects to colder objects? Why is a heat
engine’s efficiency limited?
It seems I have a much better understanding of entropy, or at least Shannon entropy. If you have a set of distinguishable elements {s[1], …, s[n]} and you have a random distribution where p[i] is the
probability of selecting s[i], then the entropy of this distribution is the average number of bits needed to store the information that a particular selection was made. This value is
-∑[i = 1 … n] p[i] lg(p[i]) and is measured in bits. This value is always less than or equal to lg(n), and is equal to lg(n) for a uniform probability distribution.
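As a quick sanity check of this formula (a little sketch of my own; the function name is made up):

import math

def shannon_entropy_bits(probs):
    # H = -sum p_i * lg(p_i), taking 0 * lg(0) = 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy_bits([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 outcomes: lg(4) = 2.0
print(shannon_entropy_bits([0.5, 0.25, 0.25]))         # 1.5 bits, less than lg(3) ~ 1.58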
Thermodynamic entropy is not a measurement of disorder. Thank god, because that always confused me. However this author’s definition that “entropy change measures energy’s dispersion at a stated
temperature” still confuses me.
As far as I can tell entropy is a measurement of your uncertainty of which state a system is in. So if I place five coins in a row under a sheet of paper, and tell you that the coins are all heads
up, then the entropy of this system is 0 because you know exactly which way they are all facing. If I tell you that exactly two are heads up, then the system has lg(10) bits of entropy, because you
don’t know exactly the state of the system. If I tell you nothing, the coins have lg(32) = 5 bits of entropy. If I tell you from left to right the coins are tails, tails, heads, heads, tails,
then the coins have 0 bits of entropy, because you know exactly the state of the coins, even if that state is “disorderly”.
In the last three examples, the coins may have had the same configuration; it is your knowledge of the state that determines the entropy. This value corresponds to the Shannon entropy where all
allowed states of the coins are considered equally likely to occur.
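Counting the allowed configurations directly (again, just a quick check of my own) reproduces those numbers:

import math
from math import comb

print(math.log2(comb(5, 2)))  # exactly two heads up: lg(10), about 3.32 bits
print(math.log2(2 ** 5))      # told nothing: lg(32) = 5 bits
print(math.log2(1))           # exact sequence known: 0 bits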
If I give you a box with volume V of gas with n molecules at a pressure of P, there are some large number of states these molecules could be in. It is my understanding that the lg of this number is
the entropy of the gas.
If I give you the same box and tell you the state of every molecule, then the same gas has 0 entropy. This makes sense to me, because if you know the state of every molecule, you would not even need
Maxwell’s demon to separate the gas into hot and cold molecules. You could do it yourself.
But wait, entropy is measured in energy per temperature, and I have been measuring it in bits. Apparently there are k[B]∕lg(e) joules per kelvin per bit. I would very much like to know where this
comes from. Actually, since I still have not got a good grasp of temperature, I should think of one kelvin as equal to k[B]∕lg(e) joules per bit. I am not exactly sure what a joule per bit means.
(The term lg(e) is an artifact that comes about because physicists use the natural logarithm (ln) and computer scientists, like myself, use logarithm base two (lg). If Boltzmann had been a computer
scientist, he would have defined k[B] in such a way that the term lg(e) would disappear.)
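Numerically (my own quick check), the conversion factor is tiny:

import math

k_B = 1.380649e-23              # Boltzmann constant in joules per kelvin
print(k_B * math.log(2))        # k_B / lg(e) = k_B * ln(2), about 9.57e-24 J/K per bit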
Anyhow, I have this feeling that temperature is not really real. What I suspect you actually have is some distribution of particles at various energy levels. This distribution leads to entropy in
accordance with Shannon's formula. A common family of distributions is Boltzmann's distributions, and this family is indexed by temperature. Then somehow this temperature relates to thermal equilibrium
and heat engine efficiencies. But I imagine it is possible to have a distribution of energies that is not one of Boltzmann’s distributions. This is all just speculation.
The fact that entropy of a closed system never decreases just says that you cannot magically get information about a system from nowhere. But since state evolution in quantum mechanics is reversible,
it seems to me that means that entropy of a closed system can never increase either! I kinda have this feeling that entropy of a system increasing is analogous to energy being lost by friction; if
you look close enough you find the quantity is actually preserved.
Sorry if you are still reading this. My purpose in writing this was mostly so that I could look back on it later. I still have a lot to learn about this subject.
|
{"url":"http://r6.ca/blog/20050505T170500Z.html","timestamp":"2014-04-17T15:48:46Z","content_type":null,"content_length":"10320","record_id":"<urn:uuid:c06666a9-c33f-498b-becd-21eb87492c27>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tangent Vector to a Space Circle
Find a vector tangent to the space circle $x^2 + y^2 + z^2 = 1$, $x + y + z = 0$
At the point $(1/\sqrt{14}, 2/\sqrt{14}, -3/\sqrt{14})$
I should know how to do this, but it's been 4 years since I had multivariable calc and I don't remember a darn thing from that class. :-(
Since the circle belongs to the plane x + y + z = 0, the tangent vector belongs to it as well. Also, the vector is perpendicular to the radius-vector $(1/\sqrt{14}, 2/\sqrt{14}, -3/\sqrt{14})$.
So you have two equations: x + y + z = 0 and x + 2y - 3z = 0. I get (-5, 4, 1) up to proportionality.
$r_0=(1/\sqrt{14}, 2/\sqrt{14}, -3/\sqrt{14})$
The normal vector to the tangent plane to the sphere is $r_0$.
The normal vector of the given plane is p=(1,1,1).
The vector perpendicular to $r_0$ and $p$ is (cross product)
$n = p \times r_0.$
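A quick numerical check (a sketch only; numpy is used just for the arithmetic) confirms that both answers agree up to a scale factor:

import numpy as np

p = np.array([1.0, 1.0, 1.0])                    # normal of the plane x + y + z = 0
r0 = np.array([1.0, 2.0, -3.0]) / np.sqrt(14.0)  # the given point, a unit radius vector

n = np.cross(p, r0)
print(n * np.sqrt(14.0))             # [-5.  4.  1.], proportional to (-5, 4, 1)
print(np.dot(n, p), np.dot(n, r0))   # both 0: the tangent lies in the plane and is perpendicular to r0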
|
{"url":"http://mathhelpforum.com/calculus/168363-tangent-vector-space-circle.html","timestamp":"2014-04-19T05:47:20Z","content_type":null,"content_length":"36881","record_id":"<urn:uuid:c3bd25c8-7eb3-4c36-882f-6912a2ab5a5f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modular Help
My problem is:
Prove that there are no integers x and y such that x^2 − 5y^2 = 2.
Hint: consider this equation modulo 5.
I don't get how to start this using modulo 5.
Do I make the whole equation divisible by 5?
Consider all possibilities. x will be one of the following :
a) 5k
b) 5k-1
c) 5k-2
d) 5k+1
e) 5k+2
for some suitable k.
What remainders do these leave when squared and divided by 5 ?
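A brute-force check of the hint (just a sketch) shows the possible remainders:

# Squares mod 5 can only be 0, 1 or 4, so x^2 - 5y^2 = x^2 can never be 2 (mod 5)
print(sorted({(x * x) % 5 for x in range(5)}))                                # [0, 1, 4]
print(any((x * x - 5 * y * y) % 5 == 2 for x in range(5) for y in range(5)))  # False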
|
{"url":"http://mathhelpforum.com/discrete-math/159215-modular-help.html","timestamp":"2014-04-17T05:30:14Z","content_type":null,"content_length":"31035","record_id":"<urn:uuid:90455808-6e22-42e1-bd0b-9779d10223dd>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Meaning of Life - nursing an understanding for those whose job it is to ask the question of life, the universe and everything
Jay's Site.com > Stuff > The Meaning of Life
The answer to life, the universe and everything is 42.
I did not make this up:*
The second greatest computer of all time and space was built to tell the answer to the question of life, the universe and everything. After seven and a half million years the computer divulged the
answer: 42.
"Forty-two! Is that all you've got to show for seven and a half million years' work?"
"I checked it very thoroughly," said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question
The computer informs the researchers that it will build them a second and greater computer, incorporating living beings as part of its computational matrix, to tell them what the question is. The
result is the sentence "WHAT DO YOU GET IF YOU MULTIPLY SIX BY NINE".
"Six by nine. Forty-two."
"That's it. That's all there is."
Since 6 x 9 = 54, this being the question would imply that the universe is bizarre and irrational. However, it was later pointed out that 6 x 9 = 42 if the calculations are performed in base 13, not
base 10.
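This is easy to check (a one-line aside of mine):

print(int("42", 13))   # 54, i.e. 6 * 9 -- "42" read as a base-13 numeral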
"42" is often used in the same vein as a metasyntactic variable; 42 is often used in testing programs as a common initializer for integer variables.
There is a joke that perhaps there may have been some order of operations issues:
Six equals 1 + 5.
Nine equals 8+1.
So six * nine equals 1+5 * 8+1.
5*8 = 40.
1+40+1 = 42, the meaning of life.
In Lewis Carroll's book The Hunting of the Snark, (before Douglas Adams' tome was written) the baker left 42 pieces of luggage on the pier.
42 is also a sphenic number, a Catalan number and is bracketed by twin primes. 42 is also the number of nursing career journals that were surveyed in an extensive science survey.
42 is the number you get when you add up all the numbers on two six-sided dice. This is showing that life, the universe, and everything is nothing but a big game of craps.
According to Google's calculator, the meaning of life is, indeed, 42. Weird.
* This was all actually made up by Douglas Adams in his comic science fiction series The Hitchhiker's Guide to the Galaxy.
The information on this page was originally found at http://en2.wikipedia.org/wiki/The_Answer_to_Life,_the_Universe,_and_Everything.
|
{"url":"http://www.jayssite.com/stuff/life/life.html","timestamp":"2014-04-18T08:27:44Z","content_type":null,"content_length":"9583","record_id":"<urn:uuid:9545a6c0-b75f-4e03-afdc-cd1a6bbf6986>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Uncertainty for p = 0 for binomial distribution?
I have some data (4 runs each of about 10 trials) which is binomial with n_hits/N_trials
n/N = 0/11, 0/9, 0/10, 0/10
So, I estimate the probability p = n/N = 0
But how can I calculate an uncertainty on this value?
I thought to try
total N_tot=40 and n_tot=1, so p_tot=1/40 = 0.025
(i.e., assume one of the trials happened to be successful instead of not)
Then s = sqrt[ 0.025 * (1-0.025)/40] = 0.0247 (approx. 1/40)
is this a more correct way of doing this?
Hope someone can help! Thanks.
I don't understand the method you posted (that doesn't mean it's wrong).
Have you been taught how to make a confidence interval?
I actually mentioned this in another thread as an aside earlier today. Weird coincidence. Your intuition is actually pretty close to what is recommended.
A standard thing to do is add in a certain number of successes and failures - adding two of each is recommended and has a nice interpretation - technically for the interpretation to work you add
1.96 successes and failures, but people usually just round up to 2 so that things make sense. This is sometimes called the Agresti-Coull method (side note: I took categorical data analysis from
Agresti). Adding any positive number of successes and failures also has a Bayesian interpretation as putting a Beta prior on the sample proportion, if you know what that means.
I'll stick with what's fashionable and suggest adding 2 successes and 2 failures. For example, if you observe 0 successes and 40 trials, you would tweak this number to 2 successes in 44 trials.
If p_tilde = 2 / 44 then you would estimate
$\sqrt{\tilde p (1 - \tilde p) / 44} = .0314$
for your standard error. If you use this standard error and use $\tilde p$ to estimate p then you can make confidence intervals, which is where the motivation and justification for 2 and 2 comes from.
Last edited by theodds; May 23rd 2011 at 07:40 PM.
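A quick numerical check of the figure quoted above (a sketch only, not part of the thread):

import math

n_tilde = 40 + 4                    # 0 successes in 40 trials, plus 2 successes and 2 failures
p_tilde = (0 + 2) / n_tilde
se = math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
print(round(se, 4))                 # 0.0314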
The issue is that the estimate of p is 0, which makes the usual estimate of the standard error 0. If you naively try to calculate a confidence interval with the usual methods you get {0} which is
useless. So the idea is to cook the books so that things work, except hopefully in a way that makes sense statistically.
There seems to be something philosophically devious going on here. I mean, the experiment produced 0 successes in every trial, and we want to try and draw our confidence about the conclusion. I
would say the conclusion is absolutely certain. We have no evidence to anything! I understand the aim to be able to draw a confidence interval, but it seems sneaky. As you say, you're cooking the
books to make it work numerically. My concern is the empirical implications from that, which one might argue is the point of such statistics.
Saying that we are "adding two successes and two failures" is just a way for people to explain to themselves what they are doing. The correction as derived is perfectly reasonable. The usual
confidence interval is based on inverting the lousy Z test; the correction comes from inverting the good one. More importantly, it has been shown to perform well.
If this sort of thing bothers you, you would probably find an advanced course in statistical inference quite troubling. Stein's paradox comes to mind.
Last edited by theodds; May 23rd 2011 at 09:48 PM.
I would have to see a concrete example to see how that works. If an experiment is performed to test some target, and it appears 0 times in a sample of supposedly suitable size (though possibly
not), then I don't see what meaningful inference can be derived from that. Instead of letting the data speak for itself, so to speak, you fudge numbers into it to derive a standard error. But
doesn't that just suggest "if we were to have a little bit larger sample, we would have seen a hit so that ..." Pragmatically, I would see it as the experiment is a bust, and we need to try
again. Nothing wrong with new data. And on that note, there's nothing disconcerting about Stein's paradox. It certainly isn't pragmatically troubling like an experiment that results in a 0 estimate.
I certainly hope that we can make a meaningful inference. With all those failures, it seems to me like we have quite strong evidence that p is close to 0. Since the variance is a function of p,
we would also hope to conclude that the variance of our estimate of p is small.
Instead of letting the data speak for itself, so to speak, you fudge numbers into it to derive a standard error. But doesn't that just suggest "if we were to have a little bit larger sample, we
would have seen a hit so that ..." Pragmatically, I would see it as the experiment is a bust, and we need to try again. Nothing wrong with new data.
Simulations have shown constructing the confidence interval like this works fine. And it should be obvious that it is going to perform better than {0}. The fact that you don't like what you
perceive to be the motivation behind the correction doesn't change the fact that it performs better. Monte Carlo it yourself if you don't believe me (or look up the original paper which IIRC
includes simulation results).
There's nothing wrong, by the way, with always adding two successes and two failures regardless of how many successes you have.
If I ever told a medical experimenter that his data was completely worthless due to something like this, and that he needed to collect more, I think I would probably be fired from the project.
And the original paper's title is ... ?
Also, what is your definition of "perform better"? The data only supports there being a point estimate with 0 error, which is the point of altering the data. But if that is the measure of
performance, you could also just say there is a small confidence interval of some made up sort. It would "perform better," too.
You misunderstand what my concern is, though. I understand the statistical benefit of cooking the books, as you say. My point is wholly pragmatic. If the experiment is to express something about
a phenomenon, then obtaining a zero probability says nothing about it. Sure, we can infer it has a very low probability, but when it is completely zero, no evidence was obtained; it is like trying
to say "lack of evidence supports the conclusion ..." It could be that the phenomenon is unlikely. It could mean the sample size is too small. It could mean there is a problem with the experiment.
Last edited by bryangoodrich; May 24th 2011 at 11:53 AM.
I'm not sure why you appear to be so skeptical, but here it is.
Agresti and Coull (1998), "Approximate is better than 'exact' for interval estimation of binomial proportions", The American Statistician.
Thanks for the paper. It was very informative. As I understood it, the adjustment is to help Wald intervals perform better (in terms of containing a 95% confidence interval), especially when p is
near the extremes. If p is near the extreme, it is hard to look at it as the center of the interval, which is what Wald attempts to do. The adjusted Wald helps that issue.
None of this changes what I said, since I already agreed I'm sure it has statistical benefit. Per the article, I would just use the score interval. My point was pragmatic, however. I'm skeptical
because that makes for good science, and statistics applies to such scientific inferences. If our data shows that we lack all evidence of some phenomena, you can most certainly use an adjusted
Wald or other method of constructing a confidence interval about that, but it doesn't change the fact our point estimate is at the corner. I'm skeptical about the empirical implications, not the
numeric methods involved. The Wald adjustment is not really cooking the books, as you said, since it isn't about adjusting the sample to give us a better estimate, which is what it appeared like
to me. It is a correction to the Wald test itself for its lack of interval performance (given certain sample sizes or extreme estimates). The originator of this thread was really trying to force
his data to say something it doesn't.
First of all, many thanks for the help and the interest from both of you: I've found this discussion very useful. I'm not trying to make my data say anything that isn't there. I am simply asking
the legitimate question: "Given that I estimate p to be 0, given my limited number of trials, what are the chances that p is not exactly 0". Clearly a reasonable estimate of this uncertainty can
be made for any situation where I estimate p > 0 (i.e., n > 0), so why shouldn't I be able to do it for n = 0?
Incidentally, I came up with another (more complicated) solution, and I will post this in a second :-)
Sorry... I have to attach a pdf because the Latex editor was giving all sorts of "unknown latex errors" for some reason. Here is another method that I had a go at. Probably questionable, but
would appreciate the feedback!
For reference, from the Agresti and Coull (1998) paper above, I would also compare with the Clopper-Pearson interval. I didn't bother to run the calculation manually, but I found an R function
binCI(n, y) that is supposed to do it (n = # observations and y = # of successes). In fact, it has facilities for a number of methods. I'll print them all. The output is below:
> binCI(40, 0, method = "CP") ## Clopper-Pearson
95 percent CP confidence interval
[ 0, 0.0881 ]
Point estimate 0
> binCI(40, 0, method = "AC") ## Agresti-Coull
95 percent AC confidence interval
[ -0.01677, 0.1044 ]
Point estimate 0
> binCI(40, 0, method = "Blaker") ## Blaker
95 percent Blaker confidence interval
[ 0, 0.0795 ]
Point estimate 0
> binCI(40, 0, method = "Score") ## Wilson Score
95 percent Score confidence interval
[ 0, 0.08762 ]
Point estimate 0
> binCI(40, 0, method = "SOC") ## Second-order corrected
95 percent SOC confidence interval
[ -0.005935, 0.05665 ]
Point estimate 0
> binCI(40, 0, method = "Wald") ## Wald
95 percent Wald confidence interval
[ 0, 0 ]
Point estimate 0
As the paper details, the CP interval does not perform very well, but it is an alternative to the basic. Note, I don't know half the methods used above or their appropriateness. Nor do I know if
the function even works correctly. If you're interested, find their formulas and check them manually. I only leave this as reference.
Last edited by bryangoodrich; May 25th 2011 at 11:42 AM. Reason: Expanded calculations
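For comparison, the Clopper-Pearson, Wilson score, and Agresti-Coull limits quoted above can be reproduced from their standard formulas (a sketch only; this is not the binCI source, and z is the usual 95% normal quantile):

import math

n, x, z = 40, 0, 1.959964            # 40 trials, 0 successes, 95% two-sided quantile

# Clopper-Pearson upper limit; for x = 0 it reduces to 1 - (alpha/2)^(1/n)
cp_upper = 1.0 - 0.025 ** (1.0 / n)

# Wilson score upper limit; for x = 0 it reduces to z^2 / (n + z^2)
wilson_upper = z * z / (n + z * z)

# Agresti-Coull: centre on p~ = (x + z^2/2) / (n + z^2) with the matching standard error
p_t = (x + z * z / 2) / (n + z * z)
se = math.sqrt(p_t * (1 - p_t) / (n + z * z))
ac = (p_t - z * se, p_t + z * se)

print(round(cp_upper, 4))                 # 0.0881
print(round(wilson_upper, 5))             # 0.08762
print(round(ac[0], 5), round(ac[1], 4))   # -0.01677 0.1044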
Very useful! Many thanks. Haven't had time to read the paper in detail yet. Interesting to see CIs that include negative values, and some don't. CP seems reasonable.
Did you have time to take a look at my (possibly dodgy) derivation? Any comments?
|
{"url":"http://mathhelpforum.com/advanced-statistics/181404-uncertainty-p-0-binomial-distribution.html","timestamp":"2014-04-21T09:37:54Z","content_type":null,"content_length":"82164","record_id":"<urn:uuid:f3230020-abb9-43e6-bfe3-b689a599e367>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Perfect" Electron Roundness Bruises Supersymmetry
from the look-at-how-round-it-is-man dept.
astroengine writes
"New measurements of the electron have confirmed, to the smallest precision attainable, that it has a perfect roundness. This may sounds nice for the little electron, but to one of the big physics
theories beyond the standard model, it's very bad news. 'We know the Standard Model does not encompass everything,' said physicist David DeMille, of Yale University and the ACME collaboration, in a
press release. 'Like our LHC colleagues, we're trying to see something in the lab that's different from what the Standard Model predicts.' Should supersymmetrical particles exist, they should have a
measurable effect on the electron's dipole moment. But as ACME's precise measurements show, the electron still has zero dipole moment (as predicted by the standard model) and is likely very close to
being perfectly round. Unfortunately for the theory of supersymmetry, this is yet another blow."
• Re:Invisible unicorns in a garage (Score:1, Insightful)
by Rosyna (80334) on Friday December 20, 2013 @01:56AM (#45743523) Homepage
Because string theory isn't science!
• Re: Has Anything Ever Validated Supersymmetry? (Score:1, Insightful)
by Anonymous Coward on Friday December 20, 2013 @03:24AM (#45743761)
As long as it still puts out
• Re:Wait, it has a shape? (Score:3, Insightful)
by Twinbee (767046) on Friday December 20, 2013 @04:48AM (#45743967) Homepage
I assume they mean the force created by the electron is perfectly round, rather than the particle itself. Perhaps someone can confirm.
• Re:Time for some really new physics (Score:5, Insightful)
by Spad (470073) <slashdotNO@SPAMspad.co.uk> on Friday December 20, 2013 @06:11AM (#45744175) Homepage
Not to nitpick, but isn't the collapse of the universe *always* closer than ever before?
• Re:Invisible unicorns in a garage (Score:3, Insightful)
by msobkow (48369) on Friday December 20, 2013 @07:31AM (#45744397) Homepage Journal
Yeah, but neither the summary nor the article explains why supersymmetry is a question or an issue in the first place, just that the evidence doesn't support the theory. What does the theory it disproves actually say?
• Re:Invisible unicorns in a garage (Score:2, Insightful)
by Talderas (1212466) on Friday December 20, 2013 @08:35AM (#45744617)
So basically.... evidence supports the standard model and someone's pet theory that they are hoping will make them the next Einstein has evidence that is contrary to it?
• Re:Invisible unicorns in a garage (Score:4, Insightful)
by gtall (79522) on Friday December 20, 2013 @09:40AM (#45744917)
An aspect of science is applied math as the AC below mentioned. More particularly, we should be somewhat cautious in treating math as physics. Physics is describable in math, but it isn't math.
And the mathematics of a physical situation functions more like an analogy. It says "that works like this"...and usually it does that to some epsilon because we can only measure up to a certain
energy. One can think of a physical theory described in mathematics as an idealization. The math is very precise, the real world is not necessarily.
• Re:Invisible unicorns in a garage (Score:5, Insightful)
by CreatureComfort (741652) on Friday December 20, 2013 @12:33PM (#45746423)
The trouble is that Mathematics can describe ANY universe, not just the one we happen to be able to perceive.
Math is great at describing perfect theories that fail to pan out in real life, but that are perfectly self consistent in the theory and equations. Just look at all of the great, and completely
wrong, models offered in super-symmetry, string, and all the other Grand Unified Theories that mathematically are perfectly sound, but are disproved by actual experiment.
This is why Physics, e.g. "science" > Math.
• Re:Invisible unicorns in a garage (Score:2, Insightful)
by Anonymous Coward on Friday December 20, 2013 @12:37PM (#45746455)
You seem to be implying that somehow mathematics are not sufficient for describing the "real" world, and that is simply not the case.
Mathematics are the language of the universe, as far as we can tell.
Speaking as a person who does mathematical modeling for a living, I can tell you that a mathematical model is definitely only an approximation for the real world. There is no such thing as a
perfect model due to the limitations of our knowledge and our inherent inability to model every single detail in the world. There are huge stochastic effects that we can only approximate
statistically (a deterministic model would require a near infinite number of parameters, and even it would be an overfit because we cannot measure or determine the all underlying phenomena).
• Re:Invisible unicorns in a garage (Score:4, Insightful)
by Tyler Durden (136036) on Friday December 20, 2013 @01:14PM (#45746835)
Mathematics cannot be the language of the universe as the vast majority of the universe does not communicate any ideas. The parts of it that do is an insignificant, tiny portion that includes us
and whatever other self-aware/reasoning beings that may be out there.
What mathematics is, rather, is a set of insanely great tools that we use to create models helping us to describe the universe. One thing we've learned from math is that self-referential systems tend to
have issues that can crop up in spots. And it's hard to get more self-referential than a subset of the universe trying to understand the whole thing.
Saying that mathematics is sufficient to describe the real world, no matter how successful it has been at it so far, is awfully presumptuous.
|
{"url":"http://science.slashdot.org/story/13/12/19/2355222/perfect-electron-roundness-bruises-supersymmetry/insightful-comments","timestamp":"2014-04-17T07:03:54Z","content_type":null,"content_length":"100867","record_id":"<urn:uuid:3ebf5b2d-69b4-490d-abb0-2ac064198e9e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Model theory of algebraically closed valued fields
Seminar Room 1, Newton Institute
This tutorial of 5 lectures will be an exposition of a proof of elimination of imaginaries for the theory of algebraically closed valued fields (ACVF), when certain extra sorts from M^eq (coset
spaces) are added. The proof will be based on that of [1], though further recent ideas of Hrushovski may be incorporated. The tutorial will begin with a general account of the basic model theory of
ACVF and the notion of elimination of imaginaries, and will end with further developments from [2]: in particular, the notion of stable domination.
1. D. Haskell, E. Hrushovski, H.D. Macpherson, `Definable sets in algebraically closed valued fields. Part I: elimination of imaginaries', preprint.
2. D. Haskell E. Hrushovski, H.D. Macpherson, `Definable sets in algebraically closed valued fields. Part II: stable domination and independence.'
|
{"url":"http://www.newton.ac.uk/programmes/MAA/seminars/2005033011301.html","timestamp":"2014-04-18T23:23:59Z","content_type":null,"content_length":"4921","record_id":"<urn:uuid:1f197ad2-2342-480e-9a8b-15ab6f2e4582>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is a fast efficient way to find an item?
Hi so I need some FAST way to search for a word in a dictionary.
The dictionary has like 500k words in it.
I am thinking about using a hashmap where each bin has at most 1 word.
Ideas on how to do this or is there something better?
c++ hash
1 What language? This problem has been solved in most. – jball Dec 24 '09 at 18:01
C++ it's in..... – SuperString Dec 24 '09 at 19:15
1 Answer
A Trie is an efficient way to store dictionaries and has very fast lookup characteristics, O(m) where m is the length of the word.
A hashmap will be less efficient memory-wise, but lookup time is a constant quantity for a perfect hash: O(1) lookup, though you still spend O(m) calculating the hash. An imperfect hash will have a slower worst case than a Trie.
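For illustration only (sketched in Python rather than C++ to keep it short; the names are made up), a minimal trie with O(m) lookup might look like this:

class TrieNode:
    def __init__(self):
        self.children = {}     # one child node per character
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):  # O(m) in the word length, independent of dictionary size
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

t = Trie()
t.insert("hello")
print(t.contains("hello"), t.contains("hell"))   # True False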
|
{"url":"http://stackoverflow.com/questions/1959237/what-is-a-fast-efficient-way-to-find-an-item?answertab=oldest","timestamp":"2014-04-24T01:36:20Z","content_type":null,"content_length":"65519","record_id":"<urn:uuid:c73bcf99-7295-40cc-9251-61f33d50cb4b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Print version ISSN 0187-6236
Atmósfera vol.25 no.3 México jul. 2012
Frictional convergence, atmospheric convection, and causality
D. J. Raymond and M. J. Herman
Physics Department and Geophysical Research Center, New Mexico Tech, Socorro, NM, USA. Corresponding author: D. J. Raymond; e-mail: raymond@kestrel.nmt.edu
Received August 12, 2011; accepted June 24, 2012
Frictional convergence in an atmospheric boundary layer topped by a stable free troposphere is considered. In agreement with extensive previous work, we find that atmospheric stability reduces the
vertical scale of the free tropospheric secondary circulation associated with frictional convergence. Associated with this reduction in vertical scale is a proportional reduction in the time scale
for the frictional spindown of an atmospheric circulation. This reduction in time scale alters the balance between terms in the component of the momentum equation along the isobars. In particular,
for disturbance scales less than a few hundred kilometers in typical tropical conditions, the momentum tendency term comes into approximate balance with the friction term, with the Coriolis term
becoming less important. This reduces the magnitude of the cross-isobaric flow and the strength of the ascent in regions where this flow converges. If some other mechanism such as moist convection
produces enough boundary layer convergence to nullify the spindown of the disturbance in question, then the magnitude of the convergence equals that predicted by the steady-state frictional
convergence formulation. However, in this case the arrow of causality is reversed from that assumed in a naive treatment of frictional convergence.
Frictional convergence is not "causing" the convection; the convection is actually causing the convergence, and the mechanism forcing the convection must be sought elsewhere. This distinction is
crucial in understanding what drives deep convection. The present analysis is linearized and the picture may change when nonlinear effects become important. It is also limited to situations in which
the boundary layer winds are relatively weak. Tropical cyclones, with their strong winds and nonlinear behavior, thus deserve an independent analysis.
Keywords: Atmospheric stability, frictional spindown, momentum equation, tropical conditions.
1. Introduction
The frictional spindown of a rotating fluid over a stationary surface is a classical problem in geophysical fluid dynamics, first considered by Ekman (1905, 1906; see also Duck and Foster, 2001). In
this scenario, cross-isobaric flow is induced in the boundary layer by friction as part of a deep secondary circulation which spins down the entire fluid. Charney and Eliassen (1949) proposed that
frictionally induced cross-isobaric flow produces upward motion at the top of the boundary layer when this flow is convergent. (They further noted that Albert Einstein earlier employed this idea to
explain the clustering of tea leaves in the center of a stirred cup of tea in his book Mein Weltbild). This idea later became an integral part of their theory of tropical cyclogenesis (Charney and
Eliassen, 1964) and in Charney's ideas about the forcing of convection in the intertropical convergence zone (ITCZ; Charney 1971) and tropical cloud clusters (Charney, 1973).
That frictionally induced convergence of boundary layer air and the resulting ascent at the top of the boundary layer force convection in the tropics has become a staple of tropical meteorology.
Ooyama's (1969) model of the development of a tropical cyclone incorporated this mechanism, as have models of large-scale tropical waves (e. g., Holton et al., 1971; Holton, 1974; Wang, 1988; Wang and
Rui, 1990).
Holton (1965) investigated the role of stratification of a rotating fluid in the spindown problem. In an unstratified fluid, the secondary circulation associated with frictionally induced
cross-isobaric flow in the boundary layer extends through the full depth of the fluid. If the fluid depth is much greater than the boundary layer depth, then the time scale for spindown is equal to
the rotational time scale times the ratio of fluid depth to boundary layer depth (Greenspan and Howard, 1963). When this ratio is large, the time scale for spindown is much greater than the
rotational time scale, which allows the approximation of time-independence to be made in the calculation of cross-isobaric flow in the boundary layer. However, if the fluid is stratified, then the
secondary circulation is confined to a shallower layer, and the spindown time decreases in comparison to the unstratified case.
Significant controversy surrounded Holton's (1965) paper shortly after it appeared (Pedlosky, 1967; Holton and Stone, 1968; Sakurai, 1969; Duck and Foster, 2001). However, the controversy had to do
primarily with Holton's treatment of the side wall in laboratory experiments. As we are interested in the atmosphere where side walls do not exist, we ignore this controversy; the central result that
spindown is confined to a shallow layer and consequently occurs more rapidly in the presence of stratification has not been challenged.
In the case of strong stratification, the spindown time may be short enough that the assumption of time-independence in the boundary layer calculation of cross-isobaric flow is no longer justified.
Holton et al. (1971) and Holton (1974) include time dependence explicitly in their analyses of the boundary layer. However, many authors do not. The purpose of this paper is to highlight the
potential importance of this issue, especially as it applies to the interaction between frictionally induced flow and deep atmospheric convection. In particular, we address the question as to whether
boundary layer convergence "causes" convection or whether the convection "causes" the convergence. This question is important for understanding the dynamics of tropical, convectively coupled
disturbances such as ITCZs, tropical waves, tropical cyclones, and even the Madden-Julian oscillation.
In section 2 the response of a uniformly stratified atmosphere to surface friction is reviewed in a simple, linearized context. The analysis is extended to a neutral boundary layer topped by a stably
stratified atmosphere in section 3. Section 4 discusses the interaction between frictional convergence and convection in these cases, and conclusions are presented in section 5.
2. Stratified spindown
For purposes of exposition, in this section we postulate a very simple atmosphere, one with constant static stability independent of height. In essence, the neutrally stratified boundary layer
becomes infinitely thin. A linearized Boussinesq approximation is employed, again for simplicity. Though frictional convergence is an issue in a variety of fully three-dimensional geometries, e. g.,
in cloud clusters and tropical cyclones, we consider only the slab-symmetric case here, with the most direct application being to undisturbed ITCZs.
In the classical view, frictional convergence is confined to the neutrally stratified boundary layer. However, in trade wind regions the momentum deficit due to surface friction is arguably
distributed through a layer extending into the stable troposphere via momentum transport by shallow convective clouds. Where deep convection exists, this layer is likely to be even thicker. Thus, we
explore the effects of transporting surface stress into the statically stable region above the boundary layer proper.
The hydrostatic, rotating Boussinesq equations linearized about a state of rest for a stably stratified atmosphere in slab symmetry (∂ / ∂x = 0) are
where the velocity vector is (u, v, w), the buoyancy perturbation is b, the kinematic pressure perturbation is π, and ƒ is the Coriolis parameter. The Brunt-Väisälä frequency, assumed constant, is
given by N and surface friction is represented by the horizontal force vector per unit mass F = (F[x], F[y]), which is assumed to be linear in velocity and decrease exponentially with height in order
to simplify the analysis:
where λ = μC[D]U[0] is the inverse of the spindown time scale and v[S] = (u[S], v[S]) is the surface wind. The quantity C[D] ≈ 10^-3 is the drag coefficient, U[0] is a characteristic velocity, and μ^
-1 is roughly the depth over which surface friction acts. If U[0] = 5 m s^-1 and μ = (500 m)^-1, then λ = 10^-5 s^-1, which is about one third of the typical tropical Coriolis parameter.
The quantity M is the mass source associated with moist convection. In this formulation, the vertical velocity w represents the vertical motion in the environment outside of convection. Convection is
represented as a phenomenon which extracts mass from one level and deposits it at a higher level. By mass conservation, in a bounded domain the vertical integral of M must be zero.
Let us now consider the spindown of an initial set of alternating jets in the x direction, with a kinematic pressure perturbation initially independent of height of the form π[G] exp(ily), where l is
the wavenumber of the jet structure in the y direction. A schematic of the assumed flow is shown in Figure 1. Substitution of the assumed y dependence exp(ily) into (1)-(5) and a bit of algebra yield
an equation for the time tendency of pressure:
We solve this equation subject to the initial condition that π(z) = π[G] at all levels and assume that ∂π / ∂t, u[S], and v[S] decay exponentially with time dependence exp(-σt). (Note that π itself
has a more complex time dependence, which will be explored shortly). Substituting this form into (7) results in
is the inverse square of the vertical penetration depth of frictional effects into the free atmosphere.
Since by hypothesis, u[S], and v[S] decay exponentially to zero with time, (2) (3) can be evaluated at the surface as follows:
Solving these for u[S] and v[S] in terms of the surface pressure perturbation π[S] results in
We now set M = 0, considering first the convection-free solution to (13). This can be written as the sum of inhomogeneous and homogeneous parts:
The coefficient A of the homogeneous part is determined by assuming zero buoyancy perturbation at the surface, which implies that ∂π / ∂z = 0 there, yielding A = - λμmG(σ)π[S] / (μ^2 - m^2), whence the
full solution becomes
Evaluating this at z = 0 and assuming that π[S] =π exp(-σt), we obtain the dispersion relation yielding σ:
This equation is implicit, but if λ, σ <<ƒ, then G ≈ 1, and we have σ ≈ λm / (m + μ). Note that in this approximation, σ ≤ λ, so if λ <<ƒ, as is marginally satisfied for a typical tropical boundary
layer with not-too-strong winds, then σ <<ƒ as well. Thus, G ≈ 1 is a reasonable approximation in normal tropical conditions at latitudes exceeding 10° - 15°. Closer to the equator, a more careful
evaluation of (17) must be made. (As the unapproximated dispersion relation (17) is cubic in σ, there must be two additional solutions. By plotting the left side of (17) versus the right side as a
function of σ for a variety of numerical values of ƒ and λ, it is evident that there is only one real solution of (17). Thus, the remaining two solutions must be complex. We do not consider those
solutions here).
Finally, we integrate (16) over the interval [0, t], assume that π = π[G] at t = 0, and invoke (17) to obtain the kinematic pressure perturbation as a function of time and height:
where we have used the geostrophic balance condition ƒ u[G] = -ilπ[G] to eliminate π[G]. The x velocity is too complex to write explicitly, but can be obtained from (3), (18), and (21):
We now compare the cross-isobaric wind predicted by our model with the simple, steady-state wind v[STEADY] obtained by setting ∂u /∂t = 0 in (2). For simplicity we continue to assume that λ, σ <<ƒ,
which results in significant simplification of the results but does not change their essence. Combining (2) and (6), we find at z, t = 0 that v[STEADY] = λu[S] / ƒ ≈ λu[G] / ƒ, where we have used
u[S](t) = u[G] exp(-σt). From (17) and (21) and setting G(σ) = 1, the actual cross-isobaric wind at z, t = 0 is
Two limits are evident in this result. If μ >>m, then v ≈ v[STEADY], and the actual cross-isobaric wind at the surface is well approximated by the steady-state result. This occurs when the layer in
which surface stresses are deposited is much shallower than the resulting penetration depth of momentum fluxes m^-1 ≈ ƒ / (lN). In the other limit in which μ <<m, v is less than v[STEADY] by the
factor μ/ (m + μ).
In the first case, the initial jets spin down in the layer below z ≈ m^-1 with significant cross-isobaric flow. This flow results in vertical velocities which lift low-level air underneath cyclonic
vorticity regions aloft, and cause air to descend underneath anti-cyclonic regions, in agreement with Charney's ideas cited in section 1. In the second case, insignificant cross-isobaric flow occurs,
and the jets just spin down in place below z ≈ μ^-1.
As an example, let us assume that l = 10^-5 m^-1, which corresponds to jet widths of about 300 km. Taking typical tropical values of ƒ = 3 x 10^-5 s^-1 and N = 10^-2 s^-1, then m^-1 = 300 m. For μ^-1 =
1500 m, as may occur in a region of trade wind clouds, then μ / (m + μ) = 1/6, and the cross-isobaric flow is highly suppressed. In order for μ / (m + μ) > 2/3 in this case, l^-1 would have to exceed 1000 km and
jet widths would have to exceed ≈ 3000 km. Thus, for typical trade wind jets with horizontal dimensions less than of order 1000 km, the magnitude of cross-isobaric flow and the associated convergence
and divergence are much smaller than computed from the steady-state approximation.
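These numbers are easy to reproduce (an illustrative sketch; the variable names are ours):

# Penetration depth m^-1 = f/(lN) and suppression factor mu/(m + mu) for the quoted values
l, f, N = 1e-5, 3e-5, 1e-2       # wavenumber (m^-1), Coriolis parameter (s^-1), Brunt-Vaisala frequency (s^-1)
m = l * N / f                    # free-tropospheric vertical wavenumber
mu = 1.0 / 1500.0                # surface stress deposited over about 1500 m
print(1.0 / m)                   # 300.0 m penetration depth
print(mu / (m + mu))             # about 0.167, i.e. cross-isobaric flow reduced to roughly 1/6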
Vertical velocities are correspondingly small. Eliminating π[G] in favor of u[G] in (20) and setting G(σ) = 1 in (17), we find for t = 0 that
Figure 2 shows the vertical velocity scaled by λu[G] / N and evaluated at z = μ^-1,
as a function of m/μ. Since m ∝ l, larger values of m correspond to smaller jet widths.
For weak stratification m/μ << 1. In this case the term in absolute value brackets becomes 1 -exp(-1) and w* is a linear function of m/μ. The dashed line in Figure 2 shows the scaled vertical
velocity in this weak stratification limit extrapolated to strong stratifications. Comparison of the solid to the dashed line thus shows how the suppressive effect of stratification on the vertical
velocity increases with m/μ.
3. Stratified troposphere topping boundary layer
The analysis in section 2 assumes an infinitely thin boundary layer, which is arguably adequate. In the event that it is not, then a more complex model is called for, one which incorporates a
frictional neutral boundary layer topped by a stably stratified layer free of friction. Such a model is developed in this section.
Figure 3 illustrates the ambient structure of the two-layer atmosphere. The neutrally stable boundary layer has an ambient thickness of h and satisfies modified shallow water equations in linearized
form. Assuming horizontal structure exp (ily) for all variables as in section 2 and time evolution for the velocity components given by exp(-σt), we have
where (u[B], v[B]) is the boundary layer wind, the actual boundary layer depth is h(1 + η), π[B] is the kinematic pressure in the boundary layer at z = 0, M[B] is the mass source in the boundary
layer, and all other symbols are as in section 2. The time derivative of η is retained in (26) as we anticipate a temporal structure for η more complex than simple exponential decay, in analogy with
that exhibited by π in section 2. Combining (26)-(28) to eliminate u[B] and v[B], we find
For the normal shallow water equations π[B] = ghη, but the presence of the overlying stratified atmosphere alters the equation for π[B] in a manner that we now describe. We assume continuity in
potential temperature across the interface between the boundary layer and the free troposphere. This is so because the interface itself is an isentropic surface, so differential advection between the
boundary layer and the free troposphere cannot result in potential temperature discontinuities. Since the interface occurs at the elevation z[I] = hη, we can express this condition as
where we have assumed that the buoyancy of boundary layer air b[B] is uniformly zero in space and time. Expanding b(hη) in a Taylor series about z = 0 and invoking the hydrostatic relation in the
free troposphere, we find
where (∂b / ∂z) hη is ignored because it is nonlinear in the small quantities b and η. The pressure derivative term refers to conditions in the free troposphere.
Ignoring friction and mass sources in the free troposphere, (13) simplifies to
where m^2 is defined in (9). Based on the experience of section 2, we assume that the pressure in the jet spindown problem takes the form
where the constant π[G] has the same meaning as in that section and π[X] is to be determined by the interface condition (31). The assumed form of π satisfies (32) trivially. From this equation we
infer that π[B](t) = π(0, t) = π[G] - [1 - exp(-σt)] π[λ] and that (∂π / ∂z)[z]=[0] = m [1- exp(-σt)] π[X]. Eliminating π[X] between these conditions results in
which tells us that π[B] = π[G] when η = 0. Substitution into (29) provides us with the governing equation for η:
Simplifying immediately to the case in which λ, σ <<ƒ and M[B] = 0, m = lN / ƒ and (35) reduces to
Using a solution method similar to that used for π in section 2, we find
subject to the dispersion relation
The cross-isobaric flow in the boundary layer is obtained from (27):
since the cross-isobaric flow in the steady case is v[B]_[STEADY] = (λ /ƒ)u[B]. Thus, the actual cross-isobaric flow is much less than the steady-state cross-isobaric flow if l^-1 << hN / ƒ. For h =
600 m, N = 10^-2 s^-1, and ƒ = 3 x 10^-5 s^-1, hN / ƒ = 200 km. For jet widths much less than about three times this value, the decaying jet lacks significant cross-isobaric flow.
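The 200 km scale follows directly from the quoted values (again an illustrative check; variable names are ours):

h, N, f = 600.0, 1e-2, 3e-5      # boundary layer depth (m), stability (s^-1), Coriolis parameter (s^-1)
print(h * N / f)                 # 200000.0 m = 200 km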
The vertical velocity at the top of the boundary layer is
As in section 2, we define a dimensionless version of this vertical velocity as
which is a function only of the dimensionless wavenumber lhN / ƒ. Without the suppressing effects of stratification, we would simply have w* = lhN / ƒ. Figure 4 shows w* as a function of lhN / ƒ in
each of these cases. Suppression by stratification is less extreme than in the uniformly stratified case shown in Figure 2, but it is nevertheless significant.
4. Frictional convergence and convection
Let us now ask what profile of convective mass source (M[S]) is required to maintain a steady state in the uniform stratification scenario of section 2. Setting ∂π /∂t = 0 in (13) results in
where we have used the approximation that π[S] ≈ π[G] = iƒu[G] / l. In deriving this expression we have taken σ = 0 in agreement with the steady state assumption, which yields G = ƒ^2 / (ƒ^2 + λ^2) and
m^2 = l^2 N^2/ƒ^2. This form of M[S] exhibits a non-zero vertical integral, and is thus not possible by itself; it must be part of a net mass-source profile M = M[S] + M[F] which does satisfy this condition.
In regions of cyclonic vorticity and negative π[S], the z integral of M[S] must be negative, which means that the z integral of M[F] must be positive. A general result arising from these
considerations is that no configuration of jets and convection can attain a true steady state in this model.
Another consequence of (42) is that the steady-state M[S] increases as λ increases until it equals ƒ and decreases thereafter. This is consistent with the numerical results of Montgomery et al.
(2010) for intensifying tropical cyclones. Alternatively, as one approaches the equator with fixed λ and u[G], the mass flux M[S] increases, reaches a maximum where ƒ = λ and then decreases.
Similar arguments can be made for the boundary layer topped by a stratified free troposphere, as discussed in section 3, except that the boundary layer mass source required for steady state is
Charney and Eliassen (1964) argued in essence that the depth of the free tropospheric secondary circulation associated with cross-isobaric frictional flow is comparable to the depth of the
troposphere. As noted in section 1, the steady-state approximation would be valid if this were true, and the vertical motion at the top of the boundary layer could be said to be caused by the action
of surface friction. However, as we have shown, typical values of atmospheric stratification result in a far shallower secondary circulation in response to surface friction. Under these
circumstances, the time tendency of the wind along the isobars cannot be neglected for a flow of any reasonable horizontal scale (say < 1000 km) in the tropics. For horizontal scales of order 100 km
or less, the vertical scale of the secondary circulation is typically much smaller than the thickness of the layer over which surface friction is deposited, especially when there is widespread
shallow, moist convection such as occurs in the trade wind regions. In this situation there is essentially no cross-isobaric transport in a boundary layer spinning down under the influence of
friction. If some mechanism external to boundary layer dynamics (such as mass sources imposed by deep convection) acts to maintain the boundary layer flow in a nearly steady state, then the time
tendency of the boundary layer wind along the isobars becomes small or zero and cross-isobaric flow and resulting convergence exists more or less in the amount predicted by steady-state boundary
layer theory. However, the immediate cause of this convergence and the resulting vertical motion at the top of the boundary layer is actually the convection itself, and not boundary layer dynamics;
surface friction could be completely absent and the convergence would still exist if the convection were still present.
As discussed in section 1, many investigators postulate that frictional convergence plays an important role in forcing convection in the tropics. However, the suppression of frictional convergence
for smaller scales (less than several hundred kilometers in the present context) suggests that this assumption needs to be re-examined. Holton's investigations (Holton et al., 1971; Holton, 1974)
demonstrate that friction can have interesting effects on tropical disturbances without invoking the assumption of frictional convergence, i.e., without neglecting the time derivative in the
boundary layer momentum equation.
An alternate way of looking at the interaction of convection and frictional convergence might be to imagine that the net effect of the convection is to reduce the effective static stability of the
atmosphere, thus resulting in a deeper secondary circulation. However, if the reduction of the static stability is the only effect of the convection, the system would still spin down, albeit more
slowly. This is easily ascertained by noting the effects of reducing N^2 in the analysis of section 2. For the boundary layer flow to spin up, boundary convergence in excess of that produced by
frictional convergence alone is needed. Thus, modeling the effects of moist convection as a simple reduction in static stability does not adequately represent the rich behavior of this phenomenon.
For example, if convection acted this way, tropical cyclones would never intensify.
5. Conclusions
We have demonstrated that the stratification of the atmosphere strongly suppresses frictional convergence for typical tropical atmospheric disturbances with scales less than several hundred
kilometers. By "typical", we mean disturbances with perturbation velocities of order 5 m s^-1. This suppression occurs because the stratification causes the free tropospheric secondary circulation to
be much shallower than would be expected in the unstratified case. As a consequence, only a thin layer aloft spins down in response to the secondary circulation as the surface friction dissipates
energy. The spindown is therefore more rapid, and the time tendency of the along-isobar component of the horizontal wind in the momentum equation cannot be neglected. This result is demonstrated here
for both an atmosphere of uniform static stability and a neutral boundary layer topped by a stably stratified free troposphere.
Charney and Eliassen's (1964) scale analysis of frictional convergence assumes a deep secondary circulation and fails for this reason in the case of weak boundary layer flows. In the limit of a
small-scale disturbance, the primary balance in the component of the momentum equation along the isobars is between the time tendency term and friction, not between friction and the Coriolis term.
The result is much weaker cross-isobaric flow than would normally be expected when the disturbance is less than a critical size, which is of order several hundred kilometers in our analysis.
In the case of a deep secondary circulation such as envisioned by Charney and Eliassen (1964), the spindown of a disturbance is much slower and the time tendency of the along-isobar component of the
wind can be neglected in the momentum equation. The cross-isobaric flow and resulting frictional convergence is close to that obtained from time-independent arguments in this case. If in addition,
the frictional convergence produces sufficient ascent to reduce convective inhibition, then the resulting convection can be said to be "caused" by the frictional convergence. When the effects of
stratification are included, the situation becomes much more complex. One could imagine that convection located in the regions of boundary layer convergence could boost the strength of this
convergence, perhaps supplying enough energy to maintain the boundary layer in a near-steady state. However, even though this convergence is equivalent in magnitude to that produced spontaneously by
a boundary layer associated with a deep secondary circulation, one can no longer say that the boundary layer convergence "causes" the convection. In fact, the opposite is true: the convection
actually causes the convergence in this case. The convergence cannot have caused the convection, because the convergence would not have been there prior to the convection. The origin of the
convection must be sought in some other mechanism.
The attribution of causality in the shallow and deep secondary circulation cases is subtle, but important from the point of view of the parameterization of cumulus convection. Parameterizations in
which convection is controlled by frictional convergence computed by a steady-state boundary layer model get the arrow of causality wrong, at least for disturbances small enough to exhibit shallow
secondary circulations. This is likely to produce incorrect results in many cases. The solution to this problem is two-fold: (1) include the time derivatives in the boundary layer momentum equations,
and (2) drive convection in the model via the mechanisms that are actually responsible for producing it. With these changes, the issue of causality resolves itself.
One tempting shortcut to fixing the problem of shallow secondary circulations is to attempt to represent the effects of deep, moist convection as a simple reduction in the effective static stability,
with a corresponding increase in the depth of the secondary circulation. This quick fix has the problem that the resulting model of convection is not nearly rich enough to represent what convection
actually does in nature, and is therefore seriously deficient.
Two issues could potentially lead to some alteration of these conclusions. The first is a mathematical point; the analysis performed here is linear. The manifestation of nonlinear effects could be
important. Of particular interest is the possibility that advection of vorticity by the convergent flow in the boundary layer could lead to a phenomenon similar to frontal collapse, with the
convection resulting from the intense convergence along the front. The geostrophic momentum approximation and semi-geostrophic formalism (Hoskins, 1975) could perhaps be used to tackle this problem.
Raymond et al. (2006) found that an Ekman balance model (pressure gradient + Coriolis force + friction = 0) accurately predicted the meridional wind, and hence meridional convergence, on a day with
quiescent conditions in the east Pacific ITCZ. On this day a strong convergent region associated with shallow ascent to 800 hPa occurred near 4° - 6° N, suggesting a front-like structure. Though
there was no deep convection occurring in this convergence zone, there certainly must have been ample shallow convection. On a day exhibiting strong, deep convection, large pressure perturbations
associated with the convection itself existed in the boundary layer and there was no hint of Ekman balance. This suggests that instances of frictionally driven frontal collapse are delicate and occur
only in convectively quiescent conditions. As soon as significant deep convection occurs, more robust mechanisms are likely to dominate.
The second issue has to do with the range of environmental conditions assumed in the present model. Tropical storms have far stronger winds and environmental vorticity. This can result in much deeper
and stronger secondary circulations, even on relatively small scales, thus paving the way for significant frictional convergence in the boundary layer. The tropical storm case deserves an independent
analysis of the effects of stratification on frictional convergence.
This work was supported by National Science Foundation grant ATM-1021049 and Office of Naval Research grant N000140810241.
Charney J. G. and A. Eliassen, 1949. A numerical method for predicting the perturbations of the middle-latitude westerlies. Tellus 1, 38-54.
Charney J. G. and A. Eliassen, 1964. On the growth of the hurricane depression. J. Atmos. Sci. 21, 68-75.
Charney J., 1971. Tropical cyclogenesis and the formation of the intertropical convergence zone. In: Lectures in Applied Mathematics. American Mathematical Society, 355-368.
Charney J., 1973. Movable CISK. J. Atmos. Sci. 30, 50-52.
Duck P. W. and M. R. Foster, 2001. Spin-up of homogeneous and stratified fluids. Annu. Rev. Fluid Mech. 33, 231-263.
Ekman V. W., 1905. On the influence of the earth's rotation on ocean-currents. Arkiv f. Matem., Astr. o. Fysik (Stockholm), Band 2, No. 11, 1-53.
Ekman V. W., 1906. Beiträge zur Theorie der Meeresströmungen. Ann. Hydrograph. Marit. Meteorol. 2, 1-50.
Greenspan H. P. and L. N. Howard, 1963. On a time-dependent motion of a rotating fluid. J. Fluid Mech. 17, 385-404.
Holton J. R., J. M. Wallace and J. A. Young, 1971. On boundary layer dynamics and the ITCZ. J. Atmos. Sci. 28, 275-280.
Holton J. R., 1974. On the influence of boundary layer friction on mixed Rossby-gravity waves. Tellus 27, 107-115.
Holton J. R., 1965. The influence of viscous boundary layers on transient motions in a stratified rotating fluid. Part I. J. Atmos. Sci. 22, 402-411.
Holton J. R. and P. H. Stone, 1968. A note on the spin-up of a stratified fluid. J. Fluid Mech. 33, 127-129.
Hoskins B. J., 1975. The geostrophic momentum approximation and the semi-geostrophic equations. J. Atmos. Sci. 32, 233-243.
Montgomery M. T., R. K. Smith and S. V. Nguyen, 2010. Sensitivity of tropical-cyclone models to the surface drag coefficient. Q. J. Roy. Meteor. Soc. 136, 1945-1953.
Ooyama K., 1969. Numerical simulation of the life cycle of tropical cyclones. J. Atmos. Sci. 26, 3-40.
Pedlosky J., 1967. The spin up of a stratified fluid. J. Fluid Mech. 28, 463-479.
Raymond D. J., C. S. Bretherton and J. Molinari, 2006. Dynamics of the intertropical convergence zone of the east Pacific. J. Atmos. Sci. 63, 582-597.
Sakurai T., 1969. Spin down problem of rotating stratified fluid in thermally insulated circular cylinders. J. Fluid Mech. 37, 689-699.
Wang B., 1988. Dynamics of tropical low-frequency waves: An analysis of the moist Kelvin wave. J. Atmos. Sci. 45, 2051-2065.
Wang B. and H. Rui, 1990. Dynamics of the coupled moist Kelvin-Rossby wave on an equatorial beta plane. J. Atmos. Sci. 47, 397-413.
Success rates for EPSRC Fellowships
I was recently at a presentation where the success rates for EPSRC fellowships were given by theme. The message of the talk was that Engineering fellowships were under-subscribed and so we should all
be preparing our applications. But just because a theme is under-subscribed doesn't mean that you've got a better chance of getting funded.
A more sensible way of thinking about the problem is by using a funnel plot. Funnel plots allow you to compare the outcomes of “random” events in organizations of different sizes. For example, you
might want to compare the performance of healthcare institutions or the feedback rate of student surveys within different departments. I say “random” because, in this case, we’re assuming that the
chance of getting funded can be modelled as a binomial distribution with a probability of success p.
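As a rough sketch of that calculation (written here in Python with invented numbers purely for illustration; the real data and the R analysis code for this post are linked below), the funnel is just a pair of binomial control limits drawn around the pooled success rate:

import numpy as np

def funnel_limits(p, n, z=1.96):
    # Approximate binomial control limits for a funnel plot.
    # p : pooled success probability across all themes
    # n : array of application counts (one value per theme size of interest)
    # z : normal quantile (1.96 for ~95% limits, 3.09 for ~99.8% limits)
    n = np.asarray(n, dtype=float)
    se = np.sqrt(p * (1.0 - p) / n)   # normal approximation to the binomial
    return p - z * se, p + z * se     # lower and upper limits

# Invented figures, for illustration only
pooled_rate = 0.25
applications = np.arange(5, 200)
lower, upper = funnel_limits(pooled_rate, applications)
# A theme with n applications and observed success rate r lies "above the funnel"
# (better than the pooled rate by more than chance alone would suggest) if r > upper at that n.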
The results are shown below and reinforce the EPSRC’s message, though for different reasons. The plot shows that, yes, there are fewer engineering fellowship applications than in some other themes.
But more importantly, the probability of being funded is better than the average EPSRC success rate for fellowships. On the other hand, if your engineering research might be classified under the
Manufacturing the Future theme, then your odds look even better there.
For more details on the method, please see this paper by Cambridge’s David Spiegelhalter. Alternatively the full data and my analysis code in R are available on Github.
Bronxville Algebra 2 Tutor
Find a Bronxville Algebra 2 Tutor
...I help to cultivate middle school and high school students' math aptitude. I can make myself available at any time 7 days a week. My tutoring experience so far has included Algebra (I, II),
Geometry, Trigonometry, Pre-Calc, Calculus (I, II), and Physics.
16 Subjects: including algebra 2, physics, calculus, geometry
...I try to put myself in the shoes of my students. This way, I know exactly how to show a student how to do a problem, because I can know exactly what parts of a problem a student is struggling
with. I never assume any knowledge that a student may have and I never say "don't worry, this is easy" ...
6 Subjects: including algebra 2, calculus, algebra 1, prealgebra
...I am familiar with how to tutor the TEAS and the changes that have recently come into play. I can get you to pass that test! I have tutored Social Studies for over ten years.
76 Subjects: including algebra 2, reading, biology, calculus
...I am certified in math in both Connecticut and New York, plus I previously taught Algebra 2. And FYI, I scored a perfect 10 out of 10 on the WyzAnt.com Algebra 2 certification test. You'll
definitely want to contact me for your Algebra 2 tutoring needs!
10 Subjects: including algebra 2, geometry, algebra 1, SAT math
...I have done this work with English language learners of many different proficiency levels, nationalities, and professions. In two positions, one at an international school in Uganda and one
with the Pittsburgh Public Schools, I worked to provide special needs students, a number of whom were on t...
39 Subjects: including algebra 2, reading, Spanish, ESL/ESOL
How Do I Use a Body Fat Caliper [Archive] - WannaBeBig Bodybuilding and Weightlifting Forums
Hey I have one of those manual calipers and I have no idea how to use it. Can anyone help me?
Ok, I see the measurements and where to measure, but I don't understand the formula for how to figure the exact percentage.
Thanks alot. You guys helped a bunch. I will let you guys know what I come up with
Interactive opportunity
Plus
Plus is an internet magazine published five times a year which aims to introduce readers to the beauty and the practical applications of mathematics. Whether you want to know how to build a sundial, how to keep your messages safe or what shape the universe is, it's all here. Published by the Millennium Mathematics Project team at Cambridge University, each issue has typically five articles, each restricted to a few pages long, and an interview with someone who uses mathematics in their career. Previous issues are archived. A wonderful resource for students, but also for others.

Maths by Email
Maths by Email is a free fortnightly newsletter produced by CSIRO in Australia, in partnership with the Australian Mathematical Sciences Institute. Each edition is provided by email to subscribers and contains a number of components including a feature article, a hands-on activity, a brain teaser, some web links and news of events in the world of mathematics. Although directed at upper primary students, the newsletter contains materials of interest to older students, their parents and teachers. The link above provides details and a subscription opportunity, while the current edition can be seen here.

MAA Columns
The Mathematical Association of America publishes a number of online columns, an outstanding monthly (or thereabouts) resource. Columns are archived and can all be accessed from this page. Most likely to be of value to senior secondary students are Keith Devlin's Devlin's Angle and Ivars Petersen's The Mathematical Tourist, written by two of the finest popularisers of mathematics alive today. Archives of these and other columns (including some very good discontinued ones) are available too. All readings are short, as a magazine column, and mostly accessible to a wide and educated audience.

Investigating Patterns
Jill Britton's website comprises a huge range of interesting activities and information associated with symmetry and tessellations. A wonderful site for browsing, and, although not really a site intended mainly for reading by students, there are so many interesting things here that students will find lots to read and think about (as well as to do). The site is lavishly illustrated and will help students see the connections between mathematical ideas and aesthetic ideas in powerful ways. Related sites by the same author offer a similar assortment for Curves and Topology and for Polyhedra.

NOVA
NOVA is published by the Australian Academy of Science and the mathematics section includes some good examples of mathematical modelling of various kinds, mostly located in an Australian context. The examples usually contain some text, a glossary of terms, some activities to explore, some further reading and some useful related web links. These will give students a good sense of how mathematics is used and important in a wide variety of settings.

The Fibonacci Site
The extraordinary site created by Dr Ron Knott in the UK provides a huge amount of fascinating reading related to the Fibonacci sequence and the golden rectangle, and is regularly updated with new and surprising connections between mathematics, the natural world, the built world and aesthetics, all seen through the lens of the amazing Fibonacci sequence. Millions of visitors and a large number of international awards attest to the quality of the material, which will help students see another side of mathematics in a delightful way.

NRICH
The wonderful NRICH site in the UK offers a range of materials to challenge and inspire younger pupils, including those of primary school age. The material is generally less sophisticated than that offered in the PLUS magazine, which is produced by the same group at Cambridge University. Amongst the materials are some nice enrichment articles for students, classified by stages, about various aspects of mathematics.

Numb3rs
This website allows (sophisticated) users to read about some of the mathematics behind the popular CBS television series, Numb3rs, with mathematical additions from Mathematica. It's a good, if rare, example of mathematics in the popular domain. Some of the mathematics from recent episodes is described and illustrated and can be explored using the software. (There is a link to other episodes at the top of the screen.)

Maths Careers
Although it is focussed on the UK, with different courses and nomenclature from those in Australia, this is a really nice attempt to help students at different stages see how important further study of mathematics is. The website is constructed and maintained by a consortium of professional associations, sponsored by the UK government, and gives a modern view of the place of mathematics in the world.

Engaging Maths
A good collection of modern examples of how mathematics plays a big, but frequently hidden, part in people's lives. The chosen areas of security, communications, the environment, finance, transport, industry and health are of central importance to the modern world, and the brief descriptions of the role played by mathematics are likely to interest many students and

Mathematics Illuminated
This is a 13-part video course produced for public television in the USA, rather than reading material, and is directed at teachers and adults rather than students. However, it is a recent series that is likely to be of interest to older students and will serve the important purpose of bringing a fresh perspective on mathematics in an engaging way. The series explores major themes in the field of mathematics, from humankind's earliest study of prime numbers, to the cutting-edge mathematics used to reveal the shape of the universe. The website contains interactives and a mathematical history chart (Family Tree) as well as videos that can be watched online (or purchased), but not downloaded, as well as a range of excellent support materials, including readable text.

Mathematical Movies
This website offers a number of short mathematical film clips, highlighting some of the ways in which mathematical ideas appear in the everyday world. The material is produced (and subscriptions sold) by The Futures Channel in the USA, so there is an inevitable bias towards US contexts. However, many of the film clips are likely to be of interest to students, and there are associated activity materials provided as well for downloading. The emphasis of the movies is on connecting mathematical ideas to the 'real world'.

Mathematical Moments
This nice series of mathematical moments is produced by the American Mathematical Society. It comprises brief snippets related to the ways in which mathematics is of relevance to today's world. The main materials comprise one-page posters (in US letter size, unfortunately) on contemporary topics, with many of these including also some supporting materials, including podcast interviews and web links in many cases. The posters can be printed for classroom display; a shorter and less technical version is available for most. The moments are sorted into categories of science, nature, technology and human culture, and some are available in languages other than English as well.

AMS Feature Columns
The American Mathematical Society hosts a monthly essay on some aspect of mathematics, with an archive of columns stretching back to the late 1990s. Columns vary in style and topic, although most are designed to be accessible to a wide audience, even if some of the mathematical details seem at first to be a little daunting. (As the editors note, even if you get a little bogged down, it is a good idea to look at what comes later in an article.) These columns give a good sense of the myriad ways that mathematics is used these days as well as topics that mathematicians find interesting.

Experiencing Mathematics
This is a wonderful virtual exhibit of many aspects of mathematics, sponsored by UNESCO. This site allows students (and their parents) an opportunity to experience many aspects of mathematical thinking in an interactive way. Many of the exhibits also include associated activities for printing. This is a first-class mathematical museum, for which there is a real version as well, touring internationally. (See the home page for details.) Available in four languages, one of which is English.

Dimensions
A lovely mathematical film that can be watched online, downloaded or purchased as an inexpensive DVD. Constructed by a French team of mathematicians and film-makers, it is available in several languages, including English. The film comprises nine chapters, dealing with various aspects of dimensions (in the spatial sense). There is a good guide available here, offering advice about ways of using the materials for teachers and others.

Mathematical Imagery
As the title suggests, many of the materials on this site, or linked to from this site, have a strong visual sense, so that a major attraction might be concerned with aesthetics. The site is maintained by the American Mathematical Society. In recent years, with the rise of computer graphics and the use of mathematics in films and other media, new mathematical ideas such as chaos and fractals have become very prominent. Many of these are visible on this site, and are of interest even if the details of the mathematics are beyond the 'readers', at least at first.

Powers of Ten
This website offers an opportunity to experience the effects of adding another zero to a number. Each of the successive screens shows a power of ten in a journey that goes from 'quarks to quasars'. The entire universe is explored, from the farthest reaches of human knowledge of space into the tiniest particles in an atom. A mind-boggling experience, a version of a wonderful black and white film by Charles and Ray Eames many years ago. While it will help students make sense of scientific notation, the website offers much more than that.

Math Monday
Each Monday, there is an addition to this collection of articles about mathematics in the everyday world, written by George Hart for an online blogging magazine (Make magazine) concerned with making things. With each item short and highly visual, the collection contains many interesting elements and is sponsored by the Museum of Mathematics.
Time value of money
1. 294716
Calculate the future value of $2000 in
a) 5 years at interest rate of 5% per year
b) 10 years at an interest rate of 5% per year
c) 5 years at an interest rate of 10% per year
d) why is the amount of interest earned in part (a) less than half the amount of interest earned in (b)
You are thinking of retiring. Your retirement plan will either pay you $250000 immediately on retirement or $350000 five years after
date of retirement. Which alternative should you choose if interest rate is;
0% per year
8% per year
20% per year
you have been offered a unique investment opportunity. If you invest $10,000 today you will receive $500 one year from now; $1500 two years from now and $10,000 ten years from now
what is the NPV of the opportunity if the interest rate is 6% per year? Should you take the opportunity?
what is the NPV of the opportunity if the interest rate is 2% per year? Should you take the opportunity?
What is the present value of $1000 paid at the end of each of the next 100 years if interest rate is 7% per year?
you have found 3 investment choices for a one year deposit
****10% APR compounded monthly
****10% APR compounded annually
****9% APR compounded daily
Compute the EAR for each investment with 365 days in the year
6)Key Bank is offering a 30 year mortgage with an EAR of 5 and 3/8%. If I borrow $ 150,000 what will be my monthly payment?
if the rate of inflation is 5% what nominal interest rate is necessary for you to earn a 3% real interest rate on your investment
8) your best taxable investment has an EAR of 4%. Your best tax-free investment opportunity has an EAR of 3%.
If your tax rate is 30%, which opportunity provides the higher after-tax interest rate?
The solution explains some multiple choice questions relating to time value of money
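As a point of reference for checking answers, here is a minimal sketch (in Python, using only the two standard formulas FV = PV(1 + r)^n and NPV = sum of CF_t/(1 + r)^t; the cash flows below are simply those of questions 1 and 3):

def future_value(pv, rate, years):
    # FV = PV * (1 + r)**n with annual compounding
    return pv * (1.0 + rate) ** years

def npv(rate, cashflows):
    # Net present value; cashflows[t] is received at the end of year t,
    # and cashflows[0] is the (usually negative) payment made today.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Question 1(a): $2000 for 5 years at 5% per year
print(round(future_value(2000, 0.05, 5), 2))   # about 2552.56

# Question 3: -10000 today, +500 in year 1, +1500 in year 2, +10000 in year 10
flows = [-10000, 500, 1500] + [0] * 7 + [10000]
print(round(npv(0.06, flows), 2))              # negative at 6%, so reject
print(round(npv(0.02, flows), 2))              # positive at 2%, so accept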
Posts by eli
Total # Posts: 85
A jogger with a mass of 66.0 kg reaches the top of a staircase in 44.0 s. If the distance from the bottom to the top of the staircase is 14.0 m, how much power will the jogger create in climbing the
Spanish Help please
These look correct to me
Please help me ASAP!!!
Three uniform spheres are fixed at the positions shown in the diagram. Assume they are completely isolated and there are no other masses nearby. (a) What is the magnitude of the force on a 0.20 kg
particle placed at the origin (Point P)? What is the direction of this force? (b...
Could you recheck this question? It seems there isn't any information as to if there is a slope, or not. Who's to say the amount of bacteria won't double, triple, or quintuple over a minute's time? I
only picked up that at 0 min. There are 3000 of the bacteria ...
World History
Lol I guess so
World History
By rallying commoners that were not French to serve their nation-states(?)
World History
Could someone explain how nationalism contributed to Napoleon's defeat?
how do i solve if it took 1.5 qt of paint to paint a wall 8 ft by 15 ft how many qts will it need to paint a wall that is 10 ft by 24ft.
Can you retype the equation please? I would love to help.
I already have it up, its about an essay, its right below this one in english.
No prob. You should help me with my question ;)
I agree with Naah, the other ones have proof, whereas, D is an opinion, because there is not really a way to "prove" it.
i would say D. The other ones have statistical evidence.
This is exactly what I needed. :)
Thank you very much.
Please be as critical as possible
Yes, here it is, I mostly need peer editing . Guy Montag lives a dystopian society, with a distant wife. He had never done anything extraordinary, however; he suddenly became a grand book thief.
Montag soon encountered a very different girl, by the name of Clarisse, after that...
I don't have a thesis yet, however I need to show how Fahrenheit 451 shows the hero's journey.
essay outline about the hero;s journey in fahrenheit 451?
Revenge-support groups? More communication? Protection-more qualified adults/police to help keep the peace/ Attention Seeking...All I can think of is support groups yet again, or psychiatrists
For pooled sp = (n1-1)s1^2 + (n2-1)s2^2 /(n1+n2-2) Sqrt(29841/41 ) = 26.978 Confidence interval The degrees of freedom of t is n1+ n2 -2 xbar1-xbar2 -+ta/2 *sp*sqrt(1/n1 +1/n2)) (39-26)-+ 2.02*
26.978sqrt(1/27+ 1/16)) (-4.19, 30.19)
xbar1 - xbar2 -+ ta/2 * sqrt(s1^2/n2 + s2^2/n2) 13 -+ 2.02* sqrt(21^2/27 + 35^2 /16)) (-6.47, 32.47)
In yeast, ethanol is produced from glucose under anaerobic conditions. What is the maximum amount of ethanol (in millimoles) that could theoretically be produced under the following conditions? A
cell-free yeast extract is placed in a solution that contains 3.50 × 102 mm...
Give two equivalent fractions for the smallest amount of problems you need to complete correctly on a homework assignment to get 85%
what is the value of y when x=2 in the equation y=-4x - 5?
a state of balance with nature
a motorbike has a mass of 300kg, and is being ridden along a straight road. The rider sees a traffic queue ahead. He applies the brakes and reduces the speed of the motorbike from 18m/s to 3 m/s. Use
this equation to calculate the kinetic energy lost by the motorbike. Kinetic ...
Every day Sally saved a penny, a dime, and a quarter. What is the least number of days required for her to save an amount equivalent to an integral number of dollars?
if a car generates 18 hp when traveling at a steady 88 km/h, what must be the average force exerted on the car due to friction and air resistance
A mettallurgist decided to make a special alloy. He melted 30 grams of Au(Gold) with 30 grams of Pb. This melting process resulted to 42 grams of an Au-Pb alloy and some leftover Pb. This scientist
wanted to make 420 grams of this alloy with no leftovers of any reactants. how ...
Can someone help me with these 2 questions from principles of management.thank you 1) Using the comprehensive approach, tell what steps you would take to manage a change. For each step, give specific
examples of actions you would take at that step. 2) Discuss social responsibi...
Melting Snowball: A spherical snowball melts at a rate proportional to its surface area. Show that the rate of change of the radius is constant. (Hint: Surface Area =4(pi)r^2)
Can anyone please help me with these questions for the final exam? 1) It is April, and you are the plant manager at a unionized parts warehouse. You have been asked by top management to inform the
line workers that the plant will be shut down during December due to excess part...
how much 2% solution do I need to convert 20 ml of 5% solution to a 3% solution?
Given that the side of the square has a length b-a, find the area of one of the four triangles and the area of the small inner square. Give the area of one of the triangles followed by the area of
the small inner square separated by a comma. Express your answers in terms of th...
Accuracy is how close a measured value is to the actual (true) value. Precision is how close the measured values are to each other.
Accuracy is how close a measured value is to the actual (true) value. Precision is how close the measured values are to each other. Say you are shooting at a Target. You shoot five times, and they
all hit the bulls eye near each other; this is accurate and precise, because the...
Accuracy is how close a measured value is to the actual (true) value. Precision is how close the measured values are to each other. but how do they relate to measurement?
Measurements can be described as accurate and/or precise. Define accuracy and precision and give examples of how those terms could be used to describe a measurement.
whole number
A pyramid is inscribed in a cube whose volume is 52pi cm^3. The volume of a sphere is equal tpo the volume of this pyramid. Find the volume of a second cube whose edge is equal to the radius of the
acrostic poem
Math goals
How many digits are in the smallest positive number divisible by th first ten numbers in the Fibonacci sequence?
Let U = {5, 10, 15, 20, 25, 30, 35, 40} A = {5, 10, 15, 20} B = {25, 30, 35, 40} C = {10, 20, 30, 40}. Find A ⋂ B.
find the volume of the solid whose cross section is perpendicular to the x axis from isosceles right triangles and has a leg that run from the lines y=-x to the line y=.5x
Calculate the total change in aggregate demand because of an initial $300 decrease in investment spending, given that C = 150 + 0.50YD. $1,200 decrease $300 decrease $150 decrease $600 decrease i
think it is 300 but i think i am wrong
Medical Coding, Part 1
10. Which of the following codes is used for the diagnosis "closed dislocation of the sternum"? A.839.61 C. 839.71 B.839.8 D. 839.9 11. A 50-year-old new female patient has had a sore throat and head
congestion for five days. The physician performs an expanded proble...
a sample of oxygen gas has a volume of 2730 cm3 at 21.0 C. At what temperature would gas have a volume at 4000 cm3?
needing to write 5 sentences that are no repeat sentences about New Years resolution and you can not repeat any words already used in any of the other sentences
College Algebra
Solve the system: x + y + z = -1 2x + 2y + 5z = 1 5x + 2y + 3z = 8 what is the value of x in the solution?
pre calculus
simplify sin(2 pi/3+x)
I have the first few words:did you hear about the water polo player who... whats the rest?
A stone is dropped into a deep well and is heard to hit the water 2.95 s after being dropped. Determine the depth of the well.
The principals is buying pizza for the school-wide pizza party. If there are 1852 students in the school & each student in the school & each student can eat approximately 3 slices of pizza, estimate
the 3 of slices the students will eat in total.
Which of the following is the formation reaction for CaO(s) ? A. CaO(s) Ca(s) + 21O2(g) B. Ca2+(aq) + O2 (aq) CaO(s) C. Ca(s) + 21O2(g) CaO(s). D. 2 Ca(s) + O2(g) 2CaO(s) E. 2 CaO(s) 2 Ca(s) + O2(g)
When 0.655 g of Ca metal is added to 200.0 mL of 0.500 M HCl(aq), a temperature increase of 107C is observed. Assume the solution's final volume is 200.0 mL, the density is 1.00 g/mL, and the heat
capacity is 4.184 J/gC. (Note: Pay attention to significant figures. Do not ...
When 0.945 g of CaO is added to 200.0 mL of 0.500 M HCl(aq), a temperature increase of 387C is observed. Assume the solution's final volume is 200.0 mL, the density is 1.00 g/mL, and the heat
capacity is 4.184 J/gC. (Note: Pay attention to significant figures. Do not round...
Josh is going to rent a truck for one day. There are two companies he can choose from, and they have the following prices. Company A charges and allows unlimited mileage. Company B has an initial fee
of and charges an additional for every mile driven. For what mileages will Co...
Chemistry Lab+Help
what is the structural formula for these two esters C3H6 02 and what are their names
ok thanks :)
i mean other than make...
what might be the synonym to the word "create" does someone know the good dictionary of synonyms (if something like this even exist....)
hmm how to write this in ur own words... I do agree that this poem is like "the whoosh of the imagination at work". Firstly, there is the style of writing in the poem. There is no punctuation at all
in this poem. This shows how quickly the boy's imagination is wo...
Fractions From greatest to least
1/9, 1/12,1/6
but i dont get it, mr norton relies on invisible man for his fate so therefore he isnt expressing emerson's theory. also he isnt teaching anyone everyhting is for his benefit: does that tell me
Invisible Man: Mr. Norton- "you are my fate, young man" he talks about emerson's self realiance and how emerson is important but isnt he going aganist emerson's theory of self reliance??
i am having problems solving this question can you please help? b[(12c-3a)2-10= a=2, b=3, c=4
A tennis ball is thrown vertically upward with an initial velocity of +8.0 m/s. a. what will the ball's speed be when it returns to its starting point? b. how long will the ball take to reach its
starting point?
im trying to write a sentence using the word dictator and traipse but i don't know know so far i have, "The dictator was..." i don't know if i should write the dictator was traipsing or something
If you need 138 balloons and have 5 bags, each bag has same number of balloons in it. We need 18 more balloons to finish How many balloons in each bag?
a type of breathing the word is 6 letters long.Please Help
what is body composition measured by using calipers?
How do I make an outline for a research paper?
social studies
what did Voltaire mean by saying government needs to have both shepherds and butchers
Marie-Josee intends to buy her father company. He wishes to sell her his company for a $850 000 amount because he would like to bequeath his inheritance to his daughter in his lifetime. Knowing the
big generosity of her father, Marie-Josee wishes to make projections to estimat...
15. it's money--$1,$2,$5....
A. If you would buy a machine and had to pay payments on it that would increase the asset and liability accounts. B. If you had to use an asset for an expense it would decrease both asset and owners
equity account. C. If you were to sell a property that you owned it would incr...
science - controlled experiment
i say that you are a poop monkey and there is a reason u have no friends *fart* poop fart
Coconut Creek, FL Math Tutor
Find a Coconut Creek, FL Math Tutor
...If I had to take the exam again today, after six years of college and decades of worldly experience, I would undoubtedly score even higher. I know how to take tests. Allow me to assist you in
achieving your highest possible score on this very important exam.
16 Subjects: including discrete math, algebra 1, algebra 2, calculus
...I would like to increase your potential to memorize and use all concepts relative to your success in your classes and in your own academic goals. I have experience learning diligently the
introductory and intermediate classes for high school and college chemistry (general chemistry sequence with...
36 Subjects: including prealgebra, chemistry, geometry, grammar
...I am certified as Math teacher by Florida Department of Education for grades 6-12. I have been teaching full-time astronomy at Palm Beach State College for two years. I have completed
astrophysics coursework required towards a master's degree, and I have certifications to teach it to younger ki...
27 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I have tutored Biological sciences, life sciences, anatomy 1 & 2 and physiology 1 & 2 for three consecutive semesters and mentored the students for the same for one whole semester. I have
experienced my students improving on their courses after taking tutoring lessons with me and I feel very del...
6 Subjects: including algebra 1, biology, anatomy, physiology
I graduated from the University of Florida in December 2012 with a degree in Criminology. I have since become certified to teach Math grades 5-9. I taught a 7th grade math class at the end of
last school year and have tutored students all the way up to grade 9 (Algebra).
5 Subjects: including discrete math, linear algebra, logic, algebra 1
how do i simplify 4x^2-7x+3 divided by x^2+5x-6?
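No answer survives in this capture, but the simplification comes down to factoring both polynomials and cancelling the common factor: 4x^2 - 7x + 3 = (4x - 3)(x - 1) and x^2 + 5x - 6 = (x + 6)(x - 1), so the quotient reduces to (4x - 3)/(x + 6) for x ≠ 1. A quick machine check, sketched here assuming Python with the sympy library installed:

from sympy import symbols, factor, cancel

x = symbols('x')

print(factor(4*x**2 - 7*x + 3))                          # (x - 1)*(4*x - 3)
print(factor(x**2 + 5*x - 6))                            # (x - 1)*(x + 6)
print(cancel((4*x**2 - 7*x + 3) / (x**2 + 5*x - 6)))     # (4*x - 3)/(x + 6)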
• A Triangle.
• A line defined by a point and a vector.
Asked to Write Code for following:
• How to test if line intersect inside triangle.
• none
Steps to Solve:
1. The basic idea is to find out if the line intersects the plane of the triangle, and then test whether this intersection point is inside the triangle, using a clockness test: if the three triangles
formed by each side of the triangle and the intersection point have the same clockness as the triangle, then the point is inside the triangle.
2. Start by finding Normal to triangle.
3. Take dot product of triangle's normal and line's vector.
Negative: the angle between the triangle's normal and the line's vector is greater than 90 degrees, which means the line hits the front of the triangle; this is the correct case for a hit.
Positive: the angle is less than 90 degrees, so the line's vector hits the triangle from the back side.
Zero: answer means line is parallel to triangle's plane.
4. Using the plane equation, solve for the intersection point by finding the time t along the line's vector to this point.
t is time if line's vector is given as units/time or t is distance if vector is given as just units, such as if given a line segment without a speed.
5. Use clockness to test if this point is inside triangle.
6. In 3D a triangle could be clockwise from the front view and counterclockwise from the back in XYZ space.
So you have to be careful of how you define your triangles clockness.
7. Function check_same_clock_dir finds the normal of the triangle formed by two points of the triangle and the intersection point. It then takes the dot product with the triangle's normal.
If the answer is zero or positive, the intersection point lies on the line or on the correct side of it to have the same normal as the triangle, and SAME_CLOCKNESS is returned. If the answer is
negative, DIFF_CLOCKNESS is returned.
8. Function check_intersect_tri returns 1 for an intersection, with pt_int filled in with the intersection point, and 0 for no intersection, in which case pt_int is not a valid answer.
9. CODE
Some Notes:
Could speed this up if you already stored the triangle's normal.
If your line is given as a unit vector and a speed, multiply the unit vector by the speed to make the line's vector.
Have not tested code but should work.
Code uses:
Everything is adds, subtractions, and multiplies with one divide.
// Start Code
// must include at least these
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define SAME_CLOCKNESS 1
#define DIFF_CLOCKNESS 0

typedef struct fpoint_tag
{
    float x;
    float y;
    float z;
} fpoint;

// example data: triangle pt1-pt2-pt3, line through linept with direction vect
fpoint pt1    = {0.0, 0.0, 0.0};
fpoint pt2    = {0.0, 3.0, 3.0};
fpoint pt3    = {2.0, 0.0, 0.0};
fpoint linept = {0.0, 0.0, 6.0};
fpoint vect   = {0.0, 2.0, -4.0};
fpoint pt_int = {0.0, 0.0, 0.0};

int check_same_clock_dir(fpoint pt1, fpoint pt2, fpoint pt3, fpoint norm)
{
    float testi, testj, testk;
    float dotprod;

    // normal of the triangle formed by pt1, pt2 and pt3
    testi = (((pt2.y - pt1.y)*(pt3.z - pt1.z)) - ((pt3.y - pt1.y)*(pt2.z - pt1.z)));
    testj = (((pt2.z - pt1.z)*(pt3.x - pt1.x)) - ((pt3.z - pt1.z)*(pt2.x - pt1.x)));
    testk = (((pt2.x - pt1.x)*(pt3.y - pt1.y)) - ((pt3.x - pt1.x)*(pt2.y - pt1.y)));

    // dot product with the reference triangle normal
    dotprod = testi*norm.x + testj*norm.y + testk*norm.z;

    if(dotprod < 0) return DIFF_CLOCKNESS;
    else            return SAME_CLOCKNESS;
}

int check_intersect_tri(fpoint pt1, fpoint pt2, fpoint pt3, fpoint linept, fpoint vect,
                        fpoint* pt_int)
{
    float V1x, V1y, V1z;
    float V2x, V2y, V2z;
    fpoint norm;
    float dotprod;
    float t;

    // vector from triangle pt1 to pt2
    V1x = pt2.x - pt1.x;
    V1y = pt2.y - pt1.y;
    V1z = pt2.z - pt1.z;

    // vector from triangle pt2 to pt3
    V2x = pt3.x - pt2.x;
    V2y = pt3.y - pt2.y;
    V2z = pt3.z - pt2.z;

    // normal of the triangle (cross product V1 x V2)
    norm.x = V1y*V2z - V1z*V2y;
    norm.y = V1z*V2x - V1x*V2z;
    norm.z = V1x*V2y - V1y*V2x;

    // dot product of normal and line's vector; if zero, the line is parallel to the triangle's plane
    dotprod = norm.x*vect.x + norm.y*vect.y + norm.z*vect.z;

    if(dotprod < 0)
    {
        // find t to the point where the line meets the triangle's plane
        t = -(norm.x*(linept.x - pt1.x) + norm.y*(linept.y - pt1.y) + norm.z*(linept.z - pt1.z))
            / dotprod;

        // if t is negative the line started past the triangle, so it can't hit the triangle
        if(t < 0) return 0;

        pt_int->x = linept.x + vect.x*t;
        pt_int->y = linept.y + vect.y*t;
        pt_int->z = linept.z + vect.z*t;

        // the point is inside the triangle if it has the same clockness with respect to all three sides
        if(check_same_clock_dir(pt1, pt2, *pt_int, norm) == SAME_CLOCKNESS)
        {
            if(check_same_clock_dir(pt2, pt3, *pt_int, norm) == SAME_CLOCKNESS)
            {
                if(check_same_clock_dir(pt3, pt1, *pt_int, norm) == SAME_CLOCKNESS)
                {
                    // answer in pt_int is inside the triangle
                    return 1;
                }
            }
        }
    }
    return 0;
}
// End Code
How to get the desired frequency in simplified synchronous machine?
When I simulate the model using the simplified synchronous machine, the output frequency is not correct. The parameters I am supposed to use to get a frequency of 400 Hz are N = 6000 rpm (the input to
the model is 628 rad/s) and pole pairs = 4. When I give these parameters I don't get a frequency of 400 Hz. Why is this? Are there any other parameters that I have to change? If so, please help me.
It's not easy to help you when you don't even check if your uploads work. Again, no photo copy, and no code...
Sorry, I don't know how to upload the pic here, and my simulation is very simple: I have a simplified generator connected to an RLC load. For an electrical engineer the above information is enough to
build the model in less than 2 minutes. I thought you would do that and give some useful solution. Anyway, thanks Robert :) thanks for your consideration.
I also need the same thing, and want to measure the frequency and its variation after increasing the load. Please help me, thanks.
The electrical frequency is, as stated in the comments:
f_elec = poles * f_mec / 2
I was wondering what you were doing, because I can't see where you get in trouble. In a synchronous machine, the output frequency is only a function of the number of poles, and the rotor frequency,
nothing else.
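A quick numerical check with the values from the question (treating the 628 rad/s input as the mechanical speed; this snippet is only an illustration in Python, not MathWorks code):

import math

omega_mech = 628.0                      # rad/s, the input given to the block
pole_pairs = 4

f_mech = omega_mech / (2 * math.pi)     # about 99.95 Hz, i.e. 6000 rpm / 60
f_elec = pole_pairs * f_mech            # same as poles * f_mech / 2 with 8 poles
print(f_mech, f_elec)                   # roughly 99.95 and 399.8, essentially 400 Hz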
Since I don't know how you achieved 0.153Hz, I asked to see your model, so that I could try to figure out where your error is.
Thus, the answer to your question: "Is there any other parameters that i have to change?" is No, nothing else must be changed.
Best of luck!
Parenthesizing in audio
The technique used by written mathematical notation to cue tree structure is insufficient for audio renderings. Using a wide array of delimiters to write mathematics works, since the eye is able to
quickly traverse the written formulae and pair off matching delimiters. The situation is slightly different in audio; merely announcing the delimiters as they appear is not enough -when listening to
a delimited expression, the listener has to remember the enclosing delimiters. This insight was gained as a result of work in summer 91[+], when we implemented a prototype audio formatter for
mathematical expressions. Fleeting sound cues (with the pitch conveying nesting level) were used to ``display'' mathematical delimiters, but deeply nested expressions were difficult to understand.
AsTeR enables a listener to keep track of the nesting level by using a persistent speech cue, achieved by moving along dim-children, when rendering the contents of a delimited expression. This, in
combination with fleeting cues for signalling the enclosing delimiters, permits a listener to better comprehend deeply nested expressions. This is because the ``nesting level information'' is
implicitly cued by the currently active voice (a persistent cue ) used to render the parenthesized expression.
To give some intuition, we can think of different visual delimiters as introducing different ``functional colors'' at different subtrees of the expression. Using different AFL states to render the
various subtrees introduces an equivalent ``audio coloring''. The structure imposed on the audio space by the AFL operators enables us to pick ``audio colors'' that introduce relative changes. This
notion of relative change is vital in effectively conveying nested structures.
Mathematical expressions are spoken as infix or prefix depending on the operator and the currently active rendering style. The large operators such as [tex2html_wrap5698], in addition to the
mathematical functions like [tex2html_wrap5700], are rendered as prefix. All other expressions are rendered as infix. A persistent speech cue indicates the nesting level -the AFL state is varied
along audio dimension dim-children before rendering the children of an operator. The number of new states is minimized -complexity of math objects and precedence of mathematical operators determine
if a new state is to be used (see s:post_processing for details on the complexity measure used). Thus, while new AFL states are used when rendering the numerator and denominator of
[tex2html_wrap5702], no new AFL state is introduced when rendering [tex2html_wrap5704]. Similarly, when rendering [tex2html_wrap5706], no new AFL state is used to speak [tex2html_wrap5708], but when
rendering [tex2html_wrap5710], a new AFL state is used to render the argument to [tex2html_wrap5712].
In the context of rendering sub-expressions, introducing new AFL states can be thought of as parenthesizing in the visual context. In the light of this statement, the above assertion about minimizing
AFL states can be interpreted as avoiding the use of unnecessary parentheses in the visual context. Thus, we write [tex2html_wrap5714], rather than [tex2html_wrap5716], but we use parentheses to
write [tex2html_wrap5718]. Analogously, it is not necessary to introduce a new state for speaking the fraction when rendering [tex2html_wrap5720], whereas a new rendering state is introduced to speak
the numerator and denominator of [tex2html_wrap5722].[+]
Dimension dim-children has been chosen to provide five to six unique points. This means that deeply nested structures such as continuous fractions are rendered unambiguously.
Consider the following example:
Here, the voice drops by one step as each level of the continuous fraction is rendered. Since this effect is cumulative, the listener can perceive the deeply nested nature of the expression. The
rendering rule for fractions is shown in fig:fraction-rule. Notice that this rendering rule handles simple fractions differently. When rendering fractions of the form [tex2html_wrap5726], no new AFL
states are used. In addition, there is a subtle verbal cue; when rendering simple fractions, AsTeR speaks ``over'' instead of ``divided by''. This distinction seems to make the renderings more
effective, and in some of the informal tests we have carried out, listeners disambiguated between expressions using this distinction without even being aware of it.
Figure: Rendering rule for fractions.
TV Raman
Thu Mar 9 20:10:41 EST 1995
Primes and Number Riddles
Date: 4/15/96 at 21:17:0
From: Anonymous
Subject: Pre Algebra: Finding primes as the result of solving number
I am having difficulty understanding this riddle. Please help me to
find the proper method to solve this and future riddles like it.
Thanks and here is the problem as it appears in its entirety.
Three different one-digit primes
Produce me, if you're using times;
If my digits you add,
Another prime will be had.
Two answers - and nothing else
Thank you very much for your assistance!
Date: 4/22/96 at 20:19:12
From: Doctor Joshua
Subject: Re: Pre Algebra: Finding primes as the result of solving
number riddles
By process of elimination, there are only four one-digit primes:
2, 3, 5, and 7.
Multiplying any three of these together:
2 x 3 x 5 = 30 Sum of digits = 3 (prime), this works!
2 x 3 x 7 = 42 Sum of digits = 6 (not prime)
2 x 5 x 7 = 70 Sum of digits = 7 (prime), this works as well.
3 x 5 x 7 = 105 Sum of digits = 6 (not prime)
So the answers to the riddle are 30 and 70.
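For readers who like to check such riddles exhaustively, a few lines of Python (not part of the original answer) enumerate every product of three different one-digit primes and keep those whose digit sum is prime:

```python
from itertools import combinations

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for trio in combinations([2, 3, 5, 7], 3):
    product = trio[0] * trio[1] * trio[2]
    digit_sum = sum(int(d) for d in str(product))
    if is_prime(digit_sum):
        print(trio, product, digit_sum)   # only 30 and 70 survive
```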
-Doctor Joshua, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Equation-Free Accelerated Simulations of the Morphological Relaxation of Crystal Surfaces
G. J. Wagner, X. Zhou, S. J. Plimpton, Int J for Multiscale Computational Engineering, 8, 423-439 (2010).
A method for accelerating kinetic Monte Carlo simulations of solid surface morphology evolution, based on the equation-free projective integration (EFPI) technique, is developed and investigated.
This method is demonstrated through application to the 1+1 dimensional solid-on-solid model for surface evolution. EFPI exploits the multiscale nature of a physics problem, using fine-scale
simulations at short times to evolve coarse length scales over long times. The method requires identification of a set of coarse variables that parameterize the system, and it is found that the most
obvious coarse variables for this problem, those related to the ensemble-averaged surface position, are inadequate for capturing the dynamics of the system. This is remedied by including among the
coarse variables a statistical description of the fine scales in the problem, which in this case can be captured by a two-point correlation function. Projective integration allows speedup of the
simulations, but if speed-up of more than a factor of around 3 is attempted the solution can become oscillatory or unstable. This is shown to be caused by the presence of both fast and slow
components of the two-point correlation function, leading to the equivalent of a stiff system of equations that is hard to integrate. By fixing the fast components of the solution over each
projection step, we are able to achieve speedups of a factor of 20 without oscillations, while maintaining accuracy.
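The projective-integration loop itself is simple to sketch. The fragment below is a generic illustration of the EFPI idea rather than the authors' code; fine_step, restrict, lift, and all step sizes are placeholders. A short burst of fine-scale steps estimates the time derivative of the coarse variables, which are then extrapolated over a much larger projective step.

```python
import numpy as np

def efpi(coarse0, fine_step, restrict, lift, n_fine=50, dt=1e-3, dT=1e-1, n_outer=100):
    """Generic equation-free projective integration sketch.

    fine_step : advances a fine-scale state by dt
    restrict  : maps a fine-scale state to coarse variables
    lift      : builds a fine-scale state consistent with coarse variables
    """
    coarse = np.asarray(coarse0, dtype=float)
    history = [coarse.copy()]
    for _ in range(n_outer):
        fine = lift(coarse)
        burst = []
        for _ in range(n_fine):                    # short burst of fine-scale dynamics
            fine = fine_step(fine, dt)
            burst.append(restrict(fine))
        burst = np.array(burst)
        times = dt * np.arange(1, n_fine + 1)
        slope = np.polyfit(times, burst, 1)[0]     # least-squares d(coarse)/dt
        coarse = burst[-1] + slope * dT            # project forward by the big step
        history.append(coarse.copy())
    return np.array(history)
```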
Expression Evaluation with Mixed Numbers
6.12: Expression Evaluation with Mixed Numbers
Created by: CK-12
Practice Expression Evaluation with Mixed Numbers
Have you ever had to combine pieces of something to make a whole?
Travis is doing exactly that. He has three pieces of pipe that have to be clamped together. They will be connected by a professional, but Travis needs to combine the pieces of pipe and figure out the
total length that he has.
The first piece of pipe measures $5 \frac{1}{3}$
The second piece of pipe measures $6 \frac{1}{2}$
The third piece of pipe measures $2 \frac{1}{3}$
If Travis is going to combine these together, then he has to add mixed numbers.
This Concept will teach you how to evaluate numerical expressions involving mixed numbers. Then we will return to this original problem once again.
Sometimes, we can have numerical expressions that have both addition and subtraction in them. When this happens, we need to add or subtract the mixed numbers in order from left to right.
Here is a problem with two operations in it. These operations are addition and subtraction. All of these fractions have the same common denominator, so we can begin right away. We start by performing
the first operation. To do this, we are going to add the first two mixed numbers.
Now we can perform the final operation, subtraction. We are going to take the sum of the first two mixed numbers and subtract the final mixed number from this sum.
Our final answer is $6\frac{1}{6}$
What about when the fractions do not have a common denominator?
When this happens, you must rename as necessary to be sure that all of the mixed numbers have one common denominator before performing any operations.
After this is done, then you can add/subtract the mixed numbers in order from left to right.
The fraction parts of these mixed numbers do not have a common denominator. We must change this before performing any operations. The lowest common denominator between 6, 6 and 2 is 6. Two of the
fractions are already named in sixths. We must rename the last one in sixths.
Next we can rewrite the problem.
Add the first two mixed numbers.
Now we can take that sum and subtract the last mixed number.
Don’t forget to simplify.
This is our final answer.
Now it's time for you to try a few on your own. Be sure your answer is in simplest form.
Example A
Solution: $7 \frac{5}{8}$
Example B
Solution: $5 \frac{4}{9}$
Example C
$2\frac{1}{3}+ 5\frac{1}{3}-6\frac{1}{4}=\underline{\;\;\;\;\;\;\;\;\;}$
Solution: $1 \frac{5}{12}$
Now back to Travis and the pipe. Here is the original problem once again.
Travis is doing exactly that. He has three pieces of pipe that have to be clamped together. They will be connected by a professional, but Travis needs to combine the pieces of pipe and figure out the
total length that he has.
The first piece of pipe measures $5 \frac{1}{3}$
The second piece of pipe measures $6 \frac{1}{2}$
The third piece of pipe measures $2 \frac{1}{3}$
If Travis is going to combine these together, then he has to add mixed numbers.
To solve this, we can begin by writing an expression that shows all three mixed numbers being added together.
$5\frac{1}{3}+ 6\frac{1}{2} + 2\frac{1}{3}=\underline{\;\;\;\;\;\;\;\;\;}$
Now we can convert all of the mixed numbers to improper fractions.
$\frac{16}{3} + \frac{13}{2} + \frac{7}{3}$
Next, we rename each fraction using the lowest common denominator. The LCD of 3 and 2 is 6.
$\frac{32}{6} + \frac{39}{6} + \frac{14}{6}$
Next, we add the numerators.
$\frac{85}{6} = 14 \frac{1}{6}$
This is our answer.
Here are the vocabulary words in this Concept.
Mixed Number
a number that has a whole number and a fraction.
Numerical Expression
a number expression that has more than one operation in it.
Operation
addition, subtraction, multiplication, or division.
Guided Practice
Here is one for you to try on your own.
$2\frac{1}{8}+ 3\frac{1}{4}-2\frac{1}{2}=\underline{\;\;\;\;\;\;\;\;\;}$
To start, we need to convert all of the mixed numbers to improper fractions.
$\frac{17}{8} + \frac{13}{4} - \frac{5}{2}$
Now we rename each fraction using the lowest common denominator. The LCD of 8, 4 and 2 is 8.
$\frac{17}{8} + \frac{26}{8} - \frac{20}{8}$
Now we can combine and simplify.
$\frac{23}{8} = 2 \frac{7}{8}$
This is our answer.
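For readers who want to double-check answers like these, Python's fractions module does the bookkeeping exactly; here is the guided practice problem evaluated that way:

```python
from fractions import Fraction

# 2 1/8 + 3 1/4 - 2 1/2, evaluated from left to right with exact fractions
result = Fraction(17, 8) + Fraction(13, 4) - Fraction(5, 2)
whole, part = divmod(result.numerator, result.denominator)
print(result, "=", whole, "and", part, "/", result.denominator)   # 23/8 = 2 and 7/8
```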
Video Review
Here are videos for review.
Khan Academy Subtracting Mixed Numbers
James Sousa Subtracting Mixed Numbers
James Sousa Example of Subtracting Mixed Numbers
Directions: Evaluate each numerical expression. Be sure your answer is in simplest form.
1. $2\frac{1}{3}+4\frac{1}{3}-1\frac{1}{3}=\underline{\;\;\;\;\;\;\;\;\;}$
2. $6\frac{2}{5}+6\frac{2}{5}-1\frac{1}{5}=\underline{\;\;\;\;\;\;\;\;\;}$
3. $7\frac{3}{9}+8\frac{1}{9}-1\frac{2}{9}=\underline{\;\;\;\;\;\;\;\;\;}$
4. $8\frac{3}{10}+2\frac{5}{10}-6\frac{4}{10}=\underline{\;\;\;\;\;\;\;\;\;}$
5. $6\frac{1}{5}+2\frac{3}{5}-1\frac{1}{5}=\underline{\;\;\;\;\;\;\;\;\;}$
6. $9\frac{4}{9}+2\frac{4}{9}-3\frac{5}{9}=\underline{\;\;\;\;\;\;\;\;\;}$
7. $6\frac{9}{12}+3\frac{2}{12}-8\frac{4}{12}=\underline{\;\;\;\;\;\;\;\;\;}$
8. $7\frac{8}{9}-1\frac{1}{9}+1\frac{3}{9}=\underline{\;\;\;\;\;\;\;\;\;}$
9. $6\frac{4}{8}+3\frac{4}{8}-6\frac{6}{8}=\underline{\;\;\;\;\;\;\;\;\;}$
10. $14\frac{2}{3}-2\frac{1}{3}+1\frac{1}{3}=\underline{\;\;\;\;\;\;\;\;\;}$
11. $12\frac{6}{9}+12\frac{8}{9}-10\frac{7}{9}=\underline{\;\;\;\;\;\;\;\;\;}$
12. $9\frac{1}{7}+12\frac{3}{7}+1\frac{2}{7}=\underline{\;\;\;\;\;\;\;\;\;}$
13. $14\frac{3}{4}+2\frac{1}{4}-1\frac{3}{4}=\underline{\;\;\;\;\;\;\;\;\;}$
14. $18\frac{6}{15}+2\frac{3}{15}-4\frac{2}{15}=\underline{\;\;\;\;\;\;\;\;\;}$
15. $12\frac{1}{9}+2\frac{1}{3}-1\frac{1}{6}=\underline{\;\;\;\;\;\;\;\;\;}$
Kalman filtered compressed sensing
Results 1 - 10 of 40
- in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2009
"... Abstract—We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some
errors. The “known ” part of the support, denoted, may be available from prior knowledge. Alternatively, in a ..."
Cited by 42 (14 self)
Add to MetaCart
Abstract—We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors.
The “known ” part of the support, denoted, may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the
support estimate from the previous time instant as the “known ” part. The idea of our proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that
satisfies the data constraint and is sparsest outside of. We obtain sufficient conditions for exact reconstruction using modified-CS. These are much weaker than those needed for compressive sensing
(CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called regularized modified-CS (RegModCS) is
developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown. Index Terms—Compressive sensing, modified-CS, partially known
support, prior knowledge, sparse reconstruction.
- IEEE TSP
"... Abstract—We consider the problem of recursively and causally reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of noisy
linear measurements. The sparsity pattern is assumed to change slowly with time. The key idea of our proposed ..."
Cited by 13 (9 self)
Add to MetaCart
Abstract—We consider the problem of recursively and causally reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of noisy linear
measurements. The sparsity pattern is assumed to change slowly with time. The key idea of our proposed solution, LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the observation by CS
on the least squares (LS) residual computed using the previous estimate of the support. We bound CS-residual error and show that when the number of available measurements is small, the bound is much
smaller than that on CS error if the sparsity pattern changes slowly enough. Most importantly, under fairly mild assumptions, we show “stability ” of LS-CS over time for a signal model that allows
support additions and removals, and that allows coefficients to gradually increase (decrease) until they reach a constant value (become zero). By “stability, ” we mean that the number of misses and
extras in the support estimate remain bounded by time-invariant values (in turn implying a time-invariant bound on LS-CS error). Numerical experiments, and a dynamic MRI example, backing our claims
are shown. Index Terms—Compressive sensing, least squares, recursive reconstruction, sparse reconstructions. I.
, 2010
"... Abstract—This paper considers the problem of recovering time-varying sparse signals from dramatically undersampled measurements. A probabilistic signal model is presented that describes two
common traits of time-varying sparse signals: a support set that changes slowly over time, and amplitudes that ..."
Cited by 12 (5 self)
Add to MetaCart
Abstract—This paper considers the problem of recovering time-varying sparse signals from dramatically undersampled measurements. A probabilistic signal model is presented that describes two common
traits of time-varying sparse signals: a support set that changes slowly over time, and amplitudes that evolve smoothly in time. An algorithm for recovering signals that exhibit these traits is then
described. Built on the belief propagation framework, the algorithm leverages recently developed approximate message passing techniques to perform rapid and accurate estimation. The algorithm is
capable of performing both causal tracking and non-causal smoothing to enable both online and offline processing of sparse time series, with a complexity that is linear in all problem dimensions.
Simulation results illustrate the performance gains obtained through exploiting the temporal correlation of the time series relative to independent recoveries. I.
- IEEE J. Sel. Topics Signal Process , 2011
"... Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated.
Existing algorithms do not consider such temporal correlation and thus their performance degrades signific ..."
Cited by 12 (3 self)
Add to MetaCart
Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated.
Existing algorithms do not consider such temporal correlation and thus their performance degrades significantly with the correlation. In this work, we propose a block sparse Bayesian learning
framework which models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the
presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity on the solution matrix. We also provide
analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV
problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
- in Euro. Conf. Comp. Vision , 2010
"... Abstract. Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate.
Despite significant progress in the theory and methods of CS, little headway has been made in compressive vid ..."
Cited by 12 (3 self)
Add to MetaCart
Abstract. Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite
significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events,
which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the
evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then
reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel
compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower
the compressive measurement rate considerably. We validate our approach with a range of experiments involving both video recovery, sensing hyper-spectral data, and classification of dynamic scenes
from compressive data. Together, these applications demonstrate the effectiveness of the approach.
"... Compressed sensing (CS) lowers the number of measurements required for reconstruction and estimation of signals that are sparse when expanded over a proper basis. Traditional CS approaches deal
with time-invariant sparse signals, meaning that, during the measurement process, the signal of interest d ..."
Cited by 10 (0 self)
Add to MetaCart
Compressed sensing (CS) lowers the number of measurements required for reconstruction and estimation of signals that are sparse when expanded over a proper basis. Traditional CS approaches deal with
time-invariant sparse signals, meaning that, during the measurement process, the signal of interest does not exhibit variations. However, many signals encountered in practice are varying with time as
the observation window increases (e.g., video imaging, where the signal is sparse and varies between different frames). The present paper develops CS algorithms for time-varying signals, based on the
least-absolute shrinkage and selection operator (Lasso) that has been popular for sparse regression problems. The Lasso here is tailored for smoothing time-varying signals, which are modeled as
vector valued discrete time series. Two algorithms are proposed: the Group-Fused Lasso, when the unknown signal support is time-invariant but signal samples are allowed to vary with time; and the
Dynamic Lasso, for the general class of signals with time-varying amplitudes and support. Performance of these algorithms is compared with a sparsity-unaware Kalman smoother, a support-aware Kalman
smoother, and the standard Lasso which does not account for time variations. The numerical results amply demonstrate the practical merits of the novel CS algorithms.
- in Proc. of IEEE International Conference on Image Processing,Nov
"... This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) – a solution for Distributed Video Coding (DVC) based on the recently emerging Compressed Sensing
theory. The DISCOS framework compressively samples each video frame independently at the encoder. However, it ..."
Cited by 9 (2 self)
Add to MetaCart
This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) – a solution for Distributed Video Coding (DVC) based on the recently emerging Compressed Sensing theory.
The DISCOS framework compressively samples each video frame independently at the encoder. However, it recovers video frames jointly at the decoder by exploiting an interframe sparsity model and by
performing sparse recovery with side information. In particular, along with global frame-based measurements, the DISCOS encoder also acquires local block-based measurements for block prediction at
the decoder. Our interframe sparsity model mimics state-of-the-art video codecs: the sparsest representation of a block is a linear combination of a few temporal neighboring blocks that are in
previously reconstructed frames or in nearby key frames. This model enables a block to be optimally predicted from its local measurements by l1-minimization. The DISCOS decoder also employs a sparse
recovery with side information to jointly reconstruct a frame from its global measurements and its local block-based prediction. Simulation results show that the proposed framework outperforms the
baseline compressed sensing-based scheme of intraframecoding and intraframe-decoding by 8 − 10dB. Finally, unlike conventional DVC schemes, our DISCOS framework can perform most encoding operations
in the analog domain with very low-complexity, making it be a promising candidate for real-time, practical applications where the analog to digital conversion is expensive, e.g., in Terahertz
imaging. Index Terms — distributed video coding, Wyner-Ziv coding, compressed sensing, compressive sensing, sparse recovery with decoder side information, structurally random matrices. 1.
, 2008
"... In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of
linear “incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The ..."
Cited by 9 (7 self)
Add to MetaCart
In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear
“incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal’s estimated nonzero
coefficients’ set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting
algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF
(or LS), i.e. the KF (or LS) which knows the true nonzero sets.
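The KF-CS / LS-CS idea sketched in this abstract can be illustrated compactly. The fragment below is a simplified stand-in rather than the authors' algorithm: the signal is estimated by least squares restricted to the currently known support, and a plain correlation threshold on the residual takes the place of the CS step that detects new support entries; the threshold tau is an arbitrary choice for the example.

```python
import numpy as np

def ls_cs_step(A, y, support, tau=0.5):
    """One time step of a simplified LS-CS-style update.

    A       : m x n measurement matrix
    y       : measurement vector at the current time
    support : set of indices currently believed to be nonzero
    """
    n = A.shape[1]
    x_hat = np.zeros(n)
    if support:
        idx = sorted(support)
        # Reduced-order (least-squares) estimate on the known support.
        x_hat[idx], *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    # Filtering error: the part of y the current support cannot explain.
    residual = y - A @ x_hat
    # Stand-in for CS on the residual: flag strongly correlated columns
    # as candidate additions to the support.
    correlations = np.abs(A.T @ residual)
    new_entries = set(np.flatnonzero(correlations > tau)) - set(support)
    return x_hat, set(support) | new_entries
```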
- in Asilomar Conf. on Signals, Systems and Computers 2009 , 2009
"... Abstract—Fixed-interval smoothing of time-varying vector processes is an estimation approach with well-documented merits for tracking applications. The optimal performance in the linear
Gauss-Markov model is achieved by the Kalman smoother (KS), which also admits an efficient recursive implementatio ..."
Cited by 6 (0 self)
Add to MetaCart
Abstract—Fixed-interval smoothing of time-varying vector processes is an estimation approach with well-documented merits for tracking applications. The optimal performance in the linear Gauss-Markov
model is achieved by the Kalman smoother (KS), which also admits an efficient recursive implementation. The present paper deals with vector processes for which it is known a priori that many of their
entries equal to zero. In this context, the process to be tracked is sparse, and the performance of sparsityagnostic KS schemes degrades considerably. On the other hand, it is shown here that a
sparsity-aware KS exhibits complexity which grows exponentially in the vector dimension. To obtain a tractable alternative, the KS cost is regularized with the sparsity-promoting ℓ1 norm of the
vector process – a relaxation also used in linear regression problems to obtain the leastabsolute shrinkage and selection operator (Lasso). The Lasso (L)KS derived in this work is not only capable of
tracking sparse time-varying vector processes, but can also afford an efficient recursive implementation based on the alternating direction method of multipliers (ADMoM). Finally, a weighted (W)-LKS
is also introduced to cope with the bias of the LKS, and simulations are provided to validate the performance of the novel algorithms. I.
"... Abstract—In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional
Kalman filter as a one-step update optimization procedure which leads us to a more unified framework, useful f ..."
Cited by 5 (1 self)
Add to MetaCart
Abstract—In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional Kalman
filter as a one-step update optimization procedure which leads us to a more unified framework, useful for incorporating sparsity constraints. We introduce three combinations of two sparsity
conditions (sparsity in the state and sparsity in the innovations) and write recursive optimization programs to estimate the state for each model. This paper is meant as an overview of different
methods for incorporating sparsity into the dynamic model, a presentation of algorithms that unify the support and coefficient estimation, and a demonstration that these suboptimal schemes can
actually show some performance improvements (either in estimation error or convergence time) over standard optimal methods that use an impoverished model.
Tangent lines and derivatives Help!
October 19th 2008, 03:26 PM
Tangent lines and derivatives Help!
I can solve derivatives alright but I don't understand tangent lines at all.
Find the equation of the tangent line to the curve http://webwork.math.uwyo.edu/webwork...494d69bcd1.png at the point http://webwork.math.uwyo.edu/webwork...61ec146821.png. The equation of this
tangent line can be written in the form http://webwork.math.uwyo.edu/webwork...9a0aa8b841.png
where m is?
And b is?
I solved the derivative down to
I think this is right and hopefully you needed to solve the derivative.
Thanks to anyone who can help
October 19th 2008, 03:50 PM
If $f$ is differentiable at $x_0$ then the equation of the tangent line at $\left( {x_0 ,f\left( {x_0 } \right)} \right)$ is $y = f'\left( {x_0 } \right)\left( {x - x_0 } \right) + f\left( {x_0 } \right)$.
Note that $f'\left( {x_0 } \right)$ is the slope of the tangent line.
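Since the curve in the original post is only visible as a broken image link, here is the same recipe applied to an invented example, f(x) = 12 cos x, which happens to pass through the point (pi/3, 6); the poster's actual function may well be different:

```python
import sympy as sp

x = sp.symbols('x')
f = 12 * sp.cos(x)                     # invented stand-in for the unknown curve
x0 = sp.pi / 3

m = sp.diff(f, x).subs(x, x0)          # slope m = f'(x0)
b = f.subs(x, x0) - m * x0             # intercept b, from y = m*x + b at x = x0
print(sp.simplify(m), sp.simplify(b))  # -6*sqrt(3) and 6 + 2*sqrt(3)*pi
```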
October 19th 2008, 04:01 PM
still confused
do I plug in (pi/3, 6) with the equation y - y1 = m(x - x1)? If so, how do you turn the derivative into an equation?
October 19th 2008, 04:23 PM
By asking that question, you have told me that you need some real help that is beyond an “On-Line” environment.
You need a sit-down with a live instructor.
Please ask your instructor to help you.
Help! Exam in one hour! Find equation of plane?
Can you find the direction vector of both lines? What does the cross product of two vectors give us with respect to information about the plane?
Before I dive in, can I clarify: the direction vector of line one is <-3, 2, 1> and line 2 is <-1, -2, 4>, and by taking the cross product I find the normal vector of the plane. With this normal
vector, I can take a point, either (2, -1, 3) or (8, -5, 1), and use the normal vector to find the equation of the plane?
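Assuming the two quoted points each lie on one of the lines, the whole computation can be checked numerically, for example:

```python
import numpy as np

d1 = np.array([-3, 2, 1])       # direction vector of line 1
d2 = np.array([-1, -2, 4])      # direction vector of line 2
normal = np.cross(d1, d2)       # normal to the plane containing both directions

point = np.array([2, -1, 3])
d = normal @ point              # plane equation: normal . (x, y, z) = d
print(normal, d)                # [10 11  8] 33

# The second point should satisfy the same equation if the setup is consistent.
print(normal @ np.array([8, -5, 1]))    # also 33
```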
Math Forum Discussions
Topic: Serra's Discovering Geometry, Rhoad's Geometry for Enjoyment
Replies: 17 Last Post: Jul 10, 2013 10:45 AM
Re: Serra's _Discovering Geometry_
Posted: Apr 3, 1995 6:23 AM
>Is this book referred to by Rhoad, Milauskas, ... written by teachers at
>a school called NewTrier(?) near Chicago.
This is the one.
> For example, in one of the sections there is presented the "Isosceles
>Triangle Theorem" without proof, with an indication that it will follow
>in the exercises. In the exercises, the following problem is presented:
>Given: AB = AC in Triangle ABC; Prove: <A = <B (I have used equals for
>congruent here.)
>Following is the "Proof"
>1. AB = AC 1. Given
>2. <A = <B 2. Isoscles Triangle Theorem
(This is odd. The authors never say "Isosceles Triangle Theorem" in any of
the example proofs. Instead they use an iconic shorthand for "If two sides
of a triangle are congruent then the base angles are congruent.")
I have the book in front of me and the isosceles triangle theorem is
thoroughly and rigorously proved immediately upon introduction in Section
3.6 (3.7 in the new edition).
I don't see the problem to which you refer, but in the previous section,
which introduces the definition of isosceles (et al.) triangles, there is
the following problem:
Given: AD and CD are legs of isosc. triangle ACD.
B is the midpoint of AC. (insert figure here)
Prove: <A = <C.
Except for the mention of "isosceles" (from which the student infers
AD=CD), this is virtually identical to problems in an earlier problem set
on drawing auxiliary lines. The student draws aux. line segment DB from
the vertex to midpoint B, proves the resultant triangles congruent by SSS,
and gets the angles by CPCTC. See also problem 5 in "Beyond CPCTC".
>Throughout, in the middle of a proof, suddenly a new "property",
>"theorem", or "postulate" is referenced without any foundation.
Where? What you may be objecting to is the way in which the authors use
sample problems to demonstrate how to use a new theorem or definition in a
proof. (E.g. final statement in sample problem 4 in section 3.6 (New
Edition: sample problem 3 in section 3.7).)
>Rarely, did I find a definition written with any degree of semblance of
The definitions are remarkably meaningful to the students I work with.
That's what's important, isn't it? how well it works for kids? The kids
end up knowing far more geometry and being far more capable of writing
mathematical arguments than the kids using any other book, rigorous and
deductive or exploratory and inductive. That's my experience with my small
sample size. Your mileage may vary -- but I note that in your message you
didn't say anything about how the book worked with kids.
>As far as the problems, I can not remember any problem contained therein
>that is not found in many other books.
You aren't serious! Look at the section on the three triangle congruence
theorems! Compare the variety of those problems to the problems in the
equivalent section of any other book! Notice how many of those problems
preview problems and ideas that will come later!
The merits of the Rhoad book didn't become apparent to me until I saw it in
use with kids. Similarly, I thought the Serra book was really neat until I
saw it in use with kids.
Tom McDougal University of Chicago Artificial Intelligence
Date Subject Author
4/2/95 Serra's Discovering Geometry, Rhoad's Geometry for Enjoyment Tom McDougal
4/2/95 Re: Serra's _Discovering Geometry_ Michael Keyton
4/3/95 Re: Serra's _Discovering Geometry_ Tom McDougal
4/3/95 Re: Serra's _Discovering Geometry_ Lynn Stallings
4/3/95 Re: Serra's _Discovering Geometry_ Lois B. Burke
4/4/95 Rhoad Geometry for Enjoyment and Challenge John A Benson
5/27/10 Re: Rhoad Geometry for Enjoyment and Challenge Todd Miller
12/20/12 Re: Rhoad Geometry for Enjoyment and Challenge Achla
7/10/13 Re: Rhoad Geometry for Enjoyment and Challenge Debbie Hunt
4/5/95 Re: Rhoad Geometry for Enjoyment and Challenge Lee Rudolph
4/5/95 Re: Rhoad Geometry for Enjoyment and Challenge Dennis Wallace
4/5/95 Re: Rhoad Geometry for Enjoyment and Challenge Lois B. Burke
4/6/95 Re: Serra's _Discovering Geometry_ Linda Dodge
4/11/95 Michael Keyton
4/6/95 Re: Serra's _Discovering Geometry_ Top Hat Salmon
4/7/95 Re: Serra's _Discovering Geometry_ (fwd) F. Alexander Norman
4/11/95 Re: Serra's _Discovering Geometry_ Mike de Villiers
4/29/08 Re: Serra's Discovering Geometry, Rhoad's Geometry for Enjoyment Raja Singh
Possible Answer
Explains how to compute the mean, median, mode, and range of a list of numbers.
How do you work out the mean, mode, average and range? Also, how do you work out line equations? Thanks.
Share your answer: how do you work out the mean, mode, median and range?
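As a concrete answer to the question, here is how the four quantities are usually computed for a small list of numbers (the data values are arbitrary):

```python
from statistics import mean, median, mode

data = [3, 7, 7, 2, 9, 4]
print("mean  :", mean(data))             # sum divided by count -> 5.333...
print("median:", median(data))           # middle of the sorted list -> 5.5
print("mode  :", mode(data))             # most frequent value -> 7
print("range :", max(data) - min(data))  # largest minus smallest -> 7
```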
Summary: Under consideration for publication in J. Fluid Mech. 1
The large-scale flow structure in turbulent
rotating Rayleigh-Bénard convection
Stephan Weiss and Guenter Ahlers
Department of Physics, University of California, Santa Barbara, CA 93106, USA
(Received 1 June 2011)
We report on the influence of rotation about a vertical axis on the large-scale circulation
(LSC) of turbulent Rayleigh-Bénard convection in a cylindrical vessel with aspect ratio
D/L = 0.50 (D is the diameter and L the height of the sample). The working fluid
was water at an average temperature Tav = 40℃ with a Prandtl number Pr = 4.38. For
rotation rates < 1 rad s⁻¹, corresponding to inverse Rossby numbers 1/Ro between
zero and twenty, we investigated the temperature distribution at the side wall and from it
deduced properties of the LSC. The work covered the Rayleigh-number range 2.3×10⁹ < Ra < 7.2×10¹⁰.
We measured the vertical side-wall temperature-gradient, the dynamics
of the LSC, and flow-mode transitions from single-roll states (SRS) to double-roll states
(DRS). We found that modest rotation stabilizes the SRS. For modest 1/Ro < 1 we found
the unexpected result that the vertical LSC plane rotated in the prograde direction (i.e.
faster than the sample chamber), with the rotation at the horizontal mid-plane faster
Math Forum Discussions
Topic: Newsletter: Math Forum Internet News 2.37 (Sept. 15)
Replies: 0
Newsletter: Math Forum Internet News 2.37 (Sept. 15)
Posted: Sep 13, 1997 10:54 AM
15 September 1997 Vol.2, No.37
Rainbows | New Prime | Middle School POW | Grading/Non-grading
How are rainbows formed? Why do they only occur when the sun
is behind the observer? If the sun is low on the horizon, at
what angle in the sky should we expect to see a rainbow?
This lab helps to answer these and other questions by
examining a mathematical model of light passing through a
water droplet. The contents include:
- How does light travel?
- Reflection
- Refraction
- Rainbows: Exploration
- Rainbows: Analysis
- Conclusion
Objectives of the lab:
- to examine the use of Fermat's Principle of least-time to
derive the Law of Reflection and the Law of Refraction
- to experimentally determine the angle at which rainbows
appear in the sky
- to understand geometric properties of rainbows by
analyzing the passage of light through a raindrop
- to apply a knowledge of derivatives to a problem in
the physical sciences
- to better understand the relation between the geometric,
symbolic, and numerical representation of derivatives
This lab from the Curriculum Initiative Project at the
University of Minnesota is based on a module that was
developed by Steven Janke and published in "Modules in
Undergraduate Mathematics and its Applications" in 1992.
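The central calculation of such a lab is easy to reproduce. The sketch below is an illustration (not part of the original module): it applies Snell's law with n ≈ 1.333 to a ray undergoing one internal reflection in a spherical droplet and minimizes the total deviation, which puts the rainbow roughly 42 degrees from the antisolar point.

```python
import numpy as np

n = 1.333                                             # refractive index of water
theta_i = np.radians(np.linspace(0.1, 89.9, 10000))   # angles of incidence
theta_r = np.arcsin(np.sin(theta_i) / n)              # Snell's law
# Total deviation for a ray refracted in, reflected once, and refracted out.
deviation = np.pi + 2 * theta_i - 4 * theta_r
rainbow_angle = np.degrees(np.pi - deviation.min())
print(round(rainbow_angle, 1))                        # roughly 42 degrees
```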
On August 24th, Gordon Spence, using a program written by
George Woltman, discovered what is now the largest known
prime number. The prime number, 2^(2976221)-1, is one of a
special class of prime numbers called Mersenne primes;
it is 895,932 digits long.
An introduction to prime numbers can be found in the
Dr. Math FAQ:
A project designed to challenge middle school students with
non-routine problems, and to encourage them to verbalize their
solutions. Responses will be read and assessed and comments
will be returned; incorrect solutions will be sent back with
an explanation of the error and students will be urged to
try again.
The problems are intended for students in grades 6-9 (ages
11-14), but may also be appropriate for students in other
grades. A variety of problem-solving techniques are encouraged, including:
- guess and check
- make a list
- draw a picture
- make a table
- act it out
- logical thinking
- algebraic equations
A conversation about the merits of giving grades, student
motivation and interest, the need for grades for college
admission, what grading does to the grader, the illusion
of absolute truth, the effects of praise in the classroom,
GPA, SAT and IQ tests, and the effects of testing.
Grading vs. non-grading was first mentioned in the context
of grade inflation by Michael Paul Goldenberg, during a
discussion of "At-risk Algebra Students":
The archives of the Math-Teach mailing list are hosted by
the Math Forum:
CHECK OUT OUR WEB SITE:
The Math Forum http://forum.swarthmore.edu/
Ask Dr. Math http://forum.swarthmore.edu/dr.math/
Problem of the Week http://forum.swarthmore.edu/geopow/
Internet Resources http://forum.swarthmore.edu/~steve/
Join the Math Forum http://forum.swarthmore.edu/join.forum.html
SEND COMMENTS TO comments@forum.swarthmore.edu
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__/ /\ /| |
\ \ / \ / \ /o\ / \ / \ / | / \ / \
The Math Forum ** 15 September 1997
You will find this newsletter, a FAQ,
and subscription directions archived at
Vectors-finding distance
Find the distance of coordinate (1,0,0) from the line r=t(12i-3j-4k)
kingkaisai2 Although there is a formula, use the following (you can derive the formula using it). One thing I want to say is that the distance between a line and a point is the length of the
perpendicular drawn from the point to the line. Let P(x,y,z) be a point lying on the line. (Let your point be A.) For distance AP to be minimum, line segment AP should be perpendicular to the line.
(Using this condition you could find the x,y,z.) After finding P, you could find distance AP using the distance formula. Keep Smiling Malay
Sweetheart you are going to have to state your questions better next time you post. Theorem: The distance from point $P$ to a line in $\mathbb{R}^3$ (3 dimensions) is $\frac{||\bold{v}\times \bold{QP}||}{||\bold{v}||}$, where $\bold{v}$ is aligned with the line and $Q$ is any point on the line.
----- Solution -----
First find $\bold{v}$, any vector aligned with your line, which for example is $\bold{v}=12\bold{i}-3\bold{j}-4\bold{k}$. Now select a point on the line; I will select for simplicity the origin $Q=(0,0,0)$, so that $\bold{QP}=\bold{i}$. Thus, $\bold{v}\times \bold{QP}=\left| \begin{array}{ccc} \bold{i} & \bold{j} & \bold{k}\\ 12 & -3 & -4 \\ 1 & 0 & 0 \end{array} \right| = -4\bold{j}+3\bold{k}$. Thus, $||\bold{v} \times \bold{QP}|| = ||-4\bold{j}+3\bold{k}|| = \sqrt{0^2+4^2+3^2} = 5$. And $||\bold{v}|| = \sqrt{12^2+3^2+4^2} = 13$. Therefore, the distance is their quotient as in the theorem, $\mbox{distance} = \frac{5}{13}$.
Last edited by ThePerfectHacker; July 15th 2006 at 08:05 PM.
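The same answer can be confirmed numerically, for example with NumPy:

```python
import numpy as np

v = np.array([12, -3, -4])     # direction of the line r = t(12i - 3j - 4k)
Q = np.zeros(3)                # a point on the line (take t = 0)
P = np.array([1, 0, 0])        # the point whose distance we want

distance = np.linalg.norm(np.cross(v, P - Q)) / np.linalg.norm(v)
print(distance)                # 0.3846... = 5/13
```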
Research Directions in Computational Mechanics
UNCERTAINTY AND STOCHASTIC PROCESSES IN MECHANICS
Probabilistic computational mechanics is the methodology that forms the basis of the structure
reliability and risk analysis of mechanical components and systems. Reliability and risk analyses are of critical importance both to public safety and the competitiveness of products manufactured in
the United States. Reliability analysis applications to enhance public safety include the performance of structures when subjected to seismic loads, determination of inspection cycles for aging
aircraft, and evaluation of existing infrastructure such as bridges and lifelines. In the design of mechanical components and systems where safety is not a crucial issue, reliability engineering is
also important because it can provide cost-effective methods for enhanced fabrication and inspection. The problem common in reliability engineering is that certain features of the problem are
uncertain or stochastic in character. Two of the most important sources of uncertainty in reliability engineering are unavoidable defects in the structures such as cracks and the environment, which
includes factors such as load and temperature.
PROBABILISTIC FRACTURE MECHANICS
Cracks, whose behavior is described by the field of fracture mechanics, are one of the most pervasive causes of failure
and therefore play a critical role in reliability engineering. Many of the problems of aging structures and aircraft, component life, and behavior under extreme loads are due to the growth of minor
defects into major cracks. The growth of cracks is, however, an inherently stochastic process. Both the sizes and locations of the initial defects that lead to major cracks are random, and the growth
of a crack under cyclic loading is stochastic in character. Generally, the growth of a crack under cyclic loading is modeled by the Paris law, in which the length a of the crack is governed by da/dn = D(ΔK)^m, where
n is the number of load cycles, and ΔK is the range of the stress intensity factor in the load cycle; D and m are constants that are
fit to experimental data and exhibit significant scatter, or randomness. In current engineering practice the reliability of a structure against
excessive crack growth is usually ascertained by performing linear stress analysis and then using S-n charts, which provide the engineer with the probability of failure due to fatigue fracture of a
component subjected to n cycles to a maximum stress S. These S-n charts are usually based on a simple rod specimen subjected to a uniaxial state of stress, which may be quite different from the
complex stress pattern encountered in an actual component. Furthermore, the assumption of a perfect cyclic character with an amplitude that does not vary with time is usually quite unrealistic.
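Because the initial flaw size and the Paris constants D and m all scatter, one natural computational alternative is direct Monte Carlo simulation of crack growth. The sketch below is a minimal illustration of that idea; every numerical value in it is a placeholder chosen for the example, not data from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000
cycles_per_block = 1_000
delta_sigma = 200.0            # MPa, placeholder constant-amplitude stress range
a_crit = 0.02                  # m, placeholder critical crack length

# Illustrative random inputs: initial flaw size and Paris constants D, m.
a0 = rng.lognormal(mean=np.log(1e-3), sigma=0.3, size=n_samples)    # m
D = rng.lognormal(mean=np.log(1e-10), sigma=0.3, size=n_samples)
m = rng.normal(3.0, 0.15, size=n_samples)

a = a0.copy()
failed_at = np.full(n_samples, np.inf)
for block in range(1, 51):                         # up to 50,000 load cycles
    delta_K = delta_sigma * np.sqrt(np.pi * a)     # simple center-crack geometry
    a = a + cycles_per_block * D * delta_K ** m    # Paris law: da/dn = D (ΔK)^m
    newly_failed = (a >= a_crit) & np.isinf(failed_at)
    failed_at[newly_failed] = block * cycles_per_block

print("estimated failure probability within 50,000 cycles:",
      np.mean(np.isfinite(failed_at)))
```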
Computational mechanics is now reaching the stage where the actual growth of cracks in structures can be modeled along with the uncertainties in the crack growth law, initial flaw size, and
randomness in loading. These methods can be based on Monte Carlo procedures; however, they are often expensive in terms of computer cost. Alternatively, the approximations of first-and second-order
moment methods may be used. As described later, the latter may not be of sufficient accuracy in cases where the underlying problem is strongly nonlinear. To make these advances useful to engineers,
better methods and an improved understanding of the limitations of available methods for these problems is needed. The stochasticity in parameters D and m in the Paris law is probably due to
randomness in the strength or toughness of the material and the randomness of the microstructure of the material. These ideas have been examined only very cursorily. A better understanding and
methodologies for treating these problems are urgently needed for the following reasons: The development of the Paris law data involves many tests, which are often not feasible when advanced,
high-cost materials are considered, and the Paris law is directly applicable only to mode I crack growth (crack growth under tension) and is not applicable to cracks that do not remain rectilinear,
as in the presence of shear or in three-dimensional crack models. By computational studies of the stochastic character of materials and their failure in conjunction with experiments, it may be
possible to develop more generally applicable crack growth laws. The implications of such improved computational mechanics methodologies are quite startling. It would be
possible to relate lifetimes of components to the size and distribution of defects that are introduced in the fabrication process and, thus, design
fabrication processes and optimal cost effectiveness. Inspection cycle and nondestructive evaluation techniques for structures such as bridges, pipelines, and aircraft could be optimized for
reliability and cost.
UNCERTAINTY AND RANDOMNESS IN LOADS
Loads are the second major source of uncertainty in reliability analysis. Loads, man-made or natural, acting on mechanical and structural
systems are often difficult to predict in terms of their time of occurrence, duration, and intensity. The temporal and spatial load characteristics needed for detailed analysis are also subject to
considerable uncertainty. Nowhere in the engineering field does this fact manifest itself more strongly than in earthquake engineering. In view of this, the uncertainty issues associated with
earthquake engineering, particularly with earthquake ground accelerations as loads to mechanical and structural systems, are used as an example to demonstrate the complexity of the problem associated
with the uncertainty in loading conditions. There are many ways in which strong-motion earthquake phenomena can be modeled from the engineering point of view. Each model consists of a number of
component models that address themselves to particular phenomena of seismic events. For example, a succession of earthquake arrival times at a site may be modeled as a stationary or nonstationary
Poisson process, and the duration of significant ground motion in each earthquake may be modeled as a random variable with its distribution function specified. Also, temporal and spatial
ground-motion characteristics may be idealized as a trivariate and three-dimensional nonstationary and nonhomogeneous stochastic wave with appropriate definitions of intensity. Although further study
is definitely needed, the progress made in this area has been rather remarkable. Some of the current models are able to reflect the randomness in the seismic source mechanism, propagation path, and
surface layer soil amplification. The difference in ground motion and resulting structural response estimates arising from the use of various models represents modeling as well as parametric
uncertainties, since each component model contains a certain number of parameters to which appropriate values must be assigned for numerical analysis. Hence, the total uncertainty consists of
modeling uncertainty and parametric uncertainty.
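As a concrete, deliberately simplified illustration of the first of these component models, earthquake occurrence under a stationary Poisson assumption can be simulated by drawing exponential inter-arrival times; the rate and the lognormal duration model below are placeholder choices, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.2          # placeholder: mean of 0.2 significant events per year
horizon = 50.0      # years of simulated exposure

# Stationary Poisson process: exponential inter-arrival times.
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=100))
arrivals = arrivals[arrivals < horizon]

# Placeholder duration model: lognormal seconds of strong ground motion.
durations = rng.lognormal(mean=2.5, sigma=0.5, size=arrivals.size)

for t, d in zip(arrivals, durations):
    print(f"event at year {t:5.1f}, strong motion lasts {d:5.1f} s")
```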
In fact, a number of methods are available and have been used to identify the extent of uncertainty of parametric origin. The process for modeling
uncertainty appears to be limited by the extent of the plausible models that can be constructed and the ability to examine the variability of the results from these different models. The degree of
variability expressed in terms of range or any other meaningful quantity may be seen as representative of modeling error when, for example, ''best estimates'' are used for parameters within each
model.
SYSTEM STOCHASTICITY
The last several years have seen a resurgence of research interest in the area of system stochasticity. The problem of system stochasticity arises when, among other
things, the stochastic variability of the system parameters must be taken into consideration for evaluation of the system reliability under specified loading conditions. Indeed, the parameters that
control the constitutive behavior, crack growth, and strength of the material tend to be intrinsically random and/or uncertain due to a lack of knowledge. The stochastic variability of these parameters
is idealized in terms of stochastic fields, multivariate and multidimensional as appropriate, for continuous systems and by means of a multivariate random variable for discretized systems. The
resurgence of interest appears to have arrived at a time when the finite element method has finally reached its maturity, so that the finite element solution to the problem of system stochasticity
can augment existing software packages and thus provide added value. The recent effort in this direction has led to the establishment of a genre nouveau, "stochastic finite elements." However, many
important issues remain to be addressed. In fact, it was only recently that the basic accuracy and convergence issue arising from the various methods of approximation was addressed in the context of
Neuman expansion or Born approximation, primarily when dealing with static problems. Not only that, but the issue of stochastic shape functions has never really been resolved. From the purely
technical point of view, the subsequent comments seem in order with respect to stochastic finite element methods. Exact analytic solutions are available only for simple structures subjected to static
loads. Mean-centered perturbation methods are the most widely used, accurate only for small values of variability of the stochastic properties of
the system and inadequate to deal with nonlinear and/or dynamic problems. Solutions based on the variability response function are accurate only for
small values of variability of the stochastic properties of the system and inadequate to deal with nonlinear and/or dynamic problems. In the analysis of response variability arising from system
stochasticity, however, introduction of the variability response function has provided conceptual and practical novelty. The same analytical procedure can be used as in random vibration analysis
where the response variance is obtained from the integral over the frequency of the frequency response function squared multiplied by the spectral density function of the stationary random input
function. Monte Carlo simulation techniques are accurate for any variability value of the system's stochastic properties, applicable to nonlinear and dynamic problems, less time consuming, and more
efficient in static problems than if Neuman expansion methods are used, and are applicable to non-Gaussian fields.
BOUNDING TECHNIQUES
The primary difficulty associated with probabilistic models
dealing with intrinsic randomness and other sources of uncertainty often lies in the fact that a number, for that matter usually a large number, of assumptions must be made in relation to the random
variables and/or stochastic processes that analytically idealize the behavior of mechanical and structural systems. In this regard the following statement is in order: When uncertainty problems cloud
the process of estimating the structural response, the use of bounding techniques permits estimation of the maximum response, which depends on only one or two key parameters of design and analysis.
The maximum response thus estimated provides a good idea as to the range of the structural response variability. Although the applicability of bounding techniques is, at this time, limited to less
complicated load and structural models, a strong case can be made for the use and further development of this technique. The bounding techniques indicated here for system stochasticity are of great
engineering significance because these bounds can be estimated without knowledge of the spatial autocorrelation function of the stochastic field, which is difficult, if not impossible, to establish
Quadratic Progressions.
June 12th 2006, 06:31 PM #1
Global Moderator
Nov 2005
New York City
Quadratic Progressions.
Consider the following sets.
$S_1=\{a_1n^2+b_1n+c_1|n\in \mathbb{Z}\}$
$S_2=\{a_2n^2+b_2n+c_2|n\in \mathbb{Z}\}$
$S_k=\{a_kn^2+b_kn+c_k|n\in \mathbb{Z}\}$
Where, $a_1,a_2,...,a_k \not= 0$
Prove that,
$\bigcup_{j=1}^k S_j \not= \mathbb{Z}$
For any $k\geq 1$
What I am trying to show, informally, is that the set of integers cannot be placed in classes of quadratic progressions.
For example, with linear progressions it is always possible. Consider
$3k,3k+1,3k+2$ - they contain all the numbers.
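The contrast can also be seen numerically. The quick check below (an illustration with arbitrarily chosen coefficients, not part of the original post) counts how many integers below X are hit by a few quadratic progressions; the covered fraction shrinks roughly like $1/\sqrt{X}$, which is the substance of the reply that follows:

```python
import math

# Count integers in [0, X) covered by a few quadratic progressions a*n^2 + b*n + c.
forms = [(1, 0, 0), (2, 1, 3), (3, -2, 7)]     # arbitrary example coefficients

for X in (10**3, 10**5, 10**7):
    covered = set()
    for a, b, c in forms:
        limit = int(math.isqrt(X // a)) + abs(b) + 2   # safely past the last useful n
        for n in range(-limit, limit + 1):
            value = a * n * n + b * n + c
            if 0 <= value < X:
                covered.add(value)
    print(X, len(covered), f"fraction covered = {len(covered) / X:.4f}")
```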
Consider the number of integers less than X which are represented by the form $an^2 + bn + c$. It is not too hard to show that this number is approximately $\sqrt{X/a}$. So the number of integers
represented by one of k such forms is at most a quantity approximately $\sqrt{X} \sum_{i=1}^k \frac1{\sqrt{a_i}}$ and for large enough X this must be less than X. Hence some numbers are not so
June 24th 2006, 03:22 AM #2
gigabits per second
<unit> (Gbps) A unit of information transfer rate equal to one billion bits per second. Note that, while a gigabit is defined as a power of two (2^30 bits), a gigabit per second is defined as a power
of ten (10^9 bits per second, which is slightly less than 2^30).
Last updated: 2004-02-10
Copyright Denis Howe 1985
automorphism, in mathematics, a correspondence that associates to every element in a set a unique element of the set (perhaps itself) and for which there is a companion correspondence, known as its
inverse, such that one followed by the other produces the identity correspondence (i); i.e., the correspondence that associates every element with itself. In symbols, if f is the original
correspondence and g is its inverse, then g(f(a)) = i(a) = a = i(a) = f(g(a)) for every a in the set. Furthermore, operations such as addition and multiplication must be preserved; for example, f(a +
b) = f(a) + f(b) and f(a∙b) = f(a)∙f(b) for every a and b in the set.
The collection of all possible automorphisms for a given set A, denoted Aut(A), forms a group, which can be examined to determine various symmetries in the structure of the set A.
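As an illustration of the definition (not part of the encyclopedia entry), complex conjugation is an automorphism of the complex numbers: it is its own inverse and it preserves both addition and multiplication. A quick numerical spot check in Python:

import random

def f(z):                     # complex conjugation
    return z.conjugate()

for _ in range(1000):
    a = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    b = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    assert f(f(a)) == a                  # f is its own inverse: g(f(a)) = a
    assert f(a + b) == f(a) + f(b)       # addition is preserved
    assert f(a * b) == f(a) * f(b)       # multiplication is preserved
print("complex conjugation behaves as an automorphism on the sampled values")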
|
{"url":"http://www.britannica.com/print/topic/45044","timestamp":"2014-04-23T09:51:14Z","content_type":null,"content_length":"7060","record_id":"<urn:uuid:a50c6e2f-8d7e-4fc7-8ef0-edd63cf76781>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Robion Cromwell Kirby
Born: 25 February 1938 in Chicago, Illinois, USA
Robion Kirby is known to his friends and colleagues as Rob. His parents were [1]:-
... both educated with a bit of graduate school, but comparatively poor. [His] father was a (quiet) conscientious objector during World War II and hence lost a few jobs.
Kirby's father, returning to graduate school, underwent teacher training in 1948 and then became a teaching assistant. This meant that the family were quite poor during the whole of Kirby's childhood
and teenage years. In [1] he describes his years in elementary school:-
I grew up in small towns in Washington and Idaho. Knowing arithmetic and how to read before I entered school meant that I was usually bored and spent my time reading or daydreaming in the back of
the class. The best school was a three-room school in Farragut, Idaho, where I could do the fifth-grade work as well as my own fourth-grade work and thus skipped a grade.
When he was about ten years old he spent many hours trying to cross each of the bridges of Königsberg exactly once. Of course he failed but he never considered the possibility that it was impossible.
At high school, although he was quite good at mathematics, he was far from certain that it was the subject in which he should specialise. He was still undecided when, in 1954, he entered the
University of Chicago. Entering was relatively easy since the university had no prerequisites, with students only being required to take an examination testing their general knowledge and
intelligence. Mathematics was the subject he was best at, but still he had no special love for the subject and, seeing some friends enthused by their law courses, considered a law career. After some
thought, he decided that he should stick with mathematics [1]:-
I adopted the tendency to not listen to lectures, in fact to teachers or coaches or anyone with whom I was not engaged in a back-and-forth conversation. Games engaged me: chess, poker, and a wide
variety of small sports, but not team sports, where one spent too much time listening to a coach or standing in right field.
In fact for his first years at the University of Chicago it was chess that was his main passion [1]:
Our college team won the US intercollegiate championship twice, and I became ranked twenty-fifth in the country without too much effort. I began to lose interest and sometimes wondered if I could
do anything else as well as chess.
He took courses from Paul Halmos, Saunders Mac Lane, Irving Kaplansky and A A Albert. Mac Lane "radiated energy and love for mathematics", Kaplansky was "lucid" while A A Albert was "ambidextrous and
could fill a blackboard faster than anyone, writing with one hand when the other got tired". However it was a course he took with Halmos in 1958 on general topology, using John Kelley's book General
Topology, that was the first mathematics course he really enjoyed.
However, after four years of study at Chicago, he failed a German examination and, as a consequence, was not able to graduate with a B.S. in 1958. He therefore spent a fifth year, retaking German for
his B.S. but at the same time attending the courses put on for the Master's degree. After being awarded his B.S. he asked if he could be admitted to graduate school and take the Master's examination.
He was told that if he got grades close to a B in the autumn quarter then he could take the examination. He got a B in measure theory, given by Halmos, and a C in algebraic topology, given by Eldon
Dyer. With one other course at Pass level he was allowed to continue [2]:-
The Masters Exam could have four outcomes: you could pass with financial aid, pass without aid but with encouragement, pass with advice to pursue studies elsewhere, and fail. I got the third
pass, but really liked Chicago and turned up the next year (1960) anyway.
Of course, Kirby might have scraped in by the back door but he now needed to be able to fund his graduate studies. With a letter of support from Mac Lane, Kirby was appointed as a teaching assistant
at Roosevelt University, in Chicago. He taught mainly college algebra and trigonometry there for four years. Although teaching provided financial support, it was also very time-consuming and
comparatively little time was available for research. First, Kirby had to pass the oral Ph.D. qualifying examination which required that he offer two topics on which to be tested. He chose homotopy
theory and group theory but performed poorly and failed. He was advised that learning mathematics from books was not going to make a mathematician of him, rather he had, in addition, to start
discussing mathematics with fellow students and with staff [2]:-
... start talking mathematics, come to tea, and become part of the mathematical community.
It was good advice and he passed the qualifying examination at his second attempt. However, having performed poorly, particularly in topology, he found it hard to ask a topologist to become his
thesis advisor. Mac Lane told him to take the summer finding a topic before asking one of the faculty to be his supervisor. In the end he asked Eldon Dyer who had been on leave when he took the oral
examinations so had not seen his poor performance. Dyer was an algebraic topologist while Kirby's strengths came from his geometric intuition. He became interested in the 'annulus conjecture' which
was proposed by John Milnor in 1962. The conjecture states:
A region in n-space bounded by two locally flat n - 1 spheres is homeomorphic to an annulus.
At one stage Kirby thought he had found a proof, but it soon proved to be incorrect. Mac Lane suggested that this problem was too hard for a Ph.D. thesis so Kirby moved to another topic but continued
to think about the annulus conjecture. He submitted his thesis Smoothing Locally Flat Imbeddings to Chicago in 1965 and was awarded a Ph.D. He published papers related to his thesis Smoothing locally
flat imbeddings (1966) and Smoothing locally flat imbeddings of differentiable manifolds (1967) but he also proved a weak form of the annulus conjecture which he published in On the annulus
conjecture (1966).
After the award of his doctorate, Kirby was appointed as an assistant professor at the University of California, Los Angeles. It was here that he made his biggest mathematical breakthrough [1]:-
[One evening] in [August] 1968, while looking after my four-month-old son [Rolf], an idea occurred to me, now called the "torus trick." It only took a few days to realize that I had reduced the
annulus conjecture to a problem about PL homotopy tori, and in a different direction had proved the local contractibility of the space of homeomorphisms of n-space. I had already arranged to
spend fall 1968 at the Institute for Advanced Study, a fortuitous choice because I met Larry Siebenmann, who was the perfect collaborator. We finished off the annulus conjecture and another
couple of Milnor's problems, the existence and uniqueness of triangulations of manifolds of dimension greater than four. This used results of Terry Wall, which had been recently proved but not
yet entirely written down.
In September 1970 he was an invited speaker at the International Congress of Mathematicians in Nice, France, where he gave the lecture Some conjectures about four-manifolds. It was his outstanding
work on the annulus conjecture which led to the American Mathematical Society awarding him their Veblen Prize in Geometry in 1971:-
... for his paper "Stable homeomorphisms and the annulus conjecture".
In 1974 he was awarded a Guggenheim Foundation Fellowship and, in 1995, he received the National Academy of Sciences Award for Scientific Reviewing:-
For his list of problems in low-dimensional topology and his tireless maintenance of it; several generations have been greatly influenced by Kirby's list.
It was the first occasion on which this award had been given to a mathematician. He was elected to the National Academy of Sciences in 2001. Although he has remained at the University of California
throughout his career, quite early on he moved from Los Angeles to Berkeley [1]:-
I moved to Berkeley, getting closer to the mountains and rivers that I loved, and adding another fifty PhD students to my mathematical family. These mathematical sons and daughters and further
descendants are great friends and one of the best parts of my career.
Let us quote from [2] where Kirby describes his approach to advising his Ph.D. students:-
From the beginning, I have encouraged my students to find their own problems. It may take an extra year or two, but when they're finished they will not only have done honest research, but
they will have a backlog of problems that they've worked on, so that they can hit the ground running in their first job. My responsibility has been to help them get into promising subjects with a
future, to nurture a small community of students who learn from and support each other, to organise seminars and give courses on recent developments, and to treat them like mathematicians who just
happen to have not yet written their first paper.
Let us now look at the books that Kirby has published. In 1977 he published, in collaboration with Laurence C Siebenmann, the book Foundational essays on topological manifolds, smoothings, and
triangulations. The book contains five essays all of which had existed in some form or other since 1970, describing the major breakthroughs that the collaboration between Kirby and Siebenmann
produced beginning in 1968. Ronald J Stern writes:-
The authors present the material both expertly and well. ... This book is indispensable for anyone interested in topological manifolds. ... The book is demanding, but self-contained, well
written, and its readers will be rewarded with a full and open understanding.
In 1989 Kirby published The topology of 4-manifolds in the Springer Lecture Notes in Mathematics Series. Christopher W Stark writes:-
These very readable lecture notes give geometric proofs of several fundamental results in 4-manifold topology ... The techniques and objects emphasized here are framed links and the Kirby
calculus, spin structures, and immersion methods.
Another aspect of Kirby's contributions that we should mention is his support for mathematics by bringing important issues to attention. As Associate Editor of the Notices of the American
Mathematical Society he contributed articles such as [3]. We give a quote from that article to indicate the issue that Kirby wanted to highlight:-
We mathematicians simply give away our work (together with copyright) to commercial journals who turn around and sell it back to our institutions at a magnificent profit. Why? Apparently because
we think of them as our journals and enjoy the prestige and honor of publishing, refereeing, and editing for them. ... What can mathematicians do? At one extreme they can refuse to submit papers,
referee, and edit for the high-priced commercial journals. ... A possibility is this: one could post one's papers (including the final version) at the arXiv and other websites and refuse to give
away the copyright. If almost all of us did this, then no one would have to subscribe to the journals, and yet they could still exist in electronic form.
Let us end this biography by indicating something of Kirby's life outside mathematics [1]:-
I could not avoid being pulled into other activities. I was a single parent for a number of years with custody of my son and daughter, an avid whitewater kayaker with some classic first descents
with Dennis Johnson, a constant thinker about public policy (often with my father and brother), and for twenty-six years now a happy husband of Linda, who enjoys mathematicians more than any
other mathematical spouse that I know.
Article by: J J O'Connor and E F Robertson
List of References (4 books/articles)
|
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Kirby.html","timestamp":"2014-04-18T18:24:44Z","content_type":null,"content_length":"24560","record_id":"<urn:uuid:33977afe-889d-42e4-bbe9-4cfda971006d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Total number of integral points lying inside the triangle formed by the lines \(xy=0\) and \(x+y=a\), where \(a\) is any positive integer.
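No answer is preserved with the question, but a brute-force count (added here purely as an illustration) is consistent with the standard closed form (a − 1)(a − 2)/2 for the lattice points strictly inside the triangle with vertices (0,0), (a,0) and (0,a):

def interior_lattice_points(a):
    # points strictly inside the triangle bounded by x = 0, y = 0 and x + y = a
    return sum(1 for x in range(1, a) for y in range(1, a) if x + y < a)

for a in range(1, 8):
    print(a, interior_lattice_points(a), (a - 1) * (a - 2) // 2)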
|
{"url":"http://openstudy.com/updates/503baa0ae4b007f9003100e3","timestamp":"2014-04-19T02:04:52Z","content_type":null,"content_length":"47970","record_id":"<urn:uuid:dcb45594-97b5-4855-af6f-0fe3ce303e1b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
motion of a particle
Suppose that the equations of motion of a particle moving along an elliptic path are x= a cos omega t , y=sin omega t.
a) show that the acceleration is directed toward the origin.
b) show that the magnitude of the acceleration is proportional to the distance from the particle to the origin.
Let's call omega w until LaTeX is back up...
a) Suppose the particle is at the point (x, y). Then
x = a cos(wt)
y = sin(wt)
The acceleration is s'' where s is the displacement vector and each derivative is taken with respect to time. The displacement is a vector from the origin to the point in question, so in terms of
the unit vectors i and j:
s = x*i + y*j = a cos(wt)*i + sin(wt)*j
Then taking the derivatives:
s' = -aw sin(wt)*i + w cos(wt)*j
s'' = -aw^2 cos(wt)*i - w^2 sin(wt)*j
Let's play with this a bit and factor a -w^2:
s'' = -w^2 (a cos(wt)*i + sin(wt)*j) = -w^2*s
So s'' is a vector that is w^2 times as long as s and, most importantly, points exactly in the opposite direction to s. Since s points from the origin to the point, s'' points from the point to the origin.
b) The magnitude of the acceleration is |s''|, so
|s''| = w^2 * |s|
Thus the two are proportional.
Note carefully that the centripetal acceleration does NOT point toward the origin in general. The centripetal acceleration is the acceleration component in the direction perpendicular to the velocity.
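A quick numerical check of both results, using arbitrary illustrative values of a and w (this sketch is not part of the original reply); the second derivative is taken by finite differences so the test is not circular:

import numpy as np

a, w = 3.0, 2.0                                         # arbitrary illustrative constants
t = np.linspace(0.0, 5.0, 100001)
s = np.vstack([a * np.cos(w * t), np.sin(w * t)])       # displacement components x(t), y(t)

dt = t[1] - t[0]
s2 = np.gradient(np.gradient(s, dt, axis=1), dt, axis=1)   # numerical second derivative

interior = slice(2, -2)                                  # stencil edges are less accurate
print(np.allclose(s2[:, interior], -w**2 * s[:, interior], atol=1e-4))   # True: s'' = -w^2 s
print(np.allclose(np.linalg.norm(s2[:, interior], axis=0),
                  w**2 * np.linalg.norm(s[:, interior], axis=0), atol=1e-4))  # True: |s''| = w^2 |s|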
|
{"url":"http://mathhelpforum.com/calculus/11623-motion-particle.html","timestamp":"2014-04-18T18:42:58Z","content_type":null,"content_length":"34309","record_id":"<urn:uuid:9013af4a-9011-4aaa-aa70-b969cf1dc03f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: RIF: st: mfx with xtlogit
From Johannes Geyer <JGeyer@diw.de>
To statalist@hsphsun2.harvard.edu
Subject Re: RIF: st: mfx with xtlogit
Date Tue, 26 Feb 2008 12:20:00 +0100
> the mfx default does not correspond to the median of the dependent variable evaluated at sample means of independent variables? to evaluate at the mean I guess it is necessary to predict the mean of the dependent variable before mfx
from -mfx- helptext:
Exactly what mfx can calculate is determined by the previous estimation
command and the predict(predict_option) option. The values at which the
marginal effects or elasticities are to be evaluated is determined by the
at(atlist) option. By default, mfx calculates the marginal effects or
elasticities at the means of the independent variables using the default
prediction option associated with the previous estimation command.
Johannes Geyer
Deutsches Institut für Wirtschaftsforschung (DIW Berlin)
German Institute for Economic Research
Department of Public Economics
DIW Berlin
Mohrenstraße 58
10117 Berlin
Tel: +49-30-89789-258
"Mussida Chiara" <chiara.mussida@unicatt.it>
Sent by: owner-statalist@hsphsun2.harvard.edu
26/02/2008 12:11
Please reply to
RIF: st: mfx with xtlogit
the mfx default does not correspond to the median of the dependent
variable evaluated at sample means of independent variables? to evaluate
at the mean I guess it is necessary to predict the mean of the dependent
variable before mfx
-----Original message-----
From: owner-statalist@hsphsun2.harvard.edu on behalf of
Johannes Geyer
Sent: Tue 26/02/2008 11.44
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: mfx with xtlogit
Dear Alejandro,
it happens that coefficients are significant and marginal
effects not.
They simply depend on all other coefficients which makes
it hard to
predict their significance - e.g. it might be important
to decide whether
to include insignificant coefficients of other variables
in your
You have to decide over which marginal effect to
calculate - did you try
-margeff- by Tamas Bartus? In my view it is more flexible
than mfx. You
can e.g. calculate average marginal effects, i.e.
calculate the mean
effect in your sample (margeff can do this) or simply
evaluate your
function at sample means (mfx default). If your sample is
small you could
also think of bootstrapping standard errors of marginal
Hope that helps,
Johannes Geyer
Deutsches Institut für Wirtschaftsforschung (DIW Berlin)
German Institute for Economic Research
Department of Public Economics
DIW Berlin
Mohrenstraße 58
10117 Berlin
Tel: +49-30-89789-258
Alejandro Delafuente <alejandro.delafuente@sant.ox.ac.uk>
Sent by: owner-statalist@hsphsun2.harvard.edu
26/02/2008 11:20
Please reply to
st: mfx with xtlogit
Dear statalisters,
I estimated marginal effects after running xtlogit with
fixed effects as
xtlogit depvar independentvars, i(folio) fe
mfx, predict(pu0)
Where depvar indicates reception of transfers. All the
marginal effects of
interest are insignificant (most of which are dummy
variables), except for one
variable which has few observations across my 3-round
panel. This came as a
surprise for two reasons:
1) the xtlogit coefficients for some of those same
variables were significant;
2) I had run probit over the pooled sample (ie: dprobit) and
xtprobit (ie: xtprobit depvar indepvars, re) BEFORE
xtlogit and some of the
same coefficients were highly significant as well.
My understanding is that there isn't one single way for
capturing marginal
effects after xtlogit, but am a bit puzzled with what
I've found thus far and
wonder whether other commands for estimating mfx after
xtlogit exist? Or any
advice as to why this insignificancy might be taking place?
Alejandro de la Fuente
Department of International Development/QEH
University of Oxford, Mansfield Road, Oxford OX1 3TB
Tel: 01865 281836
|
{"url":"http://www.stata.com/statalist/archive/2008-02/msg01079.html","timestamp":"2014-04-18T19:14:28Z","content_type":null,"content_length":"11028","record_id":"<urn:uuid:86bbe530-8f15-45c8-9f97-aba6b33a0d09>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Major program: The Mathematics major requires a year of differential and integral calculus, Vectors and Matrices or Linear Algebra, Multivariable Calculus, an elementary knowledge of mathematical
algorithms and computer programming, Abstract Algebra, Fundamentals of Analysis, and a coherent selection of at least four additional courses in advanced mathematics chosen in consultation with an
advisor from the department.
Honors Program: An undergraduate may achieve the BA with honors in mathematics or honors in computer science via one of several routes:
• Honors thesis
• A strong performance in a suitable sequence of courses along with a public lecture on a topic chosen together with a faculty advisor
• A comprehensive examination
Special Programs: Mathematics-Economics Program
Sample Courses: Introduction to Mathematical Thought; Elementary Statistics; Multivariable Calculus; Vectors and Matrices; Linear Algebra; Fundamentals of Analysis; Complex Analysis; Discrete
Mathematics; Differential Equations; Probability; An Introduction to Mathematical Statistics; Set Theory; Topology; Abstract Algebra; Graph Theory
Number of Professors: 22
|
{"url":"https://www.wesleyan.edu/admission/academic_sampler/departments/math.html?KeepThis=true&TB_iframe=true&height=500&width=800","timestamp":"2014-04-21T02:47:12Z","content_type":null,"content_length":"3854","record_id":"<urn:uuid:eacf11fc-c535-4181-839d-f5fd41ca4ec0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classifying According to Existing Categories
Components of Decision Analytics
1. Classifying According to Existing Categories
This chapter provides an overview of the techniques used to classify data according to existing categories and clusters.
Regardless of what line of work we’re in, we make decisions about people, medical treatments, marketing programs, soil amendments, and so on. If we’re to make informed, sensible decisions, we need to
understand how to find clusters of people who are likely or unlikely to succeed in a job; how to classify medications according to their efficacy; how to classify mass mailings according to their
likelihood of driving revenue; how to divide fertilizers into those that will work well with our crops and those that won’t.
The key is to find ways of classifying into categories that make sense and that stand up in more than just one sample. Decision analytics comprises several types of analysis that help you make that
sort of classification. The techniques have been around for decades, but it's only with the emergence of the term analytics that the ways that those techniques can work together have gained real traction.
This initial chapter provides a brief overview of each of the techniques discussed in the book’s remaining chapters, along with an introduction to the conditions that might guide you toward selecting
a particular technique.
Classifying According to Existing Categories
Several techniques used in decision analytics are intended for sets of data where you already know the correct classification of the records. The idea of classifying records into known categories
might seem pointless at first, but bear in mind that this is usually a preliminary analysis. You typically intend to apply what you learn from such a pilot study to other records—and you don’t yet
know which categories those other records belong to.
Using a Two-Step Approach
A classification procedure that informs your decision making often involves two steps. For example, suppose you develop a new antibiotic that shows promise of preventing or curing new bacterial
infections that have so far proven drug-resistant. You test your antibiotic in a double-blind experiment that employs random selection and assignment, with a comparison arm getting a traditional
antibiotic and an experimental arm getting your new medication. You get mixed results: Your medication stops the infection in about one third of the patients in the experimental arm, but it’s
relatively ineffective in the remaining patients.
You would like to determine whether there are any patient characteristics among those who received your new medication that tend either to enable or to block its effects. You know your classification
categories—those in whom the infection was stopped, and those in whom the infection was unaffected. You can now test whether other patient characteristics, such as age, sex, infection history, blood
tests and so on, can reliably distinguish the two classification categories. Several types of analysis, each discussed in this book, are available to help you make those tests: Multivariate analysis
of variance and discriminant function analysis are two such analyses. If those latter tests are successful, you can classify future patients into a group that’s likely to be helped by your medication
and a group that’s unlikely to be helped.
Notice the sequence in the previous example. You start with a group whose category memberships are known—those who received your medication and were helped and those who weren’t. Pending a successful
test of existing patient characteristics and their response to your medication, you might now be in a position to classify new patients into a group that your medication is likely to help, and a
group that isn’t. Health care providers can now make more informed decisions about prescribing your medication.
Multiple Regression and Decision Analytics
The previous section discusses the issue of classifying and decision making purely from the standpoint of design. Let’s take another look from the point of view of analysis rather than design—and,
not incidentally, in terms of multiple regression, which employs ideas that underlie many of the more advanced techniques described in this book.
You’re probably familiar to some degree with the technique of multiple regression. That technique seeks to develop an equation that looks something like this one:
Y = a[1]X[1] + a[2]X[2] + b
In that equation, Y is a variable such as weight that you’d like to predict. X[1] is a variable such as height, and X[2] is another variable such as age. You’d like to use your knowledge of people’s
heights and ages to predict their weight.
You locate a sample of, say, 50 people, weigh them, measure each person’s height, and record their ages. Then you push that data through an application that calculates multiple regression statistics
and in that way learn the values of the remaining three items in the equation:
• a[1], a coefficient you multiply by a person’s height
• a[2], a coefficient you multiply by a person’s age
• b, a constant that you add to adjust the scale of the results
You can now find another person whose weight you don’t know. Get his height and age and plug them into your multiple regression equation. If your sample of 50 people is reasonably representative, and
if height and age are reliably related to weight, you can expect to predict this new person’s weight with fair accuracy.
You have established the numeric relationships between two predictor variables, height and age, and a predicted variable, weight. You did so using a sample in which weight—which you want to
predict—is known. You expect to use that information with people whose weight you don’t know.
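A minimal sketch of that workflow in Python with NumPy (the numbers below are invented for illustration): fit the coefficients a[1], a[2] and the constant b on a reference sample where weight is known, then apply the resulting equation to a new person whose weight is unknown.

import numpy as np

# Hypothetical reference sample: height (in), age (yr), weight (lb)
height = np.array([60, 64, 68, 70, 72, 66, 63, 69], dtype=float)
age = np.array([25, 31, 45, 52, 38, 29, 60, 41], dtype=float)
weight = np.array([120, 140, 170, 185, 190, 150, 135, 175], dtype=float)

# Design matrix with a column of ones for the constant b
X = np.column_stack([height, age, np.ones_like(height)])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
a1, a2, b = coef
print(f"weight ~= {a1:.2f}*height + {a2:.2f}*age + {b:.2f}")

# Predict the weight of a new person whose weight is unknown
new_person = np.array([67.0, 35.0, 1.0])
print("predicted weight:", new_person @ coef)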
At root, those concepts are the same as the ones that underlie several of the decision analytics techniques that this book discusses. You start out with a sample of records (for example, people,
plants, or objects) whose categories you already know (for example, their recent purchase behaviors with respect to your products, whether they produce crops in relatively arid conditions, whether
they shatter when you subject them to abnormal temperature ranges). You take the necessary measures on those records and run the numbers through one or more of the techniques described in this book.
Then you apply the resulting equations to a new sample of people (or plants or objects) whose purchasing behavior, or ability to produce crops, or resistance to unusual temperatures is unknown. If
your original sample was a representative one, and if there are useful relationships between the variables you measured and the ones you want to predict, you’re in business. You can decide whether
John Jones is likely or unlikely to buy your product, whether your new breed of corn will flourish or wither if it’s planted just east of Tucson, or whether pistons made from a new alloy will shatter
in high temperature driving.
I slipped something in on you in the last two paragraphs. The first example in this section concerns the prediction of a continuous variable, weight. Ordinary, least-squares multiple regression is
well suited to that sort of situation. But the example in the previous section uses categories, nominal classifications, as a predicted variable: cures infection versus doesn’t cure it. As the values
of a predicted variable, categories present problems that multiple regression has difficulty overcoming. When the predictor variables are categories, there’s no problem. In fact, the traditional
approach to analysis of variance (ANOVA) and the regression approach to ANOVA are both designed specifically to handle that sort of situation. The problem arises when it’s the predicted variable
rather than the predictor variables that is measured on a nominal rather than a continuous scale.
But that’s precisely the sort of situation you’re confronted with when you have to make a choice between one of two or more alternatives. Will this new product succeed or fail? Will this new medicine
prolong longevity or shorten it due to side effects? Based solely on their voting records, which political party did these two congressional representatives from the nineteenth century belong to?
So, to answer that sort of question, you need analysis techniques—decision analytics—designed specifically for situations in which the outcome or predicted variable is measured on a nominal scale, in
terms of categories. That, of course, is the focus of this book: analysis techniques that enable you to use numeric variables to classify records into groups, and thereby make decisions about the
records on the basis of the group you project them into. To anticipate some of the examples I use in subsequent chapters:
• How can you classify potential borrowers into those who are likely to repay loans in accordance with the loan schedules, and those who are unlikely to do so?
• How can you accurately classify apparently identical plants and animals into different species according to physical characteristics such as petal width or length of femur?
• Which people in this database are so likely to purchase our resort properties in the Bahamas that we should fly them there and house them for a weeklong sales pitch?
Access to a Reference Sample
In the examples I just cited, it’s best if you have a reference sample: a sample of records that are representative of the records that you want to classify and that are already correctly classified.
(Such samples are often termed supervised or training samples.) The second example outlined in this chapter, regarding weight, height, and age, discussed the development of an equation to predict
weight using a sample in which weight was known. Later on you could use the equation with people whose weight is not known.
Similarly, if your purpose is to classify loan applicants into Approved versus Declined, it’s best if you can start with a representative reference sample of applicants, perhaps culled from your
company’s historical records, along with variables such as default status, income, credit rating, and state of residence. You could develop an equation that classifies applicants into your Approved
and Declined categories.
Multiple regression is not an ideal technique for this sort of decision analysis because, as I noted earlier, the predicted variable is not a continuous one such as weight but is a dichotomy.
However, multiple regression shares many concepts and treatments with techniques that in fact are suited to classifying records into categories. So you’re ahead of the game if you’ve had occasion to
study or use multiple regression in the past. If not, don’t be concerned; this book doesn’t assume that you’re a multiple regression maven.
Multiple regression does require that you have access to a reference sample, one in which the variable that is eventually to be predicted is known. That information is used to develop the prediction
equation, which in turn is used with data sets in which the predicted variable is as yet unknown. Other analytic techniques, designed for use with categorical outcome variables, and which also must
make use of reference samples, include those I discuss in the next few sections.
Multivariate Analysis of Variance
Multivariate analysis of variance, or MANOVA, extends the purpose of ordinary ANOVA to multiple dependent variables. (Statistical jargon tends to use the term multivariate only when there is more
than one predicted or outcome or dependent variable; however, even this distinction breaks down when you consider discriminant analysis.) Using ordinary univariate ANOVA, you might investigate
whether people who pay back loans according to the agreed terms have, on average at the time the loan is made, different credit ratings than people who subsequently default. (I review the concepts
and procedures used in ANOVA in Chapter 3, “Univariate Analysis of Variance (ANOVA).”) Here, the predictor variable is whether the borrower pays back the loan, and the predicted variable is the
borrower’s credit rating.
But you might be interested in more than just those people’s credit ratings. Do the two groups differ in average age of the borrower? In the size of the loans they apply for? In the average term of
the loan? If you want to answer all those questions, not just one, you typically start out with MANOVA, the multivariate version of ANOVA. Notice that if you want MANOVA to analyze group differences
in average credit ratings, average age of borrower, average size of loan, and average term of loan, you need to work with multiple predicted variables, not solely the single predicted variable you
would analyze using univariate ANOVA.
MANOVA is not a classification procedure in the sense I used the phrase earlier. You do not employ MANOVA to help determine whether some combination of credit rating, borrower’s age, size of loan,
and term of loan accurately classifies applicants according to whether they can be expected to repay the loan or default. Instead, MANOVA helps you decide whether those who repay their loans differ
from those who don’t on any one of, or a combination of, the outcome variables—credit rating, age, and so on.
You don’t use one univariate ANOVA after another to make those inferences because the outcome variables are likely correlated with one another. Those correlations have an effect, which cannot be
quantified, on the probability estimate of each univariate ANOVA. In other words, you might think that each of your univariate F-tests is operating at an alpha level of .05. But because of the
correlations the F-tests are not independent of one another and the actual alpha level for one test might be .12, for another test .08, and so on. MANOVA helps to protect you against this kind of
problem by taking all the outcome variables into account simultaneously. See Chapter 4, “Multivariate Analysis of Variance (MANOVA),” for a discussion of the methods used in MANOVA.
It surprises some multivariate analysts to learn that you can carry out an entire MANOVA using Excel’s worksheet functions only. But it’s true that by deploying Excel’s matrix functions
properly—MDETERM(), MINVERSE(), MMULT(), TRANSPOSE() and so on—you can go from raw data to a complete MANOVA including Wilks’ Lambda and a multivariate F-test in just a few steps. Nevertheless, among
the files you can download from the publisher’s website is a MANOVA workbook with subroutines that automate the process for you. Apart from learning what’s involved, there’s little point to doing it
by hand if you can turn things over to code.
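For readers working outside Excel, the same matrix arithmetic is easy to sketch in Python with NumPy. The example below uses invented data for a one-way, two-group design and computes Wilks' lambda as the ratio of determinants of the within-group and total SSCP matrices; it illustrates the idea and is not the book's workbook.

import numpy as np

# Two invented groups (e.g., Repays vs. Defaults), two outcome variables per record
g1 = np.array([[700, 34], [680, 29], [720, 41], [690, 37], [705, 30]], dtype=float)
g2 = np.array([[610, 45], [630, 52], [600, 48], [640, 55], [620, 50]], dtype=float)

def sscp(x):
    d = x - x.mean(axis=0)          # deviations from the group (or grand) mean
    return d.T @ d                  # sums of squares and cross products

W = sscp(g1) + sscp(g2)             # pooled within-group SSCP matrix
T = sscp(np.vstack([g1, g2]))       # total SSCP matrix around the grand mean
wilks_lambda = np.linalg.det(W) / np.linalg.det(T)
print("Wilks' lambda:", wilks_lambda)   # values near 0 indicate strong group separation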
But MANOVA, despite its advantages in this sort of situation, still doesn’t classify records for you. The reason I’ve gone on about MANOVA is explained in the next section.
Discriminant Function Analysis
Discriminant function analysis is a technique developed by Sir Ronald Fisher during the 1930s. It is sometimes referred to as linear discriminant analysis or LDA, or as multiple discriminant analysis
, both in writings and in the names conferred by statistical applications such as R. Like MANOVA, discriminant analysis is considered a true multivariate technique because its approach is to
simultaneously analyze multiple continuous variables, even though they are treated as predictors rather than predicted or outcome variables.
Discriminant analysis is typically used as a followup to a MANOVA. If the MANOVA returns a multivariate F-ratio that is not significant at the alpha level selected by the researcher, there is no
point to proceeding further. If the categories do not differ significantly as to their mean values on any of the continuous variables, then the reverse is also true. The continuous variables cannot
reliably classify the records into the categories of interest.
But if the MANOVA returns a significant multivariate F-ratio, it makes sense to continue with a discriminant analysis, which, in effect, turns the MANOVA around. Instead of asking whether the
categories differ in their mean values of the continuous variables, as does MANOVA, discriminant analysis asks how the continuous variables combine to separate the records into different categories.
The viewpoint adopted by discriminant analysis brings about two important outcomes:
• It enables you to look more closely than does MANOVA at how the continuous variables work together to distinguish the category membership.
• It provides you with an equation called a discriminant function that, when used like a multiple regression equation, assigns individual records to categories such as Repays versus Defaults or
Buys versus Doesn’t Buy.
Chapter 5, “Discriminant Function Analysis: The Basics,” and Chapter 6, “Discriminant Function Analysis: Further Issues,” show you how to obtain the discriminant function, and what use you can make
of it, using Excel as the platform. An associated Excel workbook automates a discriminant analysis using the results of a preliminary MANOVA.
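As a rough parallel outside Excel, scikit-learn's LinearDiscriminantAnalysis fits a discriminant function and classifies new records in a few lines. The loan-style data below are invented purely for illustration.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical reference sample with known outcomes: 1 = repaid, 0 = defaulted
X = np.array([[720, 10], [690, 12], [710, 8], [705, 15],
              [590, 25], [610, 30], [600, 22], [620, 28]], dtype=float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)                                   # derive the discriminant function
print(lda.coef_, lda.intercept_)                # coefficients of the discriminant function

# Classify new applicants whose repayment behavior is unknown
new_applicants = np.array([[700, 11], [605, 27]], dtype=float)
print(lda.predict(new_applicants))              # predicted categories, e.g. [1 0]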
Both MANOVA and discriminant analysis are legitimately thought of as multivariate techniques, particularly when you consider that they look at the same phenomena, but from different ends of the
telescope. They are also parametric techniques: Their statistical tests make use of theoretical distributions such as the F-ratio and Wilks’ lambda. Therefore these parametric techniques are able to
return to you information about, say, the likelihood of getting an F-ratio as large as the one you observed in your sample if the population means were actually identical.
Those parametric properties invest the tests with statistical power. Compared to other, nonparametric tests, techniques such as discriminant analysis are better (sometimes much better) able to inform
you that an outcome is a reliable one. With a reliable finding, you have every right to expect that you would get the same results from a replication sample, constructed similarly.
But that added statistical power comes with a cost: You have to make some assumptions (which of course you can test). In the case of MANOVA and discriminant analysis, for example, you assume that the
distribution of the continuous variables is “multivariate normal.” That assumption implies that you should check scattercharts of each pair of continuous variables, across all your groups, looking
for nonlinear relationships between the variables. You should also arrange for histograms of each variable, again by group, to see whether the variable’s distribution appears skewed.
As another example, MANOVA assumes that the variance-covariance matrix is equivalent in the different categories. All that means is that if you assembled a matrix of your variables, showing each
variable’s variance and its covariance with the other continuous variables in your design, the values in that matrix would be equivalent for the Repayers and for the Defaulters, for the Buyers and
the Non-Buyers. Notice that I used the word “equivalent,” not “equal.” The issue is whether the variance-covariance matrices are equal in the population, not necessarily in the sample (where they’ll
never be equal). Again, you can test whether your data meets this assumption. Bartlett’s test is the usual method and the MANOVA workbook, which you can download from the publisher’s website, carries
that test out for you.
If these assumptions are met, you’ll have a more powerful test available than if they are not met. When the assumptions are not met, you can fall back on what’s typically a somewhat less powerful
technique: logistic regression.
Logistic Regression
Logistic regression differs from ordinary least squares regression in a fundamental way. Least squares regression depends on correlations, which in turn depend on the calculation of the sums of
squared deviations, and regression works to minimize those sums—hence the term “least squares.”
In contrast, logistic regression depends not on correlations but on odds ratios (or, less formally, odds). The process of logistic regression is not a straightforward computation as it is in simple
or multiple regression. Logistic regression uses maximum likelihood techniques to arrive at the coefficients for its equation: for example, the values for a[1] and a[2] that I mentioned at the
beginning of this chapter. Conceptually there’s nothing magical about maximum likelihood. It’s a matter of trial and error: the educated and automated process of trying out different values for the
coefficients until they provide an optimal result. I discuss how to convert the probabilities to odds, the odds to a special formulation called the logit, how to get your maximum likelihood estimates
using Excel’s Solver—and how to find your way back to probabilities, in Chapter 2, “Logistic Regression.”
Largely because the logistic regression process does not rely on reference distributions such as the F distribution to help evaluate the sums of squares, logistic regression cannot be considered a
parametric test. One important consequence is that logistic regression does not involve the assumptions that other techniques such as MANOVA and discriminant analysis employ. That means you can use
logistic regression with some data sets when you might not be able to use parametric tests.
For example, in logistic regression there is no assumption that the continuous variables are normally distributed. There is no assumption that the continuous variables are related in a linear rather
than curvilinear fashion. There is no assumption that their variances and covariances are equivalent across groups.
So, logistic regression positions you to classify cases using continuous variables that might well fail to behave as required by MANOVA and discriminant analysis. It extends the number of data sets
that you can classify.
But the same tradeoff is in play. Although you can get away with violations in logistic regression that might cause grief in MANOVA, nothing’s free. You pay for discarding assumptions with a loss of
statistical power. Logistic regression simply is not as sensitive to small changes in the data set as is discriminant analysis.
|
{"url":"http://www.informit.com/articles/article.aspx?p=2153182","timestamp":"2014-04-18T15:51:39Z","content_type":null,"content_length":"49461","record_id":"<urn:uuid:8d16aaf0-b90b-40dc-8395-a58480691002>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-User] Help in python
Amr Radwan amr_or@yahoo....
Mon Nov 29 04:26:28 CST 2010
Hello Sir,
It would be grateful if you can help me in a simple python code. I am
learning python nowadays. I have two systems of ordinary differential
equations, one of them the state functions x(u,t) and the other for the
adjoint functions psi(t,u,x)
state :x'=f(x,u,t) x(0) is given
adjoint : psi' = g(x,u,t) psi(T) is given where T is the final time
I want to do a loop to integrate the state system forward in time and using
the results to integrate the adjoint backward in time using scipy after
that I can add the rest of my algorithm.
I have found a code written in matlab and I did my own python code as matlab
I am not sure if it is right or not.
Could you please, have a look at my small code and let me know
1- whether my small code is right or not
2- in matlab the solver ode45 is called like [T,X] = ode45(...) so we can get
T (time) and X... how can I get this in scipy using odeint?
3- is there an equivalent of matlab's ode45 in scipy, such as odeint?
I hope that I was able to explain my problem.
many thanks in advance
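One common pattern for this kind of problem (sketched here with placeholder dynamics, not the poster's actual f and g): integrate the state forward with scipy.integrate.odeint, then integrate the adjoint backward via the substitution tau = T - t, which turns the backward problem into a forward one. The time grid you pass to odeint plays the role of T in MATLAB's [T, X] = ode45(...), and the returned array plays the role of X.

import numpy as np
from scipy.integrate import odeint

T = 1.0
u = 0.5                                    # placeholder control value
t = np.linspace(0.0, T, 101)

def f(x, t):                               # placeholder state dynamics  x' = f(x, u, t)
    return -x + u

def g(psi, tau, x_traj):                   # placeholder adjoint dynamics psi' = g(psi, x, u, t)
    x = np.interp(T - tau, t, x_traj)      # state evaluated at the original time T - tau
    return psi + x

x0 = 1.0
x = odeint(f, x0, t).ravel()               # forward sweep: state trajectory on the grid t

psiT = 0.0
# Backward sweep: with tau = T - t, d(psi)/d(tau) = -g, starting from the terminal value psi(T)
psi_rev = odeint(lambda p, tau: -g(p, tau, x), psiT, t).ravel()
psi = psi_rev[::-1]                        # reorder so psi[i] corresponds to t[i]
print(x[:3], psi[:3])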
-------------- next part --------------
A non-text attachment was scrubbed...
Name: eg2OC_Descent.m
Type: application/octet-stream
Size: 1884 bytes
Desc: not available
Url : http://mail.scipy.org/pipermail/scipy-user/attachments/20101129/f4d3c703/attachment-0002.obj
-------------- next part --------------
A non-text attachment was scrubbed...
Name: finalf.py
Type: application/octet-stream
Size: 1171 bytes
Desc: not available
Url : http://mail.scipy.org/pipermail/scipy-user/attachments/20101129/f4d3c703/attachment-0003.obj
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-November/027696.html","timestamp":"2014-04-18T11:23:56Z","content_type":null,"content_length":"4249","record_id":"<urn:uuid:7c1b7468-d8c8-46ae-adbf-9d91813a9f85>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Optimal production?
Optimal production?
I need to find the optimal level of inputs, the minimum total cost, and the average cost given the following conditions:
Min 25K + 5L
s.t. $100=L^{.75}K^{.25}$
The second equation looks like a standard Cobb-Douglas production function, but I don't understand how they interact.
The constraint condition is:
$100 = L^{0.75}K^{0.25}$
Raise this to the fourth power:
$10^8 = L^3K$, so $K=\dfrac{10^8}{L^3}$
Substituting this into the objective gives:
$Ob(L)=\dfrac{25 \times 10^8}{L^3}+5L$
Now the minimum of the objective occurs when:
$Ob'(L)=-\dfrac{75\times 10^8}{L^4}+5=0$, that is, when $L^4=15\times 10^8$
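Solving this condition and checking it numerically (a sketch in Python; SciPy's constrained minimizer lands on the same point):

import numpy as np
from scipy.optimize import minimize

# Closed form from the first-order condition L^4 = 15 * 10^8
L = (15e8) ** 0.25
K = 1e8 / L ** 3
print(f"L = {L:.2f}, K = {K:.2f}, total cost = {25 * K + 5 * L:.2f}")
print("constraint check:", L ** 0.75 * K ** 0.25)   # about 100

# Numerical confirmation: minimize 25K + 5L subject to L^0.75 * K^0.25 = 100
res = minimize(lambda v: 25 * v[0] + 5 * v[1],
               x0=[10.0, 100.0],
               constraints={"type": "eq",
                            "fun": lambda v: v[1] ** 0.75 * v[0] ** 0.25 - 100},
               bounds=[(1e-6, None), (1e-6, None)])
print("numerical optimum (K, L):", res.x)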
|
{"url":"http://mathhelpforum.com/business-math/158133-optimal-production.html","timestamp":"2014-04-21T05:41:38Z","content_type":null,"content_length":"34616","record_id":"<urn:uuid:0b0a6949-d996-4026-b149-87c6a14dc4f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Characterizing the Radon transforms of log-concave functions
f:R^d→R[≥0] is log-concave if log(f) is concave (and the domain of log(f) is convex).
Theorem: For all σ on the sphere S^d-1 and r∈R, g[σ](r) := ∫[σ.x=r]f(x)dS(x) is a log-concave function of r. (Note: g, as a function of σ and r, is the Radon transform of f.)
Question: does this characterize log-concavity? That is, if g[σ](r) is log-concave as a function of r for all σ, is f log-concave?
20-questions real-analysis
2 Answers
If I understand the question correctly, I think the answer is no.
Start with the following : if f is the indicator function of the unit ball, then the function g(r) is strictly log-concave close to 0 (this function does not depend on theta).
Now, let h be the indicator function of the ball of radius r<1. Then f - epsilon.h is never log-concave for any epsilon>0, and its Radon transform (which again is independent of theta) remains log-concave if epsilon is small enough.
(this is especially easy to see in dimension 2, in which case the Radon transforms of both f and h are second-degree polynomials on their support)
Great! Such a simple example, too. – Darsh Ranjan Oct 23 '09 at 1:28
The following answer exploits a wider context, where non-negative functions $f$, viewed as densities of absolutely continuous measures $f(x)dx$, are replaced by non-negative measures $\mu$. Because a concave function unbounded from above is $\equiv+\infty$, a log-concave measure $\mu$ is naturally an absolutely continuous measure with log-concave density.
Therefore the Lebesgue measure over the $2$-dimensional sphere $S^2$ is not log-concave over ${\mathbb R}^3$. Nevertheless, its Radon transform in a direction $\sigma$ is $$g_\sigma=2\pi\chi_{(-1,1)},$$ which is log-concave for every $\sigma$.
I like this example a lot, since a uniform measure on a sphere is far from log-concave. (And apparently I had never computed its Radon transform.) – Darsh Ranjan Sep 15 '10 at 0:18
|
{"url":"http://mathoverflow.net/questions/452/characterizing-the-radon-transforms-of-log-concave-functions?sort=oldest","timestamp":"2014-04-20T05:52:14Z","content_type":null,"content_length":"56273","record_id":"<urn:uuid:2533f41a-9d24-4291-8ec4-f9bf256f005b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra Problems
Algebra Problems on the Compass Test
Here are practice problems and solutions for the areas that are covered on the algebra section of the Compass mathematics test.
Basic Algebraic Calculations with Polynomials
Remember that a polynomial is an algebraic expression built from one or more terms involving variables such as x or y.
Problem: (x + 2y)^2 = ?
Solution: This type of algebraic expression is known as a polynomial.
To solve this sort of problem, you need the "FOIL" method.
"FOIL" stands for First - Outside - Inside - Last.
So, you have to multiply from the terms from each of the two pairs of parentheses in this order:
(x + 2y)^2 = (x + 2y)(x + 2y)
FIRST − Multiply the first term from the first pair of parentheses with the first term from the second pair of parentheses.
x × x = x^2
OUTSIDE − Multiply the terms at the outer part of each pair of parentheses.
x × 2y = 2xy
INSIDE − Multiply the terms at the inside, in other words, from the right on the first pair of parentheses and from the left on the second pair of parentheses.
2y × x = 2xy
LAST − Multiply the second terms from each pair of parentheses.
2y × 2y = 4y^2
Then we add all of the above parts together for the final answer.
x^2 + 2xy + 2xy + 4y^2 =
x^2 + 4xy + 4y^2
Factoring Polynomials and Other Algebraic Expressions
"Factoring" means that you need to break down the equations into smaller parts.
Problem: Factor the following algebraic expression.
x^2 + x − 30
Solution: For any problem like this, the answer will be in the following format: (x + ?)(x − ?)
We know that we need to have a plus sign in one pair of parentheses and a minus sign in the other pair of parentheses because 30 is negative.
We can get a negative number in problems like this only if we multiply a negative and a positive.
We also know that the two factors of 30 need to differ by 1, because the middle term is x, in other words 1x.
The only factors of 30 that meet this criterion are 5 and 6.
Therefore the answer is (x + 6)(x − 5)
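Both of these manipulations can be spot-checked with a computer algebra system; a quick SymPy sketch (illustration only, not part of the test):

from sympy import symbols, expand, factor

x, y = symbols("x y")
print(expand((x + 2*y)**2))      # x**2 + 4*x*y + 4*y**2, matching the FOIL result
print(factor(x**2 + x - 30))     # (x - 5)*(x + 6)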
Substituting Values into Algebraic Equations
You will see an equation with x and y, and you will have to replace x and y with the values stipulated in the problem.
Problem: What is the value of the expression 2x^2 − xy + y^2 when x = 4 and y = −1 ?
Solution: To solve this problem, put in the values for x and y and multiply.
Remember to be careful when multiplying negative numbers.
2x^2 − xy + y^2 =
(2 × 4^2) − (4 × −1) + ((−1)^2) =
(2 × 4 × 4) − (−4) + 1 =
(2 × 16) + 4 + 1 =
32 + 4 + 1 = 37
Algebraic Equations for Practical Problems
These types of problems are expressed in a narrative format. They present a real-life situation.
For instance, you may be asked to create and solve an equation that can be used to determine the discount given on a sale.
Problem: Sarah bought a pair of jeans on sale for $35. The original price of the jeans was $50.
What was the percentage of the discount on the sale?
Solution: To determine the value of a discount, you have to calculate how much the item was marked down: $50 − $35 = $15
Then divide the markdown by the original price: 15 ÷ 50 = 0.30
Finally, convert the decimal to a percentage: 0.30 = 30%
Coordinate Geometry
This aspect of geometry is included on the algebra part of the test because you need to use algebraic concepts in order to solve these types of geometry problems.
Coordinate geometry problems normally consist of linear equations with one or two variables.
You may need to calculate geometric concepts like slope, midpoints, distance, or x and y intercepts.
Problem: State the x and y intercepts that fall on the straight line represented by the following equation.
y = x + 3
Solution: First you should substitute 0 for x.
y = x + 3
y = 0 + 3
y = 3
Therefore, the y intercept is (0, 3).
Then substitute 0 for y.
y = x + 3
0 = x + 3
0 − 3 = x + 3 − 3
−3 = x
So, the x intercept is (−3, 0).
Rational Expressions with Exponents and Radicals
These types of algebra questions consist of equations that have exponents or square roots or both.
Problem: What is the value of b?
Solution: Your first step is to square each side of the equation.
Then get the integers on to just one side of the equation.
5b = 20
Then isolate the variable b order to solve the problem.
5b ÷ 5 = 20 ÷ 5
b = 4
Now have a look at our other practice test material:
Geometry Formulas and Exercises
|
{"url":"http://www.compass-test-practice.com/algebra.htm","timestamp":"2014-04-20T18:22:30Z","content_type":null,"content_length":"14000","record_id":"<urn:uuid:bd1e0502-54f6-4409-9003-db51d01a637f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Varying Motion
This lesson helps students clarify the relationship between the shape of a graph and the movement of an object. Students explore their own movement and plot it onto a displacement vs. time graph.
From this original graph, students create a velocity vs. time graph, and from there create an acceleration vs. time graph. The motion each type of graph represents and how to interpret it are emphasized throughout the lesson, which serves as an excellent introduction to the building blocks of calculus.
To introduce the lesson, look at the Velocity of a Car overhead as a class. The overhead shows a flat velocity vs. time graph, which should be used to scaffold understanding of more complex velocity
vs. time graphs.
Velocity of a Car Overhead
While examining the graph, ask students questions like those below. The terms displacement, velocity, and acceleration are crucial to understanding the lesson. These terms are defined later in the
lesson, so do not stress the terms here. If you know your students are unfamiliar with the terms, modify the questions below to use distance, speed, and change in speed instead.
• What is this graph showing?
[The graph is showing the velocity of a car vs. time.]
• What is happening to the velocity of the car?
[The car is moving at a constant velocity.]
• What is happening to the acceleration of the car?
[The car is not accelerating, meaning that it has zero acceleration.]
• What is happening to the displacement of the car?
[The displacement of the car is increasing over time.]
• How might you determine the car’s displacement after 1 hour? Look closely at the units on the graph to help you answer this question.
[By looking at the units, students may be able to determine that the units of the x-axis and the units of the y-axis, when multiplied, give you the units of distance, mi/hr × hr = mi. This
can then be related to the area of a rectangle, which is equal to length × width, or 55 mi in this example.]
Graphs are not always nice straight lines. What happens if the graph line is a curve? How would we determine the area under the graph? Students can explore the best way to estimate the area under a
curve using the Estimating Areas overhead. A suggestion would be to have students work in small groups to attempt to figure out the areas under the graph for the curve provided. Groups could then
share their methods with the class before moving on to the next part of this activity.
Some of these methods may include (but are not limited to):
• Subdividing the area under the graph into rectangles with the top of the rectangle passing through the graph line at the midpoint of the top (see the numerical sketch after this list).
• Subdividing the area under the graph into rectangles with the top of the rectangle touching the graph line at the top, right vertex of the rectangle.
• Subdividing the area under the graph into regular geometric shapes and calculating the areas of each shape.
• Counting the approximate number of grid squares under the curve.
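As one numerical illustration of the midpoint-rectangle method from the list above, the sketch below adds up rectangle areas under a made-up, always-positive velocity curve. The function and times are invented for the example, not taken from the lesson materials:

    import math

    def velocity(t):
        # A made-up smooth velocity curve standing in for a student's graph (ft/s).
        return 2.5 + 1.5 * math.sin(t / 6.0)

    t_start, t_end, n = 0.0, 60.0, 30        # 30 rectangles over 60 seconds
    width = (t_end - t_start) / n
    area = 0.0
    for i in range(n):
        midpoint = t_start + (i + 0.5) * width
        area += velocity(midpoint) * width   # height x width of each rectangle

    print(round(area, 1))   # estimated area = estimated displacement in feet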
Before moving into the student activity, it is wise to point out the difference between distance and change in position, or displacement. This is discussed in the beginning of the Varying Motion
activity sheet. Depending on the courses completed by students, they may not have an understanding that displacement has a measured value along with a direction. Distance, however, is simply the
measured value without a direction. It is not possible to have a negative distance, as it is simply a number and unit, but it is possible to have a negative displacement, as you could be moving in a
direction that you have defined to be negative. For example, if you define north as positive and move to a position south of your starting point, your displacement is negative.
Varying Motion Activity Sheet
Similarly, there is a difference between speed and velocity. Velocity is the rate of change of your displacement, so it has a measured value, unit, and direction. Speed is simply the rate of change
of distance, so it has a unit but no direction. Read through the example on the activity sheet as a class.
Your students are now ready to collect their own data showing varying motion. Place students in groups of 3. Assign each student to the role of timer, walker, or marker, as defined on the activity
sheet. Each group will need a long walking space like a hallway, gymnasium, or sidewalk outside.
Have students place a piece of tape on the floor, marking their starting line. As one person walks, the timer will call out 4 second intervals. The marker will place a piece of tape on the floor
where they determine the walker’s foot is at the moment the timer calls out. The walker should vary the rate they are walking at, alternating between slower and faster walking speeds. It is important
that the walker does not move so quickly that the marker can’t keep up. The walker should always walk in the same direction to avoid over-complicating later analysis. This should continue until
60 seconds have elapsed (or until they run out of walking room). If time allows, you can have one group demonstrate the procedure to ensure all students understand their individual roles.
As a group, students are to go back and measure the displacement from the starting line to each of the tape markers. Note that it is not the distance between marker tapes that is important; it is the
distance from the starting line to each tape marker that must be measured. In the image below, the distances marked, 6 seconds, 12 seconds, and 18 seconds would be measured in feet or meters. The
values are noted on the Varying Motion activity sheet.
After the class returns to the classroom, students can begin work on the activity sheet questions. In Question 2, if students cannot identify the independent and dependent variables, you can tell them
that time is the independent variable, while displacement is the dependent variable. However, first encourage them to realize this for themselves. The displacement depends on how much time has
elapsed since the walker started moving, while time is independent of the student's displacement.
Once a displacement vs. time graph has been created, students can answer specific questions from their activity packages. In general, the displacement graph will be some kind of curvy line. A
suggestion would be to use one group’s graph and create an overhead as an example for class discussion.
Begin the discussion by asking students how far the walker traveled in one minute. Draw a line from the beginning position of 0 to the final position. From there, ask them what the average velocity
was in that one minute. They may answer in the form of a certain number of ft/min (or m/min) or simply suggest ways to find it. If students connect the ends of the graph with a straight line, the
slope of that line represents velocity. Guide students to look at the slope of the drawn line as the average velocity. Point out that the displacement vs. time graph is not a straight line,
and consists instead of curves that are more and less steep. Therefore, the slope of the line between the ends is only an approximation of the average velocity.
You can use the Varying Motion sketch projected to the front of the room to help guide the discussion of the current graph as well as graphs created later in the lesson.
One observation from the displacement vs. time graph should be that the steeper the slope of the curve, the faster the person was walking during that time interval. Have students recall the slope of
a line:
m = rise / run   or   m = (y[2] – y[1]) / (x[2] – x[1])
What is the significance of the slope of the displacement vs. time graph?
[If students are able to see the significance of statements like faster and slower, they can conclude that the slope is, in fact, equal to the velocity of the walker.]
Students may comment that their graph is a curve not a straight line. How can you calculate the slope of a curve?
[To be able to determine velocities at given times, students must create tangent lines to the graph at certain points. By definition, a tangent line is a line that touches a curve at one point
and whose slope is equal to that of the curve at that point.]
Tangent lines can be taught or reviewed using the Drawing a Tangent Line overhead. It may be helpful to use the overhead in conjunction with the Varying Motion sketch. To draw a tangent line:
• choose a point on a curved graph.
• use a straight edge and touch the point on the curve.
• adjust the angle of the straight edge so that it is ‘equidistant’ from the curve on either side of the point in the immediate vicinity of the point.
Drawing a Tangent Line Overhead
The tangent line will have a large slope if students chose a point where the curve is steep, and a small slope if they chose a point where the curve is more level. A tangent line lets students
assign a number to the steepness of the curve. Slope is a measure of steepness.
Have students determine the units of slope. They should note that these units are ft/s (or m/s if data was measured in meters). These are the units of velocity. Because the velocity was calculated at
a specific second, or instant in time, it is called an instantaneous velocity.
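A numerical stand-in for drawing tangent lines is to estimate the slope between neighboring data points (a central difference). The displacement readings below are invented for illustration, not data from an actual class:

    # Estimate instantaneous velocities from displacement data by central differences.
    times = [0, 4, 8, 12, 16, 20]            # seconds
    displacement = [0, 7, 18, 24, 26, 35]    # feet (made-up readings)

    velocities = []
    for i in range(1, len(times) - 1):
        dt = times[i + 1] - times[i - 1]
        dx = displacement[i + 1] - displacement[i - 1]
        velocities.append(dx / dt)           # ft/s at times[i]

    print(velocities)   # slopes of approximate tangent lines at the interior points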
As students continue the activity sheet and complete Questions 5 to 8, they will create a velocity vs. time graph, where time is still the independent variable and velocity is dependent. Again, the
velocity graph will be some kind of curvy line. Using one group's example on the overhead, allow students to make observations. One observation by students could be that the rates of change in the
velocity graph are not as large as the rate of change in the displacement graph. This is because the walker was a human being and not a motorized vehicle, and so it is difficult to create a wide
range of velocities. Another observation is that there are times when the slope is negative, which did not occur in the displacement vs. time graph. This is because the walker was slowing down, or
decelerating in that time period.
An additional question to explore is what the slope of the line is representing on a velocity vs. time graph. The slope is representing the acceleration, or change in velocity, of the walker.
Therefore, a tangent line at a point on the curve would represent the instantaneous acceleration.
The final portions of the activity sheet have students create and analyze an acceleration vs. time graph, much like they did for velocity vs. time, and finally analyze the area under the curve. In
the Measuring Area section, as students analyze the units and see that they are multiplying the independent and dependent variables, they will realize that the area under the curve represents the
displacement of the walker. They can reflect back on how to do this from the Estimating Areas overhead from the beginning of the lesson. Once students have calculated the area under the curve for
various times, they can then compare their calculated displacements to the displacements they read directly from the displacement vs. time graph.
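To go back the other way, the area under the velocity vs. time curve can be accumulated with trapezoids to approximate the displacement at each sample time. Again, the velocity values are invented for illustration:

    times = [0, 4, 8, 12, 16, 20]                 # seconds
    velocity = [0.0, 2.2, 2.1, 1.0, 1.4, 2.5]     # ft/s (made-up values)

    displacement = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        avg_v = (velocity[i] + velocity[i - 1]) / 2
        displacement.append(displacement[-1] + avg_v * dt)

    print(displacement)   # running displacement estimate in feet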
As a final reflection, have students share their answers for Question 15 on the activity sheet. If the numbers for the area under the graph and the reading of displacement are not exactly the same,
what might account for the difference? The readings taken directly from the graph and the calculated values for change in displacement will not be exactly the same because tangent lines were drawn as
an estimate of instantaneous velocity to generate the velocity vs. time graph, and then the area under that curve was estimated to calculate a displacement. These two estimates would have introduced some error into the final values.
• Stop watches
• Measuring tapes or meter sticks
• Masking tape
• Long hallway or sidewalk
• Varying Motion Sketch
1. Ask students to fill in the following table, leaving the words in brackets blank:
│ Type of Graph │ Reading Directly │ Slope of the Graph │ Area under the Graph │
│Displacement vs Time│ [Displacement] │ [Velocity] │ [N/A] │
│ Velocity vs Time │ [Velocity] │ [Acceleration] │[Change in displacement] │
│Acceleration vs Time│ [Acceleration] │ [N/A] │ [Velocity] │
2. Have students complete a journal reflection including answers to the following:
• What math concepts were completely new to me in this lesson?
• What concepts did I find easy to understand?
• What ideas do I not understand fully?
• What can I do to help me increase my understanding of these ideas?
1. Students can calculate the area under the graph for the acceleration vs. time graph to confirm that it does, in fact, generate similar numbers to those found on the velocity vs. time graph.
2. Use the Varying Motion sketch to have every student estimate the slope of a tangent line when x = 5. Compare these values as a group. This comparison could be turned into a box-and-whisker plot,
using the Mean and Median applet. This will emphasize that error is built into estimating tangent lines and slopes from a curved graph.
3. Have students walk backwards during some parts of the activities. Explore what happens to the graphs. This would allow students to see negative velocities and displacements within certain time intervals.
Questions for Students
1. Describe a situation where a person has positive displacement, positive velocity, and negative acceleration.
[Answers may vary. Referring to their activity, if a person was moving forward but slowing down, they would be in this situation. A real world example of this would be the 1st leg of the 100m
sprint relay team after they have handed off their baton. Runners must stay in their lane and will often continue running in a positive direction behind the next runner, slowing down as they
come to a stop.]
2. Can a person have a negative displacement, positive velocity, and positive acceleration? Describe how this could have happened during this activity.
[This is possible. The explanations may vary. Referring to their activity, if a person started behind the start line and moved forwards, increasing their speed, they would be in this
situation. A real world example of this would be the 2nd leg of a 100m sprint relay team — they start running before their start line, and accelerate before getting the baton and crossing the
line that marks the end of their hand-off zone.]
Teacher Reflection
• How did the students demonstrate understanding of the materials presented?
• What were some of the ways that the students illustrated that they were actively engaged in the learning process?
• Did students explore a variety of ways to determine the area under the curve? What methods seemed to be used most efficiently?
Learning Objectives
Students will:
• Collect and graph data
• Use slopes of tangent lines to create graphs of instantaneous velocities and instantaneous accelerations
• Use the area under a graph line to calculate velocities and displacements at specific moments in time
Common Core State Standards – Practice
• CCSS.Math.Practice.MP5
Use appropriate tools strategically.
• CCSS.Math.Practice.MP7
Look for and make use of structure.
Universal upper bound on the entropy-to-energy ratio for bounded systems
We present evidence for the existence of a universal upper bound of magnitude $\frac{2\pi R}{\hslash c}$ to the entropy-to-energy ratio $\frac{S}{E}$ of an arbitrary system of effective radius $R$.
For systems with negligible self-gravity, the bound follows from application of the second law of thermodynamics to a gedanken experiment involving a black hole. Direct statistical arguments are also
discussed. A microcanonical approach of Gibbons illustrates for simple systems (gravitating and not) the reason behind the bound, and the connection of $R$ with the longest dimension of the system. A
more general approach establishes the bound for a relativistic field system contained in a cavity of arbitrary shape, or in a closed universe. Black holes also comply with the bound; in fact they
actually attain it. Thus, as long suspected, black holes have the maximum entropy for given mass and size which is allowed by quantum theory and general relativity.
DOI: http://dx.doi.org/10.1103/PhysRevD.23.287
• Received 7 July 1980
• Revised 25 August 1980
• Published in the issue dated 15 January 1981
© 1981 The American Physical Society
WDM Network Blocking Computation Toolbox - File Exchange - MATLAB Central
File Information
The aim of this toolbox is to compute blocking probabilities in WDM networks. This work was based on [1], [2], [3], [4] and the user is referred to those papers for deeper study.
Because WDM networks are circuit-switched loss networks, blocking may occur because of a lack of resources. Also, in circuit-switched networks many paths use the same links. This toolbox
answers the question of how different paths with different loads influence each other and what the blocking is on each of the defined paths. The toolbox is capable of computing blocking for
three different WDM network types: with no wavelength conversion, with full wavelength conversion, and with limited-range wavelength conversion. It is worth noting that the case of full
conversion can be useful for any circuit-switched network without additional constraints (i.e. the wavelength continuity constraint in WDM), for example a telephone network.
The toolbox also contains scripts for defining network structures (random networks, user-defined networks) and traffic matrices. Three graph algorithms for shortest path computation are also
included in this toolbox (they are used for traffic matrix creation).
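For intuition about blocking in a circuit-switched loss system, the classic Erlang B formula gives the probability that a single link with m wavelengths (or trunks) and an offered load of a erlangs has no free resource. This is only a generic single-link illustration written in Python; it is not the toolbox's own algorithm, which follows the approximation methods of the papers listed below:

    def erlang_b(a, m):
        # Erlang B blocking probability via the standard stable recursion:
        # B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
        b = 1.0
        for k in range(1, m + 1):
            b = (a * b) / (k + a * b)
        return b

    print(erlang_b(8.0, 10))   # roughly 0.12 for 8 erlangs offered to 10 wavelengths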
[1] Alexander Birman, "Computing Approximate Blocking Probabilities for a Class of All-Optical Network", IEEE Journal on Selected Areas in Communications, vol. 14, June 1996
[2] Tushar Tripathi, Kumar N. Sivarajan, "Computing Approximate Blocking Probabilities in Wavelength Routed All-Optical Networks with Limited-Range Wavelength Conversion", IEEE Journal on
Selected Areas in Communications, vol. 18, October 2000
[3] S. Chung, A. Kashper, K.W. Ross, "Computing Approximate Blocking Probabilities for large loss networks", IEEE/ACM Transactions on Networking, vol. 1, 1993
[4] F.P. Kelly, "Routing and capacity allocation in networks with trunk reservation", Mathematics of Operations Research, vol. 15, no. 4, 1990
[5] Milan Kovačević, Anthony Acampora, "Benefits of Wavelength translation in All-Optical Clear Channel Networks", IEEE Journal on Selected Areas in Communications, vol. 14, June 1996
MATLAB Release: MATLAB 6.5 (R13)
Comments and Ratings (19)
26 Mar 2014
Hi all, I have a question about this program. In 'blocking_sim_script.m' there is a channel which I can change.
Can anyone tell me what the channel selection in the program is based on?
Please help me :(
30 Apr 2012 Hi, I am looking for the work directory; please let me know how I can start running and using this toolbox to create my own design and simulate it. Thanks.
26 Apr 2012 thanks so much, it can be used for optical communication
14 Apr 2012 for example, how can I create .mat file with create_network ?
guys guys, somehow, with my little knowledge about matlab, I managed to install the toolbox.
My request now is: can someone show me how to use the toolbox ?
14 Apr 2012
I am very interested in using the function create-load-uniform. I just want some sample files and input I can create/add to use the function.
Thanks a lot !
Please can you tell me how to install the toolbox? I kept looking for the toolbox directory and work directory and I can't seem to find them.
Also, I don't understand how to change the PATHDEF file. There are 3 of them; which one should I choose?
14 Apr 2012 This is very important to a project I am working on.
Thank you !
to edit PATHDEF
14 Dec 2011 open ...\MATLAB\R2007b\toolbox\local\
find pathdef.m, edit document like a .txt file
I have some problems using this toolbox. Can somebody tell me how to do this part:
d) Edit PATHDEF and add at the end of the file:
27 Mar 2011 matlabroot,'\toolbox\wdm\networks;',...
matlabroot,'\work\networks; ',...
Thank you....
any help using this toolbox please
01 Jun 2010 thanks very much
01 Jun 2010
02 Mar 2010 Any help using this toolbox
I have some problems using this toolbox. Can somebody tell me how to do this part:
d) Edit PATHDEF and add at the end of the file:
01 Feb 2010 matlabroot,'\toolbox\wdm\graphs;',...
matlabroot,'\work\networks; ',...
Thank you....
17 Apr 2009 Great! Thanks
02 Nov 2007
05 Sep 2006 Simulation of a single-mode optical fiber when it is subjected to axial and elongation stresses
23 Mar 2005 Useful and easy to use.
02 Jun 2004 Very nice tool. Easy to understand.
26 May 2004 excellent
How arbitrary are connections?
Suppose that I have a differentiable manifold M on which I don't define a metric. I wish to define an affine connection on that manifold that will allow me to parallel transport vectors from one
tangent space of that manifold to another tangent space.
How arbitrary can I make my choice?
I do know that connections must transform correctly under coordinate transformations in order to keep my vector still a vector if I transport it, but is that it? Are there other restrictions on how I
can choose my connection?
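For reference, the transformation law a connection has to satisfy, and the parallel transport rule it defines along a curve x(t), can be written out explicitly (standard formulas, included here only for context):

    \Gamma'^{\lambda}_{\mu\nu}
      = \frac{\partial x'^{\lambda}}{\partial x^{a}}
        \frac{\partial x^{b}}{\partial x'^{\mu}}
        \frac{\partial x^{c}}{\partial x'^{\nu}} \, \Gamma^{a}_{bc}
      + \frac{\partial x'^{\lambda}}{\partial x^{a}}
        \frac{\partial^{2} x^{a}}{\partial x'^{\mu} \partial x'^{\nu}},
    \qquad
    \frac{dV^{\lambda}}{dt} + \Gamma^{\lambda}_{\mu\nu} \frac{dx^{\mu}}{dt} V^{\nu} = 0 .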
For example, we see pictures of parallel transport on the two sphere and because we can see the 2 sphere embedded in 3-D space, we can "intuit" what parallel transport would be like on that sphere.
However, am I confined to that choice? Could I define an affine connection on a two sphere (given no metric, so I don't have to worry about compatibility issues) that would parallel transport vectors
in a way completely unintuitive compared to how I "would" do it from my 3-D perspective? Could I make the vectors twist and turn in a weird fashion?
It seems to me that the affine connection is quite arbitrary; however, I have also seen equations that link it to how basis vectors "twist and turn" as we move throughout the manifold, so I am
confused about how arbitrary it really is.
Perhaps I am too reliant on bases? @_@
Lesson 2 - How to measure a tree
The two measurements most commonly used to estimate the volume of wood in a tree or log are diameter and log length. Stem quality is also evaluated using a grading system that considers how large and
straight a stem is and whether or not there are defects such as branches.
Measuring Tree Diameter
The convenient point of measurement is determined by a person's height. A standard point is 4 ½ feet above the ground (above the ground on the uphill side if the tree is on a slope). Stem diameter measured in this manner is referred to as "diameter at breast height" and is abbreviated "dbh".
Diameter is usually measured with a special diameter tape sold by forestry supply companies. However, many carpenter's tapes have a scale on the reverse side for measuring units of diameter. Or, you
may use a regular cloth tape to measure around the tree and divide the reading by 3.14 to obtain the diameter. Measuring dbh to the nearest inch is adequate.
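The cloth-tape method is just a division by pi. A quick sketch (the example circumference is made up for illustration):

    import math

    def dbh_from_circumference(circumference_inches):
        # Diameter at breast height from a tape reading around the trunk;
        # the nearest inch is adequate for this purpose.
        return round(circumference_inches / math.pi)

    print(dbh_from_circumference(47))   # about 15 inches dbh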
Measuring Log Length
A tree may be thought of as a collection of logs and pulpwood. Small trees with dbh's ranging from 5 to 11 inches are only suitable for pulpwood. Trees greater than 11 inches dbh are suitable for sawlogs, from which lumber is cut. Typically, pulpwood is
measured in 8 foot lengths and sawlogs are measured in 16 foot lengths. Because tree stems taper, the diameter decreases as you move up the stem. Once the upper stem diameter outside of the bark decreases to about 8 inches, the rest of the tree above that point is
considered pulpwood. Measuring logs in a standing tree requires some practice. There are several instruments you can either buy or make for measuring the length of logs in a standing tree. For more
details on this procedure, call the Agriculture and Forestry Experiment Station at (304) 293-4421 or email your request for "How to Estimate the Value of Timber in Your Woodlot," Circular #148, January 1989, by Harry V. Wiant.
Estimating Pulpwood and Log Volume
Pulpwood is often measured by weight at the point of delivery. A truck containing a load of pulpwood is weighed and the weight of wood is determined by subtracting the truck weight from the total
weight. Sawlogs are the most valuable parts of the tree and accurate volume estimates are critical to receiving a fair price for your timber. Sawlog volumes are estimated in units of board feet.
Simply, a board foot is the amount of wood in a piece measuring 12 inches square and 1 inch thick.
By using stem diameters and log lengths, three different "log rules," or mathematical equations, have been developed for estimating the number of board feet in a log. Request the publication mentioned
above for more details. The three log rules are called "Doyle," "Scribner," and "International." The landowner selling timber is advised to estimate the volume of their trees using Scribner or
International. Because the Doyle log rule underestimates board-foot volumes, most timber buyers prefer to use the Doyle log rule. If you, as a seller, are not aware of this difference, you may settle
for much less money than your timber is worth. Try to negotiate a volume estimate using either the International or Scribner rules; otherwise, make sure you negotiate a selling price that is about
40% higher per thousand board feet. Note: All of the tree value estimates in this web page are based on volumes determined using the International Log Rule.
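As an illustration of how a log rule turns the two measurements into board feet, the widely published Doyle formula fits in a couple of lines. This is a sketch of the Doyle rule only (the one the text warns underestimates volume); the Scribner and International rules use different, more involved formulas or tables:

    def doyle_board_feet(small_end_diameter_in, log_length_ft):
        # Doyle log rule: board feet = ((D - 4) / 4)^2 * L,
        # with D the small-end diameter in inches and L the log length in feet.
        return ((small_end_diameter_in - 4) / 4) ** 2 * log_length_ft

    print(doyle_board_feet(16, 16))   # a 16-inch, 16-foot log scales at 144 board feet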
Estimating Log Grade and Defects
Defects on a tree stem, such as rot, splits, cracks, and curvy stems, can reduce the amount of lumber that can be obtained from a given log at the sawmill. Click on the pictures at the bottom of this page to see some examples of defects in standing trees as well as a
high value sawlog. In addition to volume loss, the value of lumber that can be sawn from a given log depends on factors such as the presence or absence of knots (caused by branches). Trees can be
graded according to a system that takes into account the amount of volume and the quality of the lumber that will be lost from the log when it is sawn. Logs are given "grades," where:
• Grade 1 logs are the straightest logs with little or no defects.
• Grade 2 logs have some defects.
• Grade 3 logs have the most defects and can also have somewhat curvy stems.
Logs that are too curvy to be sawn, or have too many defects, are typically sold as pulpwood. The value of trees is related to their grade and species, with Grade 1 trees being worth more than Grade
3. The highest value category of sawlog is called "veneer". This grade of log is typically peeled at the mill and used in the manufacture of fine furniture. For most species (except for black
cherry), veneer logs tend to be larger in diameter than Grade 1 sawlogs and have no defects.
If a landowner selling timber is not familiar with how tree grade is related to tree value, they may not receive a fair price for their timber. A landowner can gain a fair appreciation of the quality
of their trees using observation and common sense. If trees are large with few knots showing on the lower (larger) logs, and stems are straight with no damage or rot then the logs will tend to be of
high value. If there is evidence of a lot of stem rot in lower logs and many branches and crooked stems, then the trees probably have lower value. Tree species is also a very important factor
determining tree value.
Mathematical psychology
Mathematical Psychology is an approach to psychological research that is based on mathematical modeling of perceptual, cognitive and motor processes, and on the establishment of law-like rules that
relate quantifiable stimulus characteristics with quantifiable behavior. In practice "quantifiable behavior" is often constituted by "task performance".
As quantification of behavior is fundamental in this endeavor, the theory of measurement is a central topic in mathematical psychology. Mathematical psychology is therefore closely related to
psychometrics. However, where psychometrics is concerned with individual differences (or population structure) in mostly static variables, mathematical psychology focuses on process models of
perceptual, cognitive and motor processes as inferred from the 'average individual'. Furthermore, where psychometrics investigates the stochastic dependence structure between variables as observed in
the population, mathematical psychology almost exclusively focuses on the modeling of data obtained from experimental paradigms and is therefore even more closely related to experimental psychology/
cognitive psychology/psychonomics. Like computational neuroscience and econometrics, mathematical psychology theory often uses statistical optimality as a guiding principle, apparently assuming that
the human brain has evolved to solve problems in an optimized way. Central themes from cognitive psychology (limited vs. unlimited processing capacity, serial vs. parallel processing, etc.) and their
implications receive rigorous analysis in mathematical psychology.
Mathematical psychologists are active in many fields of psychology, especially in psychophysics, sensation and perception, problem solving, decision-making, learning, memory, and language,
collectively known as Cognitive Psychology, but also, e.g., in clinical psychology, social psychology, and psychology of music.
[Portraits: Ernst Heinrich Weber and Gustav Fechner]
Mathematical modeling has a long tradition in psychology. Ernst Heinrich Weber (1795–1878) and Gustav Fechner (1801–1887) were among the first to successfully apply mathematical techniques of functional
equations from physics to psychological processes, thereby establishing the field of experimental psychology in general, and within it psychophysics in particular. During that time, in astronomy
researchers were mapping distances between stars by denoting the exact time of a star's passing of a cross-hair on a telescope. For lack of the automatic registration instruments of the modern era,
these time measurements relied entirely on human response speed. It had been noted that there were small systematic differences in the times measured by different astronomers, and these were first
systematically studied by the German astronomer Friedrich Bessel (1782-1846). Bessel constructed personal equations from measurements of basic response speed that would cancel out individual
differences from the astronomical calculations. Independently, physicist Hermann von Helmholtz measured reaction times to determine nerve conduction speed. These two lines of work came together in
the research of Dutch physiologist F. C. Donders and his student J. J. de Jaager, who recognized the potential of reaction times for more or less objectively quantifying the amount of time elementary
mental operations required. Donders envisioned the employment of his mental chronometry to scientifically infer the elements of complex cognitive activity by measurement of simple reaction time.^[1]
The first psychological laboratory was established in Germany by Wundt, who made ample use of Donders' ideas. However, findings that came from the laboratory were hard to replicate, and this was soon
attributed to the method of introspection that Wundt introduced. Part of the problem was due to the individual differences in response speed found by astronomers. Although Wundt did not seem to take
interest in these individual variations and kept his focus on the study of the general human mind, Wundt's American student James McKeen Cattell was fascinated by these differences and started to work
on them during his stay in England. The failure of Wundt's method of introspection led to the rise of different schools of thought. Wundt's laboratory was directed towards conscious human experience,
in line with the work of Fechner and Weber on the intensity of stimuli. In the United Kingdom, under the influence of the anthropometric developments led by Francis Galton, interest focused on
individual differences between humans on psychological variables, in line with the work of Bessel. Cattell soon adopted the methods of Galton and helped lay the foundation of psychometrics. In the
United States, behaviorism arose in opposition to introspectionism and the associated reaction time research, and turned the focus of psychological research entirely to learning theory.^[1] Behaviorism
dominated American psychology until the end of the Second World War. In Europe, introspection survived in Gestalt psychology. Behaviorism largely refrained from inference on mental processes, and
formal theories were mostly absent (except for vision and audition). During the war, developments in engineering, mathematical logic and computability theory, computer science and mathematics, and the
military need to understand human performance and limitations, brought together experimental psychologists, mathematicians, engineers, physicists, and economists. Out of this mix of different
disciplines mathematical psychology arose. The developments in signal processing, information theory, linear systems and filter theory, game theory, stochastic processes, and mathematical
logic in particular gained a large influence on psychological thinking.^[1]^[2]
Two seminal papers on learning theory in Psychological Review helped establish the field in a world that was still dominated by behaviorists: a paper by Bush and Mosteller^[3] instigated the
linear operator approach to learning, and a paper by Estes^[4] started the stimulus sampling tradition in psychological theorizing. These two papers presented the first detailed formal accounts
of data from learning experiments.
The 1950s saw a surge in mathematical theories of psychological processes, including Luce's theory of choice, Tanner and Swets' introduction of Signal detection theory for human stimulus detection,
and Miller's approach to information processing.^[2] By the end of the 1950s the number of mathematical psychologists had increased from a handful by more than tenfold (not counting
psychometricians). Most of these were concentrated at Indiana University, Michigan, Pennsylvania, and Stanford.^[5]^[2] Some of them were regularly invited by the U.S. Social Science Research
Council to teach in summer workshops in mathematics for social scientists at Stanford University, promoting collaboration.
To better define the field of mathematical psychology, the mathematical models of the 1950s were brought together in a sequence of volumes edited by Luce, Bush, and Galanter: two readings^[6] and three
handbooks^[7]. This series of volumes turned out to be helpful in the development of the field. In the summer of 1963 the need was felt for a journal for theoretical and mathematical studies in all
areas in psychology, excluding work that was mainly factor analytical. An initiative led by R. C. Atkinson, R. R. Bush, W. K. Estes, R. D. Luce, and P. Suppes resulted in the appearance of the first
issue of the Journal of Mathematical Psychology in January, 1964.^[5]
Under the influence of developments in computer science, logic, and language theory, in the 1960s modeling came to be expressed more in terms of computational mechanisms and devices. Examples of the latter
include so-called cognitive architectures (e.g., production rule systems, ACT-R) as well as connectionist systems or neural networks.
Important mathematical expressions for relations between physical characteristics of stimuli and subjective perception are Weber's law (which is now sometimes called Weber-Fechner Law), Ekman's Law,
Stevens' Power Law, Thurstone's Law of Comparative Judgement, the Theory of Signal Detection (borrowed from radar engineering), the Matching Law, and Rescorla-Wagner rule for classical conditioning.
While the first three laws are all deterministic in nature, later established relations are more fundamentally stochastic. This has been a general theme in the evolution of mathematical modeling of
psychological processes: from deterministic relations as found in classical physics to inherently stochastic models.
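For reference, the deterministic psychophysical laws mentioned above can be stated compactly (standard textbook forms, added here for context; I denotes stimulus intensity, S the sensation magnitude, and k, a, I_0 are fitted constants):

    \text{Weber's law: } \frac{\Delta I}{I} = k, \qquad
    \text{Fechner's law: } S = k \,\ln\!\left(\frac{I}{I_{0}}\right), \qquad
    \text{Stevens' power law: } S = k\, I^{a}.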
Influential Mathematical Psychologists
Important theories and models^[8]
Sensation, Perception, and Psychophysics
Simple detection
Stimulus identification
• Accumulator models
• Random Walk models
• Diffusion models (a small simulation sketch follows this section)
• Renewal models
• Race models
• Neural network/connectionist models
Simple decision
• Recruitment model
• Cascade model
• Level and Change Race model
Memory scanning, visual search
• Serial exhaustive search (SES) model
• Push-Down Stack
Error response times
Sequential Effects
• Linear operator model
• Stochastic Learning theory
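As an illustration of what a random walk / diffusion model of simple decision looks like in practice, here is a minimal simulation sketch. The parameter values and the function itself are generic textbook choices, not taken from any particular model cited in this article:

    import random

    def diffusion_trial(drift=0.3, threshold=1.0, dt=0.01, noise=1.0):
        # Evidence accumulates with a constant drift plus Gaussian noise
        # until it crosses the upper or lower decision boundary.
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
            t += dt
        return ("upper" if evidence > 0 else "lower"), t  # choice and response time

    print(diffusion_trial())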
Journals and Organizations
Central journals are the Journal of Mathematical Psychology and the British Journal of Mathematical and Statistical Psychology. There are two annual conferences in the field: the annual meeting of
the Society for Mathematical Psychology in the U.S., and the annual European Mathematical Psychology Group (EMPG) meeting.
1. ↑ ^1.0 ^1.1 ^1.2 T. H. Leahey (1987) A History of Psychology, Englewood Cliffs, NJ: Prentice Hall.
2. ↑ ^2.0 ^2.1 ^2.2 W. H. Batchelder (2002). Mathematical Psychology. In A.E. Kazdin (Ed.) Encyclopedia of Psychology, Washington, DC: APA and New York: Oxford University Press.
3. ↑ R. R. Bush & F. Mosteller (1951). A mathematical model for simple learning. Psychological Review, 58, p. 313-323.
4. ↑ W. K. Estes (1950). Towards a statistical theory of learning. Psychological Review, 57, p. 94-107.
5. ↑ ^5.0 ^5.1 W. K. Estes (2002). History of the Society, [1]
6. ↑ R. D. Luce, R. R. Bush, & Galanter, E. (Eds.) (1963). Readings in mathematical psychology. Volumes I & II. New York: Wiley.
7. ↑ R. D. Luce, R. R. Bush, & Galanter, E. (Eds.) (1963). Handbook of mathematical psychology. Volumes I-III, New York: Wiley. Volume II from Internet Archive
8. ↑ Luce, R. D. (1986) Response Times (Their Role in Inferring Elementary Mental Organization). New York: Oxford University Press.
• Lefebvre, V. A. (1990). The fundamental structures of human reflexion. New York, NY: Peter Lang Publishing.
• Lefebvre, V. A., & Batchelder, W. H. (1981). The nature of Soviet mathematical psychology: Journal of Mathematical Psychology Vol 23(2) Apr 1981, 153-183.
• Lerman, S. (2006). Socio-Cultural Research in PME. Rotterdam, Netherlands: Sense Publishers.
• Levin, D. N. (2000). A differential geometric description of the relationships among perceptions: Journal of Mathematical Psychology Vol 44(2) Jun 2000, 241-284.
• Lewandowsky, S., Kalish, M., & Dunn, J. C. (1998). Guest editorial: Australian Journal of Psychology Vol 50(3) Dec 1998, iii.
• Lewin, G. W. (1997). Constructs in Field Theory (1944). Washington, DC: American Psychological Association.
• Lovie, S. (2003). Review of Theories of meaningfulness: British Journal of Mathematical and Statistical Psychology Vol 56(1) May 2003, 183-184.
• Lovie, S., & Lovie, P. (2004). A Privileged and Exemplar Resource: Traumatic Avoidance Learning and the Early Triumph of Mathematical Psychology: History of Psychology Vol 7(3) Aug 2004, 248-264.
• Luce, D. (2005). Editorial: Journal of Mathematical Psychology Vol 49(6) Dec 2005, 430-431.
• Luce, R. D. (1989). Mathematical psychology and the computer revolution. Oxford, England: North-Holland.
• Luce, R. D. (1992). A path taken: Aspects of modern measurement theory. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.
• Luce, R. D. (1995). Four tensions concerning mathematical modeling in psychology: Annual Review of Psychology Vol 46 1995, 1-26.
• Luce, R. D. (1996). The ongoing dialog between empirical science and measurement theory: Journal of Mathematical Psychology Vol 40(1) Mar 1996, 78-98.
• Luce, R. D. (1996). When four distinct ways to measure utility are the same: Journal of Mathematical Psychology Vol 40(4) Dec 1996, 297-317.
• Luce, R. D. (1997). Several unresolved conceptual problems of mathematical psychology: Journal of Mathematical Psychology Vol 41(1) Mar 1997, 79-87.
• Kuce, R. D. (1999). Where is mathematical modeling in psychology headed? Theory & Psychology, 9, 723-737. Full text
• Luce, R. D., & Suppes, P. (2002). Representational measurement theory. Hoboken, NJ: John Wiley & Sons Inc.
• Marek, T., & Noworol, C. (1985). The sequential testing of the difference between means: Z-sub(mn ) test: Przeglad Psychologiczny Vol 28(2) 1985, 547-553.
• Margineanu, N. (1997). Logical and mathematical psychology: Dialectical interpretation of their relations. Cluj-Napoca, Romania: Editura Presa Universitara Clujeana.
• Maris, G., & Maris, E. (2002). A MCMC-method for models with continuous latent responses: Psychometrika Vol 67(3) Sep 2002, 335-350.
• Marks, L. E. (1978). A critique of Dawson and Miller's "Inverse attribute functions and the proposed modifications of the power law." Perception & Psychophysics Vol 24(6) Dec 1978, 569-570.
• Marr, M. J. (1989). Some remarks on the quantitative analysis of behavior: Behavior Analyst Vol 12(2) Fal 1989, 143-151.
• Mayekawa, S.-i. (1996). Maximum likelihood estimation of the cell probabilities under linear constraints: Behaviormetrika Vol 23(1) Jan 1996, 111-128.
• Mayer, R. E. (1989). Explorations in Mathematical Cognition: PsycCRITIQUES Vol 34 (3), Mar, 1989.
• Mazur, J. E. (1975). The matching law and quantifications related to Premack's principle: Journal of Experimental Psychology: Animal Behavior Processes Vol 1(4) Oct 1975, 374-386.
• McClain, E. G. (1990). Tonal automata. New York, NY: Peter Lang Publishing.
• McDowell, J. J. (1989). Two modern developments in matching theory: Behavior Analyst Vol 12(2) Fal 1989, 153-166.
• McGill, W. J. (1988). George A. Miller and the origins of mathematical psychology. New York, NY: Cambridge University Press.
• McGill, W. J., & Gibbon, J. (1965). The general-gamma distribution and reaction times: Journal of Mathematical Psychology 2(1) 1965, 1-18.
• McNemar, Q. (1968). Review of Basic Mathematical and Statistical Tables for Psychology and Education: PsycCRITIQUES Vol 13 (2), Feb, 1968.
• Meda, G., Martinez, G., & Morgante, E. (1997). Numerical taxonomy applied to Leonhard's classification of endogenous psychoses: Psychopathology Vol 30(5) Sep-Oct 1997, 291-297.
• Meenes, M. (1921). Manhood of humanity: The science and art of human engineering: Journal of Applied Psychology Vol 5(2) Jun 1921, 189-190.
• Menai, M. E. B., & Batouche, M. (2006). An effective heuristic algorithm for the maximum satisfiability problem: Applied Intelligence Vol 24(3) Jun 2006, 227-239.
• Molenaar, P. C. M. (1994). On the viability of nonlinear dynamics in psychology. Lisse, Netherlands: Swets & Zeitlinger Publishers.
• Myung, I. J., & Shepard, R. N. (1996). Maximum entropy inference and stimulus generalization: Journal of Mathematical Psychology Vol 40(4) Dec 1996, 342-347.
• Nandakumar, R., Yu, F., Li, H.-H., & Stout, W. (1998). Assessing unidimensionality of polytomous data: Applied Psychological Measurement Vol 22(2) Jun 1998, 99-115.
• Narens, L. (1980). A note on Weber's Law for Conjoint Structures: Journal of Mathematical Psychology Vol 21(1) Feb 1980, 88-91.
• Narens, L. (2003). A theory of belief: Journal of Mathematical Psychology Vol 47(1) Feb 2003, 1-31.
• Narens, L. (2006). Symmetry, direct measurement, and Torgerson's conjecture: Journal of Mathematical Psychology Vol 50(3) Jun 2006, 290-301.
• Niederee, R. (1987). On the reference to real numbers in fundamental measurement: A model-theoretic approach. New York, NY: Elsevier Science.
• No authorship, i. (1986). Review of Australian Psychology: Review of Research: PsycCRITIQUES Vol 31 (9), Sep, 1986.
• No authorship, i. (1989). Review of Progress in Mathematical Psychology-1: PsycCRITIQUES Vol 34 (1), Jan, 1989.
• No authorship, i. (2005). 37th Annual Meeting of the Society for Mathematical Psychology: Journal of Mathematical Psychology Vol 49(1) Feb 2005, 87-113.
• Norman, M. F. (1981). Lectures on linear systems theory: Journal of Mathematical Psychology Vol 23(1) Feb 1981, 1-89.
• Norman, M. F., & Gallistel, C. R. (1978). What can one learn from a strength-duration experiment? : Journal of Mathematical Psychology Vol 18(1) Aug 1978, 1-24.
• Orth, B. (1987). Applications of the theory of meaningfulness to attitude models. New York, NY: Elsevier Science.
• Ovchinnikov, S. (2005). Hyperplane arrangements in preference modeling: Journal of Mathematical Psychology Vol 49(6) Dec 2005, 481-488.
• Plaud, J. J., & O'Donohue, W. (1991). Mathematical statements in the experimental analysis of behavior: Psychological Record Vol 41(3) Sum 1991, 385-400.
• Ponson, B. (1984). Vote by consensus: Mathematiques et Sciences Humaines Vol 22(87) Fal 1984, 5-32.
• Poon, W.-Y., & Chan, W. (2002). Influence analysis of ranking data: Psychometrika Vol 67(3) Sep 2002, 421-436.
• Presmeg, N. (2006). Research on Visualization in Learning and Teaching Mathematics: Emergence from Psychology. Rotterdam, Netherlands: Sense Publishers.
• Rapoport, A. (1990). Reflexion, modeling and ethics. New York, NY: Peter Lang Publishing.
• Ratcliff, R. (1998). The role of mathematical psychology in experimental psychology: Australian Journal of Psychology Vol 50(3) Dec 1998, 129-130.
• Roberts, F. S. (1984). Review of Mathematical Models in the Social and Behavioral Sciences: PsycCRITIQUES Vol 29 (12), Dec, 1984.
• Rocci, R., & ten Berge, J. M. F. (2002). Transforming three-way arrays to maximal simplicity: Psychometrika Vol 67(3) Sep 2002, 351-365.
• Roskam, E. E. (1973). Mathematical psychology as theory and method: Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden Vol 27(11) Apr 1973, 701-730.
• Roskam, E. E. (1989). Mathematical psychology in progress. New York, NY: Springer-Verlag Publishing.
• Roskam, E. E., & Suck, R. (1987). Progress in mathematical psychology, 1. New York, NY: Elsevier Science.
• Rouder, J. N. (1996). Premature sampling in random walks: Journal of Mathematical Psychology Vol 40(4) Dec 1996, 287-296.
• Samejima, F. (1996). Evaluation of mathematical models for ordered polychotomous responses: Behaviormetrika Vol 23(1) Jan 1996, 17-35.
• Schiffman, H., & Schiffman, E. (2000). Gulliksen, Harold: Kazdin, Alan E (Ed).
• Schmidt, U., & Zimper, A. (2007). Security and potential level preferences with thresholds: Journal of Mathematical Psychology Vol 51(5) Oct 2007, 279-289.
• Schonemann, P. H. (1980). On possible psychophysical maps: II. Projective transformations: Bulletin of the Psychonomic Society Vol 15(2) Feb 1980, 65-68.
• Schonemann, P. H., Cafferty, T., & Rotton, J. (1973). A note on additive functional measurement: Psychological Review Vol 80(1) Jan 1973, 85-87.
• Schonemann, P. H., Cafferty, T., & Rotton, J. (1973). "A note on additive functional measurement": Erratum: Psychological Review Vol 80(2) Mar 1973, 138.
• Schwager, K. W. (1991). The representational theory of measurement: An assessment: Psychological Bulletin Vol 110(3) Nov 1991, 618-626.
• Schwarz, W. (1990). Stochastic accumulation of information in discrete time: Comparing exact results and Wald approximations: Journal of Mathematical Psychology Vol 34(2) Jun 1990, 229-236.
• Shafir, E. (2004). Preference, belief, and similarity: Selected writings by Amos Tversky. Cambridge, MA: MIT Press.
• Sheldon, W. H. (1898). Review of The Mathematical Psychology of Gratry and Boole: Psychological Review Vol 5(4) Jul 1898, 426-428.
• Shepard, R. N. (1988). George Miller's data and the development of methods for representing cognitive structures. New York, NY: Cambridge University Press.
• Sheu, C.-F. (2006). Triadic judgment models and Weber's law: Journal of Mathematical Psychology Vol 50(3) Jun 2006, 302-308.
• Shigemasu, K., & Nakamura, T. (1996). A Bayesian marginal inference in estimating item parameters using the Gibbs Sampler: Behaviormetrika Vol 23(1) Jan 1996, 97-110.
• Smolenaars, A. J. (1987). Likelihood considerations within a deterministic setting. New York, NY: Elsevier Science.
• Sommer, R., & Suppes, P. (1997). Dispensing with the continuum: Journal of Mathematical Psychology Vol 41(1) Mar 1997, 3-10.
• Spellman, B. A. (1996). Conditionalizing causality. San Diego, CA: Academic Press.
• Stefanutti, L., & Koppen, M. (2003). A procedure for the incremental construction of a knowledge space: Journal of Mathematical Psychology Vol 47(3) Jun 2003, 265-277.
• Sternberg, R. J. (1993). How Do You Size Up the Contributions of a Giant? : PsycCRITIQUES Vol 38 (9), Sep, 1993.
• Stout, W., Nandakumar, R., & Habing, B. (1996). Analysis of latent dimensionality of dichotomously and polytomously scored test data: Behaviormetrika Vol 23(1) Jan 1996, 37-65.
• Sturm, T. (2006). Is there a problem with mathematical psychology in the eighteenth century? A fresh look at Kant's old argument: Journal of the History of the Behavioral Sciences Vol 42(4) Fal
2006, 353-377.
• Suck, R. (1987). Approximation theorems for conjoint measurement models. New York, NY: Elsevier Science.
• Suck, R. (1997). Probabilistic biclassification and random variable representations: Journal of Mathematical Psychology Vol 41(1) Mar 1997, 57-64.
• Suck, R. (2004). Set representations of orders and a structural equivalent of saturation: Journal of Mathematical Psychology Vol 48(3) Jun 2004, 159-166.
• Suppes, P. (2006). All you ever wanted to know about meaningfulness: Journal of Mathematical Psychology Vol 50(4) Aug 2006, 421-425.
• Tanaka, Y., & England, G. W. (1972). Psychology in Japan: Annual Review of Psychology 1972, 695-732.
• Taylor, D. A. (1975). Mathematical Psychology: What Have We Accomplished? : PsycCRITIQUES Vol 20 (6), Jun, 1975.
• Thomas, R. D. (1996). Separability and independence of dimensions within the same-different judgment task: Journal of Mathematical Psychology Vol 40(4) Dec 1996, 318-341.
• Thumin, F. J., & Barclay, A. G. (2002). Philip Hunter DuBois (1903-1998): Obituary: American Psychologist Vol 57(5) May 2002, 368.
• Tinker, M. A. (1944). Review of Lewin's Topological and Vector Psychology: Journal of Educational Psychology Vol 35(2) Feb 1944, 125-126.
• Townsend, J. T. (1990). Lefebvre's human reflexion and its scientific acceptance in psychology. New York, NY: Peter Lang Publishing.
• Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories: Journal of Mathematical Psychology Vol 39
(4) Dec 1995, 321-359.
• Van Geert, P. (1993). Metabletics of the fig tree: Psychology and nonlinear dynamics: Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden Vol 48(6) Dec 1993, 267-276.
• Varela, J. A. (1990). A general law for psychology: Revista Interamericana de Psicologia Vol 24(2) 1990, 121-137.
• Vigo, R. (2006). A note on the complexity of Boolean concepts: Journal of Mathematical Psychology Vol 50(5) Oct 2006, 501-510.
• Vigo, R. (2009a). Categorical invariance and structural complexity in human concept learning: Journal of Mathematical Psychology Vol 53(4) Aug 2009, 203-221.
• Vigo, R. (2009b). Modal Similarity. Journal of Experimental and Theoretical Artificial Intelligence. DOI: 10.1080/09528130802113422
• Wackermann, J. (2006). On additivity of duration reproduction functions: Journal of Mathematical Psychology Vol 50(5) Oct 2006, 495-500.
• Wakker, P. (1991). Additive representations on rank-ordered sets: I. The algebraic approach: Journal of Mathematical Psychology Vol 35(4) Dec 1991, 501-531.
• Wang, T. (1998). Weights that maximize reliability under a congeneric model: Applied Psychological Measurement Vol 22(2) Jun 1998, 179-187.
• Wheeler, H. (1990). A reflexional model of the mind's hermeneutic processes. New York, NY: Peter Lang Publishing.
• Wheeler, H. (1990). The structure of human reflexion: The reflexional psychology of Vladimir Lefebvre. New York, NY: Peter Lang Publishing.
• Wille, U. (1997). The role of synthetic geometry in representational measurement theory: Journal of Mathematical Psychology Vol 41(1) Mar 1997, 71-78.
• Woltz, D. J., & Shute, V. J. (1995). Time course of forgetting exhibited in repetition priming of semantic comparisons: American Journal of Psychology 108(4) Win 1995, 499-525.
• Yellott, J. I., Jr. (1971). What's (nu) in Math Psych? : PsycCRITIQUES Vol 16 (10), Oct, 1971.
• Yoshino, R. (1989). On some history and the future development of axiomatic measurement theory: Japanese Psychological Review Vol 32(2) 1989, 119-135.
• Zajonc, R. B. (1990). Interpersonal affiliation and the golden section. New York, NY: Peter Lang Publishing.
• Zaus, M. (1987). Hybrid adaptive methods. New York, NY: Elsevier Science.
• Zbrodoff, N. J., & Logan, G. D. (2005). What everyone finds: The problem-size effect. New York, NY: Psychology Press.
• Zhang, J. (2004). Binary choice, subset choice, random utility, and ranking: A unified perspective using the permutahedron: Journal of Mathematical Psychology Vol 48(2) Apr 2004, 107-134.
• Zhang, J. (2004). Dual scaling of comparison and reference stimuli in multi-dimensional psychological space: Journal of Mathematical Psychology Vol 48(6) Dec 2004, 409-412.
• Zhuravlev, G. E. (1976). Towards the definition of mathematical psychology: Voprosy Psychologii No 2 Mar-Apr 1976, 22-31.
External links
|
{"url":"http://psychology.wikia.com/wiki/Mathematical_psychology?direction=prev&oldid=139188","timestamp":"2014-04-23T14:45:55Z","content_type":null,"content_length":"125736","record_id":"<urn:uuid:2d0f93b5-0f6e-4151-82b3-e0d8248d3c18>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Time Difference in SQL Server 2008
How can I obtain a value formatted as hh:mm for the time difference between two time values?
PS: the two time fields are of datatype time and the database is SQL Server 2008.
asked Oct 15 '09 at 08:12 AM in Default
select convert(time, dateadd(minute, datediff(minute, tm1, tm2), 0))
answered Oct 15 '09 at 09:08 AM
Squirrel
I chose to keep the calculation to seconds to preserve accuracy but then CHOP off the unneeded seconds. Also, if you can bend your requirement to include hh:mm:ss then you can eliminate the LEFT
function that I used to drop the seconds.
--here is the answer
SELECT
    LEFT(
        CONVERT(varchar, DATEADD(ss, DATEDIFF(ss, [StartTime], [EndTime]), 0), 108)
    , 5) --use the LEFT,5 to drop the seconds portion, otherwise remove if hh:mm:ss is okay
    AS [TimeDiff (hh:mm)]
FROM [dbo].[Events]
--here is an end-to-end working example
--this example only works with SQL Server 2008 and above
CREATE TABLE [dbo].[Events] (
[EventName] varchar(30) NULL,
[StartTime] time NULL,
[EndTime] time NULL )
--add some values
INSERT INTO [dbo].[Events] VALUES
('First Event', '08:00', '08:30'),
('Second Event', '09:00', '10:00'),
('Third Event', '15:00', '16:30')
--view the records with the time difference
SELECT
    LEFT(
        CONVERT(varchar, DATEADD(ss, DATEDIFF(ss, [StartTime], [EndTime]), 0), 108)
    , 5) --use the LEFT,5 to drop the seconds portion, otherwise remove if hh:mm:ss is okay
    AS [TimeDiff (hh:mm)]
FROM [dbo].[Events]
--clean up from our test
DROP TABLE [dbo].[Events]
answered Oct 15 '09 at 01:22 PM
I log elapsed processing time for an application's calculation, which gives me millisecond precision across a full 24-hour day: 86,399,999 ms on a 64-bit machine.
Working with TIME in SQL Server 2008 is markedly easier than in SQL Server 2000.
-- Declare variables
DECLARE @vTime1 TIME(3), @vTime2 TIME(3)
SELECT @vTime1 = '00:00:00.000', @vTime2 = '23:59:59.997';
SELECT CONVERT(TIME, DATEADD(ms, DATEDIFF(ms, @vTime1, @vTime2), 0));
Because this only carries millisecond precision, the rounding of the last few milliseconds means the largest upper boundary value that survives here is 997 ms.
To trudge through TIME calculations manually, consider the TSQL script below, which gives you a nice result in either CHAR(12) or TIME to the last ms, i.e. 999. Don't ask me why I had the energy to
write this piece of code...
-- Declare variables
DECLARE @vTime1 TIME(3), @vTime2 TIME(3);
DECLARE @vTotalTimeMs INT
, @vHourMs INT
, @vMinuterMs INT
, @vSecondsMs INT
, @vMsMs INT
, @vHours TINYINT
, @vRemainder INT
, @vSecondsDivisor TINYINT
, @vTimeDiff VARCHAR(16)
, @vMsValue CHAR(3);
-- Initialise variables
SELECT @vTime1 = '00:00:00.000', @vTime2 = '23:59:59.999';
SET @vHourMs = 3600000;
SET @vSecondsDivisor = 60;
SET @vMinuterMs = (@vHourMs/@vSecondsDivisor);
SET @vSecondsMs = (@vMinuterMs/@vSecondsDivisor);
SET @vMsMs = (@vSecondsMs/1000);
SELECT @vTotalTimeMs = DATEDIFF(ms,@vTime1,@vTime2);
SELECT @vHours = DATEDIFF(ms,@vTime1,@vTime2)/@vHourMs;
-- Main Query
-- Hour
SET @vTimeDiff = CASE WHEN @vHours < 10
THEN '0'+ CAST(@vHours AS CHAR(1))
ELSE CAST(@vHours AS CHAR(2))
END +':'
SELECT @vTotalTimeMs,@vRemainder,@vHourMs, @vTimeDiff;
SELECT @vRemainder = @vTotalTimeMs % @vHourMs; --<-- Total time expressed in ms, smallest denominator
-- Minute
SELECT @vTimeDiff = @vTimeDiff + CASE WHEN ((@vRemainder)/(@vMinuterMs)) < 10
THEN '0'+CONVERT(char,(@vRemainder)/(@vMinuterMs))
ELSE CONVERT(char,(@vRemainder)/(@vMinuterMs))
END +':'
SELECT @vTotalTimeMs,@vRemainder,@vMinuterMs, @vTimeDiff;
-- Adjust remaining time
SET @vRemainder = (@vRemainder%@vMinuterMs);
-- Seconds
SELECT @vTimeDiff = @vTimeDiff + CASE WHEN (@vRemainder)/(1000) < 10
THEN '0'+CONVERT(char,(@vRemainder)/(1000))
ELSE CONVERT(char,(@vRemainder)/(1000))
END +'.'
SELECT @vTotalTimeMs,@vRemainder,@vSecondsMs, @vTimeDiff;
-- Adjust remaining time
SET @vRemainder = (@vRemainder%@vSecondsMs);
SELECT @vTotalTimeMs,@vRemainder,@vMsMs, @vTimeDiff;
-- Milli second
SET @vMsValue = CAST((@vRemainder)/(@vMsMs) AS CHAR(3));
SELECT @vTimeDiff = @vTimeDiff + CASE LEN(@vMsValue)
WHEN 1 THEN '00'+@vMsValue
WHEN 2 THEN '0'+@vMsValue
ELSE @vMsValue
END;
SET @vRemainder = (@vRemainder%@vMsMs);
SELECT @vTotalTimeMs,@vRemainder,@vMinuterMs, @vTimeDiff;
-- Result
SELECT CAST(RTRIM(LTRIM(@vTimeDiff)) AS TIME(7)) AS [Time Differential];
answered May 21 '10 at 07:44 AM
FOM: fom: topos theory
Steve Awodey awodey at cmu.edu
Fri Jan 16 10:54:49 EST 1998
Colin McLarty has been doing such a good job of defending topos theory
against Steve Simpson's criticism that there seemed no need to jump in
(except perhaps to counter the impression that Colin is alone in his
opinions). But the recent turn to talking past each other and ad hominem
argument prompts me to finally try to come to Colin's aid.
In particular, with regard to Steve's posting:
>Date: Thu, 15 Jan 1998 20:41:29 -0500 (EST)
>From: Stephen G Simpson <simpson at math.psu.edu>
>Subject: FOM: topos theory qua f.o.m.; topos theory qua pure math
>Colin McLarty writes:
> > >I started the discussion by asking about real analysis in topos
> > >theory. McLarty claimed that there is no problem about this. After a
> > >lot of back and forth, it turned out that the basis of McLarty's claim
> > >is that the topos axioms plus two additional axioms give a theory that
> > >is easily intertranslatable with Zermelo set theory with bounded
> > >comprehension and choice.
> >
> > No, not at all. The basis of my claim was that you can do
> > real analysis in any topos with a natural number object. In that
> > generality the results are far weaker than in ZF (even without
> > the axiom of choice)--and allow many variant extensions with
> > various uses.
>Darn, I thought I had finally pinned you down on this. It sounded for
>all the world as if you were saying that the axiom of choice is useful
>for real analysis in a topos. Now I don't know what the heck you're
>saying. I'm losing patience, but I'll try one more time.
This suggests that Colin is trying to dodge the point - but in
fact, it seems clear to me that it's Steve who's preferring to split hairs
rather than hear what's being said; namely that there's absolutely no
difficulty involved in doing real analysis (or virtually any other modern
mathematics you choose) in a topos. There are (interesting) differences
depending on what kind of topos one is referring to, but there is no
problem with developing analysis in the usual way in any topos with NNO, as
Colin has clearly said several times.
In the same post, Steve Simpson continues:
> > >The two additional axioms are: (a) "there exists a natural number
> > >object (defined in terms of primitive recursion)"; (b) "every
> > >surjection has a right inverse" (i.e. the axiom of choice), which
> > >implies the law of the excluded middle.
> > >
> > >[ Question: Do (a) and (b) hold in categories of sheaves? ]
> >
> > (a) holds in every category of sheaves (on a topological
> > space or indeed on any Grothendieck site). (b) holds for sheaves
> > on a topological space only for a narrow range of spaces--for
> > Hausdorff spaces it holds only when the space is a single point.
> > (Even if you look at all Grothendieck sites, you get very few
> > more toposes satisfying (b).)
>That's what I thought. So sheaves may be a good motivation for topos
>theory plus (a), but they provide no motivation for topos theory plus
>(a) plus (b).
This is just plain wrong; any topos of sheaves on a complete boolean
algebra also satisfies (b), as does the topos of G-Sets for any group G
(these are sets equipped with an action by G; G-Sets is also a topos of
sheaves). As Steve himself has admitted, he really doesn't know anything
about topos theory. So why try to pronounce on its motivation?
Again, from that posting:
> > I think this is a crucial difference between set theory and
> > categorical foundations. Set theory has ONLY a foundational role in
> > mathematics. Category theory is used in many ways.
>Here on the FOM list, the crucial question for topos theorists is
>whether topos theory has ANY foundational role in mathematics. From
>the way this dialog with you is proceeding, I'm beginning to think
>that it doesn't. Or if it does, you aren't able or willing to
>articulate it.
This really seems unfair. Colin has been quite patient about going over
ground that's been well-known for literally decades. How often do we need
to re-ask and re-answer this tired question, whether topos theory has a
foundational role in mathematics? Maybe until the topos theorists (and the
current generation of open-minded students) get tired of the dialog and
decide to spend their energy in more productive ways?
>Topos theorists frequently make far-reaching but vague claims to the
>effect that topos theory is important for f.o.m. I have serious
>doubts, but I don't understand topos theory well enough to know for
>sure that topos theory *isn't* important for f.o.m. I'm giving you a
>chance to articulate why you think topos theory is important for
>f.o.m. So far, you haven't articulated it.
I don't recall this exchange starting by Colin being given a chance to
articulate why he thinks ... it seems to me that he neither needs to be
given such a chance (given his published record), nor has he accepted any
such challenge against which his remarks here should be judged.
In fact, I recall this particular exchange arising in a very
different way, with Steve making the following unwarranted assertion about
analysis in toposes and Colin calling him on it:
>From: Stephen G Simpson <simpson at math.psu.edu>
>Sender: owner-fom at math.psu.edu
>To: fom at math.psu.edu
>Subject: FOM: what are the most basic mathematical concepts?
>Date: Mon, 12 Jan 1998 20:00:18 -0500 (EST)
>Has there ever been a decent
>foundational scheme based on functions rather than sets? I know that
>some category theorists want to claim that topos theory does this, but
>that seems pretty indefensible. For one thing, there doesn't seem to
>be any way to do real analysis in a topos.
>-- Steve
By the way, as for the relationship between toposes in general and
categories of sheaves:
>Date: Wed, 14 Jan 1998 12:44:19 -0500 (EST)
>From: Stephen G Simpson <simpson at math.psu.edu>
>Subject: FOM: toposes vs. categories of sheaves
>Colin Mclarty writes:
> > >Categories of sheaves are toposes, but the notion of topos is much
> > >more general.
> >
> > This is like saying Hermann Weyl's GROUP THEORY AND QUANTUM
> > MECHANICS contains no group theory, since it only concerns the
> > classical transformation groups and the abstract group concept is
> > much more general.
>Huh? This strikes me as a misleading analogy.
>I'm no expert on toposes, so correct me if I'm wrong, but I'm pretty
>sure that the notion of topos is *much much much* more general than
>the category of sheaves on a topological space or even a poset.
You're wrong.
>instance, I seem to remember that if you start with any group G acting
>on any space X, there is a topos of invariant sheaves. So arbitrary
>groups come up right away. And this is just one example.
I don't know what point you're trying to make about "arbitrary groups"
here, but the topos of G-sheaves you mention hardly supports the claim
"that the notion of a topos *much much much* more general than the category
of sheaves on a topological space or even a poset" - the G-sheaves _are_
sheaves on a space, and the topos of G-sheaves sits inside the topos of all
sheaves in the most transparent way. Is it a matter of whether that counts
as *much much much* or just *much*?
>there lots of other toposes that have absolutely nothing to do with
No. In fact - modulo some technicalities that you surely don't care about
- every topos is a subtopos of one of the form G-sheaves for a suitable
(generalized) space and (generalized) group G.
Name: Steve Awodey
Position: Assistant Professor of Philosophy
Institution: Carnegie Mellon University
Research interest: category theory, logic, history and philosophy of math
More information: http://www.andrew.cmu.edu/user/awodey/
Call Of Code
I just saw this post about unsigned loop problems on Eli Bendersky's blog:
In it he tells a tale of woe with unsigned values not being easy to test when doing negative for loops, but one trick that I've used numerous times for checking ranges applies here too. I've marked
the changes.
Positive loop:
for (size_t i = 0; i < SIZE; ++i) ...
Negative loop:
for (size_t i = SIZE-1; i < SIZE; --i)
What? Ah, unsigned types are unsigned: when i drops below zero it wraps around to a really big value, so the i < SIZE test fails and the loop ends. Cool. Go use this in your code. Ranges are handled the same way:
if ( (unsigned_value-RANGE_MIN) < (RANGE_MAX-RANGE_MIN) )
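To make the trick concrete, here is a minimal sketch. The InRange helper, the data array, and the half-open [min, max) convention are my own illustration rather than anything from the original post; the point is that unsigned wrap-around turns both the reverse loop test and the two-sided range test into a single unsigned compare.

#include <cstdio>

// Hypothetical helper: true when value is in [min, max), using one compare.
// If value < min, (value - min) wraps to a huge number and the test fails.
static inline bool InRange(unsigned value, unsigned min, unsigned max) {
    return (value - min) < (max - min);
}

int main() {
    const unsigned SIZE = 4;
    unsigned data[SIZE] = { 10, 20, 30, 40 };

    // Reverse loop over an unsigned index: when i wraps past zero it becomes
    // huge, so the i < SIZE test fails and the loop stops.
    for (unsigned i = SIZE - 1; i < SIZE; --i)
        std::printf("%u ", data[i]);
    std::printf("\n");

    std::printf("%d %d %d\n", InRange(5, 3, 8), InRange(2, 3, 8), InRange(8, 3, 8)); // prints 1 0 0
}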
Encoding into bits
If you want to store a number in a computer, it has to be stored as a sequence of bits. These bits can be used to represent large numbers, but the number space represented by whole number counts of
bits is always a power of two.
• 1 bit = 2 options, or a range of 2 values = 2^1
• 2 bits = 4 options, or a range of 4 values = 2^2
• 3 bits = 8 options, or a range of 8 values = 2^3
• 10 bit = 1024 options, or a range of 1024 values = 2^10
As long as you can fit your value in under the number of options, you can encode it in that many bits. You can encode numbers from zero up to 1023 in 10 bits as it is under 1024.
For example :
• to store a 7 or a 4, you could use 3 bits (b111 and b100) as 7 and 4 are both under 8.
• to store a 10, you could use 4 bits (b1010), as 10 is under 16 (2^4=16)
Using your knowledge
You don't have to store your numbers starting at zero. As long as you know the minimum and maximum possible, you can shift the options into your needed range.
For example, I know that the number of legs on any table I own is always three or four. That's only two options, so I can store it in 1 bit. I store the number of legs as 3+(encoded bit)
• for 3 legs, encode a 0
• for 4 legs, encode a 1
I know that the valid temperature range on a piece of equipment ranges from 45 to 55 degrees, so I can encode it as a range of 11 (so, really 16 as that's the smallest power of two above 11), so I
encode the data at 45+encoded value
• for 45 degrees, encode a 0 (b0000)
• for 50 degrees, encode a 5 (b0101)
• for 55 degrees, encode a 10 (b1010)
As shown in the sketch below, the encoder just subtracts the minimum and the decoder adds it back.
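A small sketch of that offset idea in code. EncodeTemperature, DecodeTemperature and the MIN_TEMP/MAX_TEMP constants are hypothetical names for this example; the scheme is simply to store (value - minimum) in however many bits cover the range, and add the minimum back when decoding.

#include <cstdint>
#include <cassert>

// Example range from the text: valid temperatures run from 45 to 55 degrees.
constexpr unsigned MIN_TEMP = 45;
constexpr unsigned MAX_TEMP = 55;

uint8_t EncodeTemperature(unsigned degrees) {
    assert(degrees >= MIN_TEMP && degrees <= MAX_TEMP);
    return uint8_t(degrees - MIN_TEMP); // 0..10, fits in 4 bits
}

unsigned DecodeTemperature(uint8_t encoded) {
    return MIN_TEMP + encoded;          // shift back into the real range
}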
Wasting space
What's noticeable from the temperature example is that the values 11-15 are never used. We're wasting potentially usable bits, except they aren't whole bits, so there is no obvious way of using them.
How many options can you store in half a bit?
Using different bases
The reason for the waste is the number space being used. If we restrict ourselves to the number space of the computer, we make encoding and decoding easy for the CPU, but we waste storage space. If
we use a base other than 2, we can pack the data really tight. In theory, it's possible to pack your data so that you only waste less than one bit of data.
A simple example is in order. Assume you have a list of booleans with an additional unknown state. These tribools come up frequently so this might be of practical use to some of you. Tribools require
three options. How many can you store in a normal 8 bit byte? Using base 2, you have to use two bits per tribool, meaning that in 8 bits, you can store 4 tribools.
But 8 bits can store up to 256 different options, and if you slice it differently, you can pack more tribools in. Five tribools need only 3^5 = 243 options, which fits within the 2^8 = 256 options a byte provides.
• 2^8 = 256
• 3^5 = 243
To store the tribools, we merely need to generate the "option" of that set of 5 tribools, and then encode that value instead. The trivial example is all tribools but the first are zero:
• 0,0,0,0,0 => 0
• 1,0,0,0,0 => 1
• 2,0,0,0,0 => 2
What is interesting is what happens when we add to the other elements:
• 0,1,0,0,0 => 3 (1*0 + 3*1)
• 0,0,1,0,0 => 9 (1*0 + 3*0 + 9*1)
• 0,0,0,1,0 => 27 (1*0 + 3*0 + 9*0 +27*1)
• 1,2,1,2,1 => 151 (1*1 + 3*2 + 9*1 + 27*2 + 81*1)
We multiply each element by its base raised to its position: element 0 * 3^0, element 1 * 3^1, element 2 * 3^2, element 3 * 3^3, and element 4 * 3^4. Now we can store 5 tribools in 8 bits because we are actually storing
a number from 0 to 242 in 8 bits, but that number can be decoded into 5 different 3 option values. Decoding is handled by taking modulus and then dividing to remove that part of the encoded value.
• 151 % 3 = 1
• 151 / 3 => 50
• 50 % 3 = 2
• 50 / 3 = 16
• 16 % 3 = 1
• 16 / 3 = 5
• 5 % 3 = 2
• 5 / 3 => 1
• 1 % 3 = 1
Decoding is just like figuring out the time of day from the number of seconds elapsed.
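Here is a minimal sketch of the base-3 packing described above; PackTribools and UnpackTribools are my names for this example, not anything standard. Encoding multiplies each element by 3 raised to its position, and decoding peels digits off with modulus and division, exactly as in the worked example (1,2,1,2,1 encodes to 151).

#include <cstdint>
#include <cstdio>

// Pack five tribool values (each 0, 1 or 2) into one byte using base 3.
uint8_t PackTribools(const int t[5]) {
    uint32_t encoded = 0;
    uint32_t base = 1;
    for (int i = 0; i < 5; ++i) {
        encoded += uint32_t(t[i]) * base; // element i contributes t[i] * 3^i
        base *= 3;
    }
    return uint8_t(encoded); // always <= 242, so it fits in 8 bits
}

void UnpackTribools(uint8_t byte, int out[5]) {
    uint32_t encoded = byte;
    for (int i = 0; i < 5; ++i) {
        out[i] = int(encoded % 3); // peel off the lowest base-3 digit
        encoded /= 3;
    }
}

int main() {
    const int original[5] = { 1, 2, 1, 2, 1 }; // the example from the text
    uint8_t packed = PackTribools(original);
    int decoded[5];
    UnpackTribools(packed, decoded);
    std::printf("packed = %u\n", packed);      // prints 151
    for (int i = 0; i < 5; ++i) std::printf("%d ", decoded[i]);
    std::printf("\n");
}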
Probability of a value
Apart from knowing the range, we can also reduce the storage used by using our knowledge of how likely a thing is. This is known as probabilistic compression, where the chance of a value being
selected affects how many bits it uses. The one you learn about first is Huffman coding, but an easier to swallow example might be useful.
Let's say we have a game about sewing seeds in a garden full of plots, and letting them grow. If I am trying to compress the type of seed planted in a plot, but most of the plots are empty, I can say
that the most common type of plant is zero, the NULL plant. I can encode the plot data with only one bit for whether there is a plant, and then the actual plant type if there is. If we assume 8 types
of plant, then:
• no plant => (b0)
• plant of type 4 => (b1100) (bit 1 set because there is a plant, then bits 2-4 set as the plant type).
If the garden is empty, then the compressed sequence takes as many bits as there are plots. If all the plots are full, then the compressed sequence takes as many bits as there are plots more than a
naive solution (actually, if there are 8 types of plant, it works out the same as a naive solution: 8 plant types plus the empty state means 9 options, which needs 4 bits anyway because the lowest
power of two greater than or equal to 9 is 16).
It's possible to encode like this automatically, generating the probability of each option, then generating a binary tree which defines how you encode and decode values. This is Huffman coding, and
is very useful when you have no domain knowledge.
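A rough sketch of how the one-bit-prefix scheme might be written. The BitWriter, the LSB-first bit order, and the convention that -1 marks an empty plot are assumptions of this example rather than anything from the article; the point is that an empty plot costs one bit while a planted plot costs four.

#include <cstdint>
#include <cstddef>
#include <cstdio>
#include <vector>

// Minimal bit stream: append 'count' bits of 'value', least significant first.
struct BitWriter {
    std::vector<uint8_t> bytes;
    size_t bitCount = 0;
    void Write(uint32_t value, int count) {
        for (int i = 0; i < count; ++i) {
            if (bitCount % 8 == 0) bytes.push_back(0);
            if ((value >> i) & 1) bytes.back() |= uint8_t(1u << (bitCount % 8));
            ++bitCount;
        }
    }
};

int main() {
    // -1 means an empty plot, 0..7 is the plant type (so type 4 becomes 1 then 100).
    const int plots[] = { -1, -1, 4, -1, -1, -1, 7, -1 };
    BitWriter w;
    for (int p : plots) {
        if (p < 0) {
            w.Write(0, 1);           // empty plot: a single 0 bit
        } else {
            w.Write(1, 1);           // plant present
            w.Write(uint32_t(p), 3); // which of the 8 plant types
        }
    }
    // 6 empty plots at 1 bit plus 2 planted plots at 4 bits = 14 bits,
    // against 32 bits for a naive 4-bits-per-plot encoding.
    std::printf("%zu plots packed into %zu bits\n", sizeof(plots) / sizeof(plots[0]), w.bitCount);
}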
Compressing to less than 1 bit
If you have probability information, Huffman coding can be great, but it's not quite as efficient as it can be for the same reason why we found that we could save storage by changing the base when
compressing the tribools. Huffman must encode at least one bit for even the most common data. As an example of how we can do better, let's compress a garden with an additional bit of information:
whether there is anything in a whole row of plots.
• no plants in row => (b00)
• no plant in plot => (b01)
• plant of type 4 => (b1100)
What happens here is that the save takes up two bits per row rather than 1 bit per plot. That might be a significant saving. You can probably think of many other ways to mark larger areas of empty
data too. Gridding it out at a lower resolution, then setting bits for all cells that contain any data, for example. With domain knowledge comes much better compression. Huffman coding isn't readily
adaptable to wildly biased data, but it can be if you give it more symbols to work with. Therein lies the tradeoff. If you have to send a large palette of symbols, does the palette cost more to send
than the data? Run length encoding can get down to less than one bit per byte on some data, because runs of up to 128 bytes can be encoded in a single byte, and runs of millions can be encoded with 32-bit run lengths.
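For completeness, a tiny sketch of run length encoding. The (count, value) byte-pair layout is an assumption of this example; the 128-byte runs and 32-bit run lengths mentioned above are just other ways of sizing the count field.

#include <cstdint>
#include <cstddef>
#include <vector>

// Byte-wise RLE: each run is stored as (count, value), with count capped at 255.
// Long runs of a single value compress to well under a bit per original byte.
std::vector<uint8_t> RleEncode(const std::vector<uint8_t>& in) {
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < in.size()) {
        uint8_t value = in[i];
        uint8_t count = 1;
        while (i + count < in.size() && in[i + count] == value && count < 255)
            ++count;
        out.push_back(count);
        out.push_back(value);
        i += count;
    }
    return out;
}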
Domain knowledge
Really powerful compression comes from understanding the probabilities and rules of data being compressed. If you have a look at the lossy compression algorithms, this becomes more easy to
understand. In lossy compression, you have domain knowledge of what humans can and regularly perceive. If the computer is able to analyse the data and find the information and only compress that,
then it can reduce a very large raw file down to something significantly smaller.
Let's compress our garden again. This time, we know that the garden cannot have any plant directly next to another plant.
• no plant in plot => (b0)
• plant of type 4 => (b1100)
• plot where a plant cannot exist because of the rules above (relying on already encoded data) => (b)
Yes, zero bits. Zero bits to encode the fact that a cell is empty when it's not possible to have a plant in it. It makes sense because the fact that the cell is empty was already encoded by
the neighbouring plot encoding a plant type. It's not really zero bits, it's about 1/5th of a bit, as the plot with a plant in it defines the state for 5 cells (itself, the one following, and the three on the next row).
There are often many places where rules give you information for free, you just need to look out for them.
Sometimes you just want some simple code you can drop in to do bezier curves:
Templated Bezier function:
static inline float CR40( float t ) { return 0.5f*(2*t*t-t*t*t-t); }
static inline float CR41( float t ) { return 0.5f*(2.0f-5*t*t+3*t*t*t); }
static inline float CR42( float t ) { return 0.5f*(4*t*t+t-3*t*t*t); }
static inline float CR43( float t ) { return 0.5f*(t*t*t-t*t); }
template< typename T >
static inline T CRBez4( T a, T b, T c, T d, float t )
{
    return CR40(t) * a + CR41(t) * b + CR42(t) * c + CR43(t) * d;
}
Well, now you can. Here's some rather old code I found, and recently reused.
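For what it's worth, here is a hypothetical usage sketch; Vec2 is a stand-in type, not from the original code. The CR prefix suggests these are Catmull-Rom basis weights, so the curve runs through b at t=0 and c at t=1.

// Hypothetical usage of CRBez4 with a stand-in 2D vector type.
struct Vec2 { float x, y; };
inline Vec2 operator*( float s, Vec2 v ) { return Vec2{ s * v.x, s * v.y }; }
inline Vec2 operator+( Vec2 a, Vec2 b ) { return Vec2{ a.x + b.x, a.y + b.y }; }

void SampleSegment()
{
    Vec2 p0{ 0, 0 }, p1{ 1, 2 }, p2{ 3, 2 }, p3{ 4, 0 };
    for( int i = 0; i <= 10; ++i )
    {
        float t = i / 10.0f;
        Vec2 p = CRBez4( p0, p1, p2, p3, t ); // runs from p1 (t=0) to p2 (t=1)
        (void)p; // draw or store the sample here
    }
}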
Let's say you have a large table of data, lots of rows with a few columns. You know that every row is unique because that's a basic feature of your table, no duplicates. There are only a few
instances where non-unique elements are a good idea in tables, so we're going to ignore that corner case for now.
Let's look at what we have with an example:
Find out if Bob is the leader of team A
│Name │Clan │Rank │
│alice │team A│leader │
│bob │team B│private │
│bob │A team│leader │
│charlie │A team│private │
We can see that we can't just check any one column and return on success; much like comparing a string, we have to check every element before we're sure we have a match. You could
maintain the table's alphabetical order, reducing the string queries down to only the rows that match "bob" or "team A" (depending on which column you sorted by). If you walk through the table, you
have to string match at least one column.
But, we know that the table is made up of unique elements. There will be only one row returned if at all, so that makes this table a set. It's the set of Name Clan Rank associations that are known to
be true. There are a very large number of potentially true values in this set, the product of the list of all potential names, all potential clans, and all potential ranks. Let's assume 4 billion
names, 1 million clans, and 100 ranks. If we were to use a bit field to identify all the possible permutations, we'd not be able to fit the array into our computers. 4×10^17 combinations would
be, quite literally, larger than we could handle. But sparse bit fields have better solutions than just simple array lookups.
What we've done is reduced the problem to its essential components. We're not looking for the row that contains the data, we're looking to see if the data exists. We don't want to access the data,
only find out if it's in there. For this task, there are many solutions, but I'll just mention one.
I like hashes. My solution to this is to store the hashes of all the rows in an array of small arrays. I index into those small arrays by a small number of bits: the smallest number of bits
that differentiates the hashes enough that no bucket overflows the capacity of its cache-line-sized mini array.
H1[bits0-n] -> index of array it should be in.
With this, I do one cache line load to get at the array of hashes, and if the hash is in there, then I can return that the row does exist. If not, then I can assert that it doesn't. And that's quick.
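For reference, a minimal sketch of that bucket-of-hashes idea. The names and sizes are made up, and a real version would handle bucket overflow and accept that two different rows can, rarely, share a 32-bit hash.

#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct RowHashSet {
    static constexpr int kBucketBits = 12;   // 4096 buckets; tune so buckets stay cache-line sized
    std::vector<std::vector<uint32_t>> buckets;

    RowHashSet() : buckets(size_t(1) << kBucketBits) {}

    static uint32_t HashRow( const std::string& name, const std::string& clan, const std::string& rank ) {
        std::hash<std::string> h;
        return static_cast<uint32_t>( h( name + "|" + clan + "|" + rank ) );
    }
    void Insert( const std::string& n, const std::string& c, const std::string& r ) {
        uint32_t hash = HashRow( n, c, r );
        buckets[hash >> (32 - kBucketBits)].push_back( hash );
    }
    bool Contains( const std::string& n, const std::string& c, const std::string& r ) const {
        uint32_t hash = HashRow( n, c, r );
        for( uint32_t stored : buckets[hash >> (32 - kBucketBits)] )
            if( stored == hash ) return true;   // a matching hash is taken as "row exists"
        return false;
    }
};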
The static C++ keyword has multiple meanings based on context.
When used inside a function,
void foo() {
    static int i;
    printf( "%i", i );
}
it means that the declared variable is global in lifetime scope, even though only local to the function for reading and writing. Normally this is used to implement singletons.
When used inside a class,
class Bar {
    static void Baz() { ++foo; }
    static int foo;
};
it means that the declared member variable or function doesn't require a this pointer, and thus doesn't require an instance to operate on. This can be useful for implementing class-specific memory pools,
or other types of class-specific global state encapsulation.
When used in front of a variable or function,
static void Baz() { ++foo; }
static int foo;
it means that the global variable or free function are not destined to be used outside the current compilation unit, which reduces the number of symbols during link time, which can lead to better
build times.
There is a missing piece of functionality in that you cannot define a class that doesn't offer external linkage. You cannot specify that a class doesn't export its member functions. This is a shame, as
it guarantees that the link time of object-oriented C++ code gets worse, and you can't fix it.
Never use equality checking with floats; they are not accurate enough for checking with an equals operator.
For example, this loop would run forever:
for( float a = 0.0f; a != 1.0f; a += 0.1f )
Not ten times. Try it.
This is because 0.1f is an approximation. The IEEE standard float cannot represent 0.1f, only a very near neighbour. In fact, once you've added all ten, the bit pattern is only off by 1 bit, so it's
pretty accurate, but still not equal.
1.0f = 0x3f800000
the sum of ten 0.1f is 0x3f800001
quite remarkable that it's so close.
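If you want to see it for yourself, a small check along these lines will print the bit patterns (assuming IEEE 754 single-precision floats, which is what every current console and PC gives you):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    float sum = 0.0f;
    for( int i = 0; i < 10; ++i ) sum += 0.1f;

    uint32_t bitsSum, bitsOne;
    float one = 1.0f;
    std::memcpy( &bitsSum, &sum, sizeof bitsSum );
    std::memcpy( &bitsOne, &one, sizeof bitsOne );
    std::printf( "1.0f         = 0x%08x\n", bitsOne ); // 0x3f800000
    std::printf( "ten 0.1f sum = 0x%08x\n", bitsSum ); // 0x3f800001 on typical targets
    std::printf( "equal? %s\n", sum == one ? "yes" : "no" );
    return 0;
}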
So, you agree, floats are not safe for checking equality, right?
floats are great for equality, but just don't use the wrong ones.
for example, 0.125f has an exact representation in floats, so adding together eight of them will in fact, be perfect.
In fact, floats have exact representations for all the integers right up to 16,777,216.0f, after which the mantissa runs out of accuracy.
so, when you see code like this:
for( float a = 0.0f; a < 100.0f; a += 1.0f )
{
    float percent = a * 0.01f;
    /* do something with percent */
    if( a == 0.0f || a == 25.0f || a == 50.0f || a == 75.0f )
    {
        /* something to do with quadrants */
    }
}
then you can trust it just fine. Seriously. Also, this is useful for times when you might want to not suffer the notorious load-hit-store. If you had done this:
for( int a = 0; a < 100; ++a )
{
    float percent = a * 0.01f;
    /* do something with percent */
    if( a == 0 || a == 25 || a == 50 || a == 75 )
    {
        /* something to do with quadrants */
    }
}
you would be converting from int to float on every run through the loop, which is slow on any modern console.
No, Gimbal-lock* is not the answer.
Quaternions provide a different way of representing a transform. The representation is limited to rotations; optionally you can try to use it for mirrorings too, but it gets a little awkward so we
don't normally do that.
Matrices provide a complete solution to representation of the orientation and position of an object, so they can be considered to be the de-facto solution for static scene transform representation,
however, because they are a linear re-combination, they do suffer when used dynamically.
When you create your model matrix, the one for your windmill, you would normally generate a matrix (something like Matrix::MakeXRot( bladeAngle )) then set it into the rendering transform hierarchy.
All well and good. What this is doing is setting a static matrix every time you change the bladeAngle value. If you are rendering a robotic arm, you would have a number of different matrices at
different levels in that hierarchy, each one concatenating with another to finally provide that finger transform.
If you're not provided with all the angles, and are instead told that you have a start and end value for each transform in the hierarchy, then that works fine for the examples so far: you
interpolate the angles and produce new matrices that are in-between the start and end matrices. This works fine because we're still in the world of scalars; interpolating between 0 and 100 for the
windmill just gives us a value that we use to generate a rotation matrix. So all is fine.
But, when we are given the matrices, and not the rotations, suddenly we have a surplus of information, and we're not sure what to do about it. If you have a character end position that is a 180-degree turn
from its start position (a Y-axis turn of PI), then you cannot tell which way to turn to interpolate, nor can you tell whether the real intention was something more obscure, like a scaling
to -1 in both the x and z axes.
Okay, so that last possible animation is unlikely to be a feature, more likely to be a bug, but that's what happens if you linearly interpolate a matrix. That's what normally happens when you
linearly interpolate a linear transform.
This is where quaternions come in: they aren't a linear representation of orientation, not only are they non-linear, they're also not really rotations. The best explanation I've read / figured out
for myself about what they really are, is that they are a kind of 4D mirroring operation, which when applied twice gives you the rotation you would expect. As far as anyone using them should be
concerned though, they are vectors that give you a space for doing interpolation of 3D rotations.
A quaternion represents a rotation, and it can be interpolated to produce a rotation that will be on the shortest path between the two given orientations. Quaternions can be interpolated linearly,
spherically, or used in higher order functions such as beziers or TCB splines.
That is what you need to know. It allows you to interpolate properly.
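As a concrete illustration, here is a minimal normalised-lerp sketch with a made-up Quat type (slerp follows the same shape with trig-based weights); the sign flip is what keeps the interpolation on the shorter of the two possible paths.

#include <cmath>

struct Quat { float x, y, z, w; };

Quat Nlerp( Quat a, Quat b, float t )
{
    // If the quaternions point into opposite hemispheres, negate one so we
    // interpolate along the shorter arc (q and -q are the same rotation).
    float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    float sign = ( dot < 0.0f ) ? -1.0f : 1.0f;

    Quat q;
    q.x = a.x + t * ( sign * b.x - a.x );
    q.y = a.y + t * ( sign * b.y - a.y );
    q.z = a.z + t * ( sign * b.z - a.z );
    q.w = a.w + t * ( sign * b.w - a.w );

    float len = std::sqrt( q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w );
    q.x /= len; q.y /= len; q.z /= len; q.w /= len;
    return q;
}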
Spherical linear interpolation, which leads to very smooth results, can often be relegated to trying too hard territory. Most animators and motion capture data already provide enough keys that doing
the interpolation that carefully is not really necessary. Don't forget that artists don't generally see sub frame data, so getting it right is pointless in most cases.
* Gimbal-lock is when you can't interpolate from one rotation to another without doing some long winded route all round the houses. It manifests in Eulers due to the polar representation. Any time
you get near a pole, you have to work overtime to get the transform in the right orientation.
Math Help
I have a problem I don't have the solution manual for, and I am stumped on how to complete it.
This could be a trigonometric substitution?
$\int_1^2 \frac {dx}{x \sqrt {4 + x^2}}$
This is what I came up with, but I don't think it's right...
$x = 2\tan\theta$
$dx = 2\sec^2\theta \, d\theta$
$\sqrt{2^2 + (2\tan\theta)^2}$
$= \sqrt{2^2(\tan^2\theta + 1)}$
$= 2\sec\theta$
Substitute back into the original integral:
$\int_1^2 \frac{2\sec^2\theta \, d\theta}{2\tan\theta \cdot 2\sec\theta}$
cross multiply
$\int_1^2 \frac{2\sec^2\theta}{2\tan\theta}$
Not sure if the 2s cancel out, or what to do with them, or if I am even right at this point?
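For what it's worth, one way the working could continue from here (a sketch using the identity $1 + \tan^2\theta = \sec^2\theta$, not taken from the solution manual):

$\int \frac{2\sec^2\theta\,d\theta}{2\tan\theta \cdot 2\sec\theta} = \frac{1}{2}\int \frac{\sec\theta}{\tan\theta}\,d\theta = \frac{1}{2}\int \csc\theta\,d\theta = -\tfrac{1}{2}\ln\left|\csc\theta + \cot\theta\right| + C$

Back-substituting $\csc\theta = \frac{\sqrt{4+x^2}}{x}$ and $\cot\theta = \frac{2}{x}$ gives, over $x = 1$ to $2$,

$\int_1^2 \frac{dx}{x\sqrt{4+x^2}} = \tfrac{1}{2}\ln\frac{\sqrt{5}+2}{\sqrt{2}+1} \approx 0.281$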
Woodstock, GA Algebra 2 Tutor
Find a Woodstock, GA Algebra 2 Tutor
...Let's connect these pieces together. "Putting one foot in front of the other", is a simple motion but crucial if one is to move forward from one point to another under their own power.
Algebra, is this step that carries one from a basic math concept to higher math. My unique style gives students the confidence they need to make those steps and move forward.
26 Subjects: including algebra 2, English, reading, geometry
...I never bill for a tutoring session if the student or parent is not completely satisfied. While I have a 24 hour cancellation policy, I often provide make-up sessions. I usually tutor students
in a public library close to their home, however I will travel to another location if that is more convenient for the student.
8 Subjects: including algebra 2, statistics, algebra 1, trigonometry
...I tutor Algebra, Pre-Calculus, Calculus, Geometry,General Chemistry and more from Elementary-Undergrad. I have engaged in purely freelance tutoring, and have worked for Atlanta Tutor Doctor as
well. I'm focused on staying in tune with my clients and try to meet their specific needs.
25 Subjects: including algebra 2, chemistry, calculus, reading
I have a wide array of experience working with and teaching kids grades K-10. I have tutored students in Spanish, Biology, and Mathematics in varying households. I have instructed religious
school for 5 years with different age groups, so I am accustomed to working in multiple settings with a lot of material and different student skill.
16 Subjects: including algebra 2, Spanish, chemistry, calculus
I currently teach Statistics and Physics at a private school in Atlanta and I am very skilled at presenting complex concepts to my students in a very clear and understandable manner. I
successfully tutored well over one hundred students and because of this experience my sessions are very effective....
20 Subjects: including algebra 2, calculus, geometry, physics
Related Woodstock, GA Tutors
Woodstock, GA Accounting Tutors
Woodstock, GA ACT Tutors
Woodstock, GA Algebra Tutors
Woodstock, GA Algebra 2 Tutors
Woodstock, GA Calculus Tutors
Woodstock, GA Geometry Tutors
Woodstock, GA Math Tutors
Woodstock, GA Prealgebra Tutors
Woodstock, GA Precalculus Tutors
Woodstock, GA SAT Tutors
Woodstock, GA SAT Math Tutors
Woodstock, GA Science Tutors
Woodstock, GA Statistics Tutors
Woodstock, GA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Acworth, GA algebra 2 Tutors
Alpharetta algebra 2 Tutors
Canton, GA algebra 2 Tutors
Douglasville algebra 2 Tutors
Dunwoody, GA algebra 2 Tutors
Holly Springs, GA algebra 2 Tutors
Kennesaw algebra 2 Tutors
Lebanon, GA algebra 2 Tutors
Mableton algebra 2 Tutors
Marietta, GA algebra 2 Tutors
Milton, GA algebra 2 Tutors
Roswell, GA algebra 2 Tutors
Sandy Springs, GA algebra 2 Tutors
Smyrna, GA algebra 2 Tutors
Snellville algebra 2 Tutors
Can Anybody help me out, in converting my code to iterative method.
I am new to programming and am currently using Dev-C++. I was asked to write a method to find x^y (x raised to the power y), using both a recursive and an iterative solution. I already have the recursive one,
but I don't know how to make it iterative. Here is my recursive solution.
#include <stdio.h>

int power (int, int);

int main ()
{
    int base, exp, result;
    printf("Please enter a base number\n");
    scanf("%d", &base);  /* input reads assumed; the prompts imply them */
    printf("Please enter a exponent number\n");
    scanf("%d", &exp);
    result = power(base, exp);
    printf("\n The result is:%d\n", result);
    return 0;
}

int power (int base, int exp)
{
    if(exp >= 1)
        return base * (power(base, exp - 1));
    return 1;
}
Help would be highly appreciated. Thanks in advance.
Use a loop to keep multiplying the base exp number of times
Yeah, I tried to use a loop and got this code...
#include <stdio.h>

int main()
{
    int base, exp, result;
    printf("Please enter a base number\n");
    scanf("%d", &base);  /* input reads assumed, as above */
    printf("Please enter a exponent number\n");
    scanf("%d", &exp);
    int i = 1;
    for (i=1; i < exp; i++)
        result = result * base;  /* loop body assumed from the discussion below */
    printf("\n The result is:%d\n", result);
    return 0;
}
The problem is I can't get the exact value when inputting a base of 4 and exponent of 4.. it gives a result of 128, it should be 256.... can somebody help me please... Thanks
Take a look at the loop condition. How many times does it run?
Also, I don't see an initial value for result. You're lucky the result isn't more off than it is.
Also, take at look at your first post (as edited by me), and see how much better the code looks.
Now read the intro threads, and make your latest effort just as presentable.
Now, I got it... Thanks a lot... clap clap clap...
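For completeness, a corrected iterative version along the lines the replies suggest might look like this (a sketch, not the original poster's final code):

#include <stdio.h>

int power(int base, int exp)
{
    int result = 1;                /* give result an initial value */
    int i;
    for (i = 0; i < exp; i++)      /* multiply exp times, not exp - 1 */
        result = result * base;
    return result;
}

int main(void)
{
    int base, exp;
    printf("Please enter a base number\n");
    scanf("%d", &base);
    printf("Please enter a exponent number\n");
    scanf("%d", &exp);
    printf("\n The result is:%d\n", power(base, exp));
    return 0;
}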
Cryptology ePrint Archive: Report 2011/120
Faster 2-regular information-set decoding
Daniel J. Bernstein and Tanja Lange and Christiane Peters and Peter Schwabe

Abstract: Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and related proposals. This problem differs from the usual information-set-decoding problems in that (1) the target codeword is required to have a very regular structure and (2) the target weight can be rather high, so that there are many possible codewords of that weight.

Augot, Finiasz, and Sendrier, in the paper that introduced FSB, presented a variant of information-set decoding tuned for 2-regular decoding. This paper improves the Augot–Finiasz–Sendrier algorithm in a way that is analogous to Stern's improvement upon basic information-set decoding. The resulting algorithm achieves an exponential speedup over the previous algorithm.

Category / Keywords: secret-key cryptography / Information-set decoding, 2-regular decoding, FSB, binary codes
Date: received 9 Mar 2011
Contact author: tanja at hyperelliptic org
Available format(s): PDF | BibTeX Citation
Version: 20110310:022616
Peter Cramton
The methodology of economics employs mathematical and logical tools to model and analyze markets, national economies, and other situations where people make choices. Understanding of many economic
issues can be enhanced by careful application of the methodology, and this in turn requires an understanding of the various mathematical and logical techniques. The course reviews concepts and
techniques usually covered in algebra, analytical geometry, and the first semester of calculus. It also introduces the components of subsequent calculus and linear algebra courses most relevant to
economic analysis. The reasons why economists use mathematical concepts and techniques to model behavior and outcomes are emphasized throughout. The course will meet three times a week, twice for
lectures and once in discussion section conducted by a teaching assistant. Lectures will expand on material covered in the text, stressing the reasons why economists use math and providing additional
explanation of formal mathematical logic. Discussion sections will demonstrate solutions for problems, answer questions about material presented in the lectures or book, and focus on preparing
students for exams. Students should be prepared to devote 3-4 hours per week outside class meetings, primarily working on problem sets as well as reading and reviewing the text, additional reading
assignments, and class notes. Course Objectives: Each student should be able by the end of the semester to:
• Recognize and use the mathematical terminology and notation typically employed by economists
• Explain how specific mathematical functions can be used to provide formal methods of describing the linkages between key economic variables
• Employ the mathematical techniques covered in the course to solve economic problems and/or predict economic behavior
• Explain how mathematical concepts enable economists to analyze complicated problems and generate testable hypotheses
Wouldn’t life be simple if, in making decisions, we could ignore the interests and actions of others? Simple yes–but boring too. The fact remains that most real-world decisions are not made in
isolation, but involve interaction with others. This course studies the competitive and cooperative behavior that results when several parties with conflicting interests must work together. We will
learn how to use game theory to formally study situations of potential conflict: situations where the eventual outcome depends not just on your decision and chance, but the actions of others as well.
Applications are drawn from economics, business, and political science. Typically there will be no clear cut “answers” to these problems (unlike most single-person decisions). Our analysis can only
suggest what issues are important and provide guidelines for appropriate behavior in certain situations.
Economists increasingly are asked to design markets in restructured industries to promote desirable outcomes. This course studies the new field of market design. The ideas from game theory and
microeconomics are applied to the design of effective market rules. Examples include electricity markets, spectrum auctions, environmental auctions, and auctions of takeoff and landing slots at
congested airports. The emphasis is both on the design of high-stake auction markets and bidding behavior in these markets.
Economics 603 is the first half of the Economics Department’s two-semester core sequence in Microeconomics. This course is taken by all first-year Economics Ph.D. students, as well as by quite a few
Ph.D. students in Agricultural & Resource Economics, the Smith School of Business, and other academic departments. The first half of the semester treats consumer theory and the theory of the firm.
The second half of the semester is an introduction to game theory and its applications in economics.
Presents a formal treatment of game theory. We begin with extensive-form games. A game tree is defined, as well as information sets and pure, mixed and behavioral strategies. Existence of Nash
equilibria is discussed. We then turn to the analysis of dynamic games, covering repeated games, finitely repeated games, the folk theorem for repeated games, subgame perfection, and punishment
strategies. Next, games with incomplete information are studied, including direct revelation games, concepts of efficiency, and information transmission. Several refinements of Nash equilibria are
defined, such as sequential equilibria, stable equilibria, and divine equilibria. The analysis of enduring relationships and reputations is covered. The course concludes with a discussion of two
important applications of game theory: auctions and bargaining. The topics include sealed-bid auctions, open auctions, private valuation and common valuation models, the winner’s curse, auction
design, bargaining with incomplete information, double auctions and oral double auctions.
Going Whole Ballhog
If you're one of the tens of readers who follow me, then unless the bottom of your rock doesn't carry ESPN, you've probably heard something about this kid from Grinnell who dropped 138 points
on a hapless Faith Baptist Bible College basketball team. Now, granted, this was a Division III basketball game—hardly the acme of organized basketball. Still, as Kobe Bryant said,
"I mean, I don't care what level you're at, scoring 138 points is pretty insane." Jack Taylor is a household name now, people.
Rather predictably, there was some backlash, with some people claiming that it was rigged, or that it was selfish basketball, or at least not The Way That Basketball Should Be Played (because
anything that portentous has to be written upstyle). I can't say anything as to whether it was rigged, although it didn't look like it to me, and as with any conspiracy theory, it's easy to say
something like that when you don't have to offer any proof. All you have to do is throw out your hands and say, "It's common sense!"
But we can say something about whether it was selfish or bad basketball. Some folks have taken it upon themselves to make a virtue out of evenly distributed teamwork. That's fine as a matter of
personal opinion, but they make a mistake, I think, who believe that it's an intrinsic virtue of basketball. It wasn't an intrinsic virtue of basketball when Naismith put up the first peach baskets,
and until someone invents a game that makes teamwork an explicit scoring feature, there isn't a sport where it's an intrinsic virtue. (I also think that some of these folks could benefit from playing
with a scoring phenom, just to see what it's like, but that's neither here nor there.)
What makes it a virtue—when it is a virtue—is that it makes a team more efficient, by and large. On the occasions when a player goes out and consciously attempts to score a bunch, it quite frequently
turns out that the other players on the team are more efficient, and thus the team as a whole would have been more efficient if the offense had been more evenly distributed. This is a basic result
from game theory.
But that didn't turn out to be the case here. Taylor scored 138 out of his team's 179 points. That's 77 percent. To get those points, of course, he used up a lot of his team's possessions: 69
percent, according to ESPN. It is a lot, but it shouldn't overshadow the fact that the rest of his team used up the remaining 31 percent of the possessions and ended up scoring only 23 percent of the points.
Let's see how that stacks up against two other phenomenal scoring performances of the past: Wilt Chamberlain's mythic 100-point night in Hershey, and Kobe's own 81-point barrage at home against the
Toronto Raptors. (Taylor nearly had 81 just in the second half.) I'm going to ignore claims that the Warriors game was a farce in the second half, or that the Toronto Raptors were a defensive sieve;
I'm only interested in the efficiency figures.
Chamberlain's Warriors scored 169 points that night, so Chamberlain scored 59 percent of his team's points, using (again according to ESPN) 47 percent of his team's possessions. Kobe's Lakers scored
122 points, so he contributed 66 percent of his team's points, while using (ESPN again) just 51 percent of the team's possessions.
One way to look at these feats is to consider how much more efficient the individual players were than the rest of the team. So, on a percentage basis, Taylor scored 77 percent of the points on 69
percent of the possessions, whereas the rest of the team scored 23 percent of the points on 31 percent of the possessions. Taylor, therefore, was (77/69) / (23/31) = 1.50 times as efficient as his
teammates. Similarly, Chamberlain was (59/47) / (41/53) = 1.62 times as efficient, and Kobe was (66/51) / (34/49) = 1.87 times as efficient.
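In symbols (my shorthand, not the post's: p is a share of the team's points, u a share of its possessions), the measure used above is

$$\text{relative efficiency} = \frac{p_{\text{player}} / u_{\text{player}}}{p_{\text{rest}} / u_{\text{rest}}}, \qquad \text{e.g. for Taylor } \frac{77/69}{23/31} \approx 1.50.$$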
However, such a measure can easily be misleading. If someone plays a single minute, puts up a single three-pointer, and makes it, they might (as a normal example) have 3 percent of the team's points
with only 1 percent of its possessions. By the same metric, such a player would be (3/1) / (97/99) = 3.06 times as efficient as his teammates. What's missing is some measure of the magnitude of the
player's impact.
A more representative measure of the player's efficiency impact can be obtained by considering how efficient the team would have been if the other players had managed to use up all of their team's
possessions, at the same efficiency they had been exhibiting. For instance, Taylor's teammates used up 31 percent of the possessions, scoring 23 percent of the points they eventually scored. If they
had continued at that same clip, but used up 100 percent of the possessions, they would have eventually scored 133 points—about 74 percent as much as they actually did. To put it another way, the
team with Taylor was 31/23 = 1.35 times as efficient as they would have been without him.
Using that as our guideline, the Warriors with Chamberlain were 53/41 = 1.29 times as efficient as they would have been without him, and Kobe's Lakers were 1.44 times as efficient as they would have
been without him. Just as a demonstration of how amazing all of these numbers are, if a team averages a true shooting percentage of 50 percent amongst four players, and the remaining player uses up
half the possessions with a true shooting percentage of 70 percent, that team is only 1.20 times as efficient as they would be without that player. To increase their teams' efficiency as much as they
did, these three athletes had to be remarkably efficient and prolific.
Capitol Heights Geometry Tutor
...I also give my students homework and provide timely feedback to them on their strengths and weaknesses. I also ask for feedback from my students on my standard of tutoring and how to make the
tutoring process and experience much better. For the convenience of the students, I am very willing to travel to any location that is more convenient for my student.
28 Subjects: including geometry, reading, physics, biology
...I can help you with anything from basic spreadsheets to graphing and charts and macro programming. I can help set up your personal or business finances in Excel and show you shortcuts and
tricks to make you life easier and more efficient. I have been an accounting tutor for over 7 years and hav...
28 Subjects: including geometry, reading, English, writing
...I tutored for 3 years during college ranging from remedial algebra through third semester calculus. I have experience in Java (and other web programming) and Microsoft Excel. I generally have
the student work through (or at least make an attempt at) a problem first and then work with them through step-by-step, discussing strategies for approaching similar problems and concepts.
13 Subjects: including geometry, calculus, GRE, SAT math
...At the tutoring session concepts will be made clear, examples are worked, the learner will be assisted to do similar items until she/he is able to do similar exercises independently. My
approach is flexible to meet the needs of the learner. Depending on their needs, at the end of my tutoring my...
7 Subjects: including geometry, algebra 1, algebra 2, SAT math
...This led me to get a Phd. in linguistics. I have taught English to students from various countries, including all the inhabited continents that don't already speak English. I taught English to
German students in Germany for two years.
22 Subjects: including geometry, reading, Spanish, English
|
{"url":"http://www.purplemath.com/Capitol_Heights_Geometry_tutors.php","timestamp":"2014-04-19T23:28:55Z","content_type":null,"content_length":"24495","record_id":"<urn:uuid:78c00902-e09e-4fad-aa0d-6a5ce143e894>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Palisade, NJ
New York, NY 10028
STAT/MATH/Actuarial Science/CFA/MBA/Fin. - Ivy League Exp & Prof
Holding First Class Honours Degrees in Mathematical Sciences and Psychology from Newcastle University, and an MSc. in Applied Statistics from Cambridge University, England, I became a full-time Private
Mathematics Coach over ten years ago, tutoring students to undergraduate...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
West Springfield, VA Geometry Tutor
Find a West Springfield, VA Geometry Tutor
...I completed a B.S. degree in Applied Mathematics from GWU, graduating summa cum laude, and also received the Ruggles Prize, an award given annually since 1866 for excellence in mathematics. I
minored in economics and went on to study it further in graduate school. My graduate work was completed...
16 Subjects: including geometry, calculus, econometrics, ACT Math
...These days, I have a full-time job where I send upwards of 75-100 emails per day using Microsoft Outlook. I also manage the calendars for three very busy attorneys. With a Bachelor of Arts in
Political Science and Japanese, I feel I have a solid grasp of social studies, both from a social science perspective, but also a comparative cultural focus.
33 Subjects: including geometry, reading, English, writing
...I use traditional classroom methods as well as visual and auditory re-enforcement, real life examples and the occasional field trip. I teach all levels of students and specifically instruct
Geometry, Writing, and Science - Physical, Life, Earth Sciences. Most of my students are ESOL as well so we incorporate learning English as part of the main subject.
11 Subjects: including geometry, writing, ESL/ESOL, TOEFL
...I can help you develop better study skills and strategies to help you be successful, too. I will tutor you in all parts of the Praxis that you need help. I can also help coach you through and
show you some useful techniques to help you overcome test anxiety.
35 Subjects: including geometry, reading, English, chemistry
...NOTE: 1. I don't charge you if you aren't satisfied! 2. Up to 15% discount is possible!
12 Subjects: including geometry, calculus, algebra 1, algebra 2
Related West Springfield, VA Tutors
West Springfield, VA Accounting Tutors
West Springfield, VA ACT Tutors
West Springfield, VA Algebra Tutors
West Springfield, VA Algebra 2 Tutors
West Springfield, VA Calculus Tutors
West Springfield, VA Geometry Tutors
West Springfield, VA Math Tutors
West Springfield, VA Prealgebra Tutors
West Springfield, VA Precalculus Tutors
West Springfield, VA SAT Tutors
West Springfield, VA SAT Math Tutors
West Springfield, VA Science Tutors
West Springfield, VA Statistics Tutors
West Springfield, VA Trigonometry Tutors
Nearby Cities With geometry Tutor
Baileys Crossroads, VA geometry Tutors
Burke, VA geometry Tutors
Cameron Station, VA geometry Tutors
Dale City, VA geometry Tutors
Fort Hunt, VA geometry Tutors
Franconia, VA geometry Tutors
Lake Ridge, VA geometry Tutors
Lincolnia, VA geometry Tutors
North Springfield, VA geometry Tutors
Oak Hill, VA geometry Tutors
Potomac Falls, VA geometry Tutors
Rosslyn, VA geometry Tutors
Saint Charles, MD geometry Tutors
Springfield, VA geometry Tutors
Wheaton, MD geometry Tutors
Lamirada, CA Precalculus Tutor
Find a Lamirada, CA Precalculus Tutor
...In my own competitions, I have registered personal bests of 1:49.4 in the 800m and 4:00.38 in the Mile. In the name of good fun, I have won two trail marathons, while setting course records in
each: Buzz Marathon (San Miguel, CA -- 2010); Run the River Marathon (Folson-Sacramento, CA -- 2009). ...
58 Subjects: including precalculus, reading, English, calculus
...I have helped students with some basic linear algebra, such as matrices and systems of equations, in algebra 2 and precalculus courses. I graduated from university in Australia with a Bachelor
of Science degree, with a double major in physics and mathematics. I taught myself logic when a student needed help with it.
11 Subjects: including precalculus, calculus, physics, SAT math
I have taught math for over 5 years! Many of my students are from grade 2 to 12, some are from college. I also have a math tutor certificate for college students from Pasadena City College. I
graduated in 2012 from UCLA.
7 Subjects: including precalculus, geometry, algebra 2, trigonometry
...Often just a few sessions are enough to help students to get back on track, give them confidence, reduce stress and improve their grades. I have a lot of patience and a true passion for
tutoring math. I usually tutor from my home office in Dana Point, but I am willing to drive to a more convenient location.
5 Subjects: including precalculus, algebra 1, algebra 2, trigonometry
...Among these classrooms, I tutored in Advanced Placement (AP) classrooms, Special Day Classes (SDC), regular classrooms, and honors classes. At this same school, I also was an academic coach who
provided individualized instruction for students who attended afterschool tutoring in the library. I ...
16 Subjects: including precalculus, reading, English, writing
One of the shortcomings of regression (both linear and logistic) is that it doesn't handle categorical variables with a very large number of possible values (for example, postal codes). You can get
around this, of course, by going to another modeling technique, such as Naive Bayes; however, you lose some of the advantages of regression. Related posts:
My Favorite Graphs
The important criterion for a graph is not simply how fast we can see a result; rather it is whether through the use of the graph we can see something that would have been harder to see otherwise or
that could not have been seen at all. – William Cleveland, The Elements of Graphing Data. Related posts:
Example 9.14: confidence intervals for logistic regression models
Recently a student asked about the difference between confint() and confint.default() functions, both available in the MASS library to calculate confidence intervals from logistic regression models.
The following example demonstrates that they yield d...
An Intuitive Approach to ROC Curves (with SAS & R)
I developed the following schematic (with annotations) based on supporting documents (link) from the article cited below. The authors used R for their work. The ROC curve in my schematic was output
from PROC LOGISTIC in SAS, the scatterplot with m...
Learn Logistic Regression (and beyond)
One of the current best tools in the machine learning toolbox is the 1930s statistical technique called logistic regression. We explain how to add professional quality logistic regression to your
analytic repertoire and describe a bit beyond that. A statistical analyst working on data tends to deliberately start simple and move cautiously to more complicated methods. Related posts:
Example 8.15: Firth logistic regression
In logistic regression, when the outcome has low (or high) prevalence, or when there are several interacted categorical predictors, it can happen that for some combination of the predictors, all the
observations have the same event status. A similar e...
Example 8.8: more Hosmer and Lemeshow
This is a special R-only entry.In Example 8.7, we showed the Hosmer and Lemeshow goodness-of-fit test. Today we demonstrate more advanced computational approaches for the test.If you write a function
for your own use, it hardly matters what it looks l...
Example 8.7: Hosmer and Lemeshow goodness-of-fit
The Hosmer and Lemeshow goodness of fit (GOF) test is a way to assess whether there is evidence for lack of fit in a logistic regression model. Simply put, the test compares the expected and observed
number of events in bins defined by the predicted p...
R Commander – logistic regression
We can use the R Commander GUI to fit logistic regression models with one or more explanatory variables. There are also facilities to plot data and consider model diagnostics. The same series of
menus as for linear models is used to fit a logistic regression model. The "Statistics" menu provides access to various
Confusing slice sampler
Most embarrassingly, Liaosa Xu from Virginia Tech sent the following email almost a month ago and I forgot to reply: I have a question regarding your example 7.11 in your book Introducing Monte Carlo
Methods with R. To further decompose the uniform simulation by sampling a and b step by step, how you determine the
Li(x)
Topic review (newest first)
2012-10-23 20:12:41
Series are of different types. For small values of x we use Taylor series. For large values we use Asymptotic expansions and Series that are expanded around infinity.
1) You could try to evaluate the integral using Romberg or Gaussian integration.
2) You truncate the various series expansions producing smaller approximations.
3) You look up the values in tables.
I have used the second one to come up with an example:
When x>=500.
Read this:
http://numbers.computation.free.fr/Cons … rimes.html
now you will understand why there are only approximate answers.
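To make option (2) concrete, here is a rough sketch that truncates one standard convergent series, li(x) = γ + ln(ln x) + Σ_{n≥1} (ln x)^n / (n·n!), valid for x > 1; it is only an illustration, not the asymptotic expansion mentioned above.

#include <cmath>
#include <cstdio>

double li( double x )                          // assumes x > 1
{
    const double gamma = 0.57721566490153286;  // Euler-Mascheroni constant
    double u = std::log( x );
    double term = 1.0;                         // will hold u^n / n!
    double sum = 0.0;
    for( int n = 1; n < 200; ++n )
    {
        term *= u / n;                         // u^n / n!
        sum += term / n;                       // accumulate u^n / (n * n!)
        if( term / n < 1e-16 ) break;
    }
    return gamma + std::log( u ) + sum;
}

int main()
{
    std::printf( "li(5) ~= %.6f\n", li( 5.0 ) );   // roughly 3.63
    return 0;
}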
2012-10-23 20:07:52
I just want to know how to find li(x),but you could show an example of your own.
2012-10-23 19:31:47
Yes, but first I need that information from you.
1) Li(x), how big is x?
2)How many digits do you require because the answer is going to be in floating point.
2012-10-23 19:28:15
Could you show me some
2012-10-23 19:20:45
Depends on the arguments or how much accuracy you need. There are several ways for each.
2012-10-23 19:10:39
That series is a little complex,isn't there a easier way?
2012-10-23 05:55:38
You can use the series given at http://en.wikipedia.org/wiki/Logarithmi … esentation
2012-10-23 02:40:17
Hi Mint;
Welcome to the forum.
2012-10-23 02:08:45
2012-10-23 01:21:33
What is li(x)?I know it is logarithmic integral function,but how do i find li(x),for instance,what will be li(5)?
Placed User Comments
rajesh dolai
2 Days ago
plz send me ibm plaement paper english..
2 Days ago
cleared ibm 2 rounds
thank you m4maths.com
2 Days ago
Thanks to m4maths.I got placed in IBM.Awsome work.Best of luck.
Lekshmi Narasimman MN
8 Days ago
Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u all a good news :)
Thanks to almighty too :) !!
abhinay yadav
13 Days ago
thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same as given here.
bcz of practice i finished my written test 15 minutes before and got it.
thanx allot for such noble work...
18 Days ago
coz of this site i cud clear IBM's apti nd finally got placed in tcs
thanx m4maths...u r a wonderful site :)
20 Days ago
thank u m4maths and all its user for posting gud and sensible answers.
Nilesh singh
23 Days ago
finally selected in TCS. thanks m4maths
25 Days ago
Thank you team m4maths.Successfully placed in TCS.
Deepika Maurya
25 Days ago
Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !!
Rimi Das
1 Month ago
Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site for placement
Stephen raj
1 Month ago
prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it.
Stephen raj
1 Month ago
Thanks to m4maths:)
cracked infosys:)
1 Month ago
i have been Selected in Tech Mahindra.
All the quanti & reasoning questions are common from the placement papers of m4maths.
So a big thanks to m4maths team & the people who shares the placement papers.
Amit Das
1 Month ago
I got selected for interview in TCS.Thank you very much m4maths.com.
1 Month ago
I got placed in TCS :)
Thanks a lot m4maths :)
Syed Ishtiaq
1 Month ago
An Awesome site for TCS.
Cleared the aptitude.
1 Month ago
I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com
plz guide for the technical round.
mounika devi mamidibathula
1 Month ago
got placed in IBM..
this site is very useful, many questions repeated..
thanks alot to m4maths.com
Anisha Lakhmani
1 Month ago
I got placed at infosys.......thanx to m4maths.com.......a awesum site......
Anisha Lakhmani
1 Month ago
I got placed at infosys.......thanx to m4maths.com.......a awesum site......
Kusuma Saddala
1 Month ago
Thanks to m4maths, i have place at IBM on feb 8th of this month
2 Months ago
thanks to m4 maths because of this i clear csc written test
mahima srivastava
2 Months ago
Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site.
Surya Narayana K
2 Months ago
I successfully cleared TCS aptitude test.Thanks a lot m4maths.com.
Surya Narayana K
2 Months ago
I successfully cleared TCS aptitude test.Thanks a lot m4maths.com.
prashant gaurav
2 Months ago
Got Placed In Infosys...
Thanks of m4maths....
3 Months ago
iam not placed in TCS...........bt still m4maths is a good site.
4 Months ago
Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a lotttttt............
4 Months ago
THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even reading the whole
question completely.. gr8 work m4maths.. keep it up.
5 Months ago
m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus drives that happened
in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not only for TCS but for all
other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :)
5 Months ago
THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now.
5 Months ago
Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and 1000000000000s of thanks to this
website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website.
6 Months ago
2 days before i cleared written test just because of m4maths.com.thanks a lot for this community.
6 Months ago
thanks for m4maths!!! bcz of which i cleared apti of infosys today.
6 Months ago
Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths.
No words to praise m4maths.so i simply said thanks a lot.
7 Months ago
I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am very proud to say that I
have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle very easily making people
like us to be successful. Thanks a lotttt
Abhishek Ranjan
7 Months ago
me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :)
do practice n u'll surely succeed :)
Sandhya Pallapu
1 year ago
Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site helped me a lot in achieving
this...thanks to M4MATHS!
vivek singh
2 years ago
I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through TCS......special thanks to
admin for creating such a superb community
Manish Raj
2 years ago
this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE
Asif Neyaz
2 years ago
Thanku M4maths..due to u only, imade to TCS :D test on sep 15.
Harini Reddy
2 years ago
Big thanks to m4maths.com.
I cracked TCS..The solutions given were very helpful!!!
2 years ago
HI everyone ,
me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been able to place in TCS... and thanks
to all the users who clearly solved the problems.. im very greatful to you :)
2 years ago
Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially...
2 years ago
Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that
the success rate of M4Math grow exponentially.
Again Thanx for all support given by M4Math during
my preparation for TCS.
and Best of LUCK for all students for their preparation.
2 years ago
thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :)
2 years ago
thousands of thnx to m4maths...
got selected in tcs for u only...
u were the only guide n i hv nvr done group study for TCS really feeling great...
thnx to all the users n team of m4maths...
3 cheers for m4maths
2 years ago
thousands of thnx to m4maths...
got selected in tcs for u only...
u were the only guide n i hv nvr done group study for TCS really feeling great...
thnx to all the users n team of m4maths...
3 cheers for m4maths
2 years ago
Thank U ...I'm placed in TCS.....
Continue this g8 work
2 years ago
thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus
raghu nandan
2 years ago
thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :)
V.V.Ravi Teja
3 years ago
thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........
Veer Bahadur Gupta
3 years ago
got placed in TCS ...
thanku m4maths...
Amulya Punjabi
3 years ago
Hi All,
Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the help of m4maths.......it's just an
osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students.
Anusha Alva
3 years ago
a big thnks to this site...got placed in TCS!!!!!!
Oindrila Majumder
3 years ago
thanks a lot m4math.. placed in TCS
Pushpesh Kashyap
3 years ago
superb site, i cracked tcs
Saurabh Bamnia
3 years ago
Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME.........
Gautam Kumar
3 years ago
it was really useful 4 me.................n finally i managed to get through TCS
Karthik Sr Sr
3 years ago
i like to thank m4maths, it was very useful and i got placed in tcs
Abhishek Ranjan 7 Months ago me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :) IT'S VERY VERY VERY HELPFUL N IMPORTANT SITE. do practice
n u'll surely succeed :)
Sandhya Pallapu 1 year ago Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site
helped me a lot in achieving this...thanks to M4MATHS!
vivek singh 2 years ago I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through
TCS......special thanks to admin for creating such a superb community
Manish Raj 2 years ago this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE
Asif Neyaz 2 years ago Thanku M4maths..due to u only, imade to TCS :D test on sep 15.
Harini Reddy 2 years ago Big thanks to m4maths.com. I cracked TCS..The solutions given were very helpful!!!
portia 2 years ago HI everyone , me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been
able to place in TCS... and thanks to all the users who clearly solved the problems.. im very greatful to you :)
vasanthi 2 years ago Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially...
vijay 2 years ago Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that the success rate of M4Math grow exponentially. Again Thanx for all support given by
M4Math during my preparation for TCS. and Best of LUCK for all students for their preparation.
maheswari 2 years ago thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :)
GIRISH 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of
m4maths... 3 cheers for m4maths
Aswath 2 years ago Thank U ...I'm placed in TCS..... Continue this g8 work
JYOTHI 2 years ago thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus
raghu nandan 2 years ago thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :)
V.V.Ravi Teja 3 years ago thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........
Veer Bahadur Gupta 3 years ago got placed in TCS ... thanku m4maths...
Amulya Punjabi 3 years ago Hi All, Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the
help of m4maths.......it's just an osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students.
Anusha Alva 3 years ago a big thnks to this site...got placed in TCS!!!!!!
Oindrila Majumder 3 years ago thanks a lot m4math.. placed in TCS
Pushpesh Kashyap 3 years ago superb site, i cracked tcs
Saurabh Bamnia 3 years ago Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME.........
Gautam Kumar 3 years ago it was really useful 4 me.................n finally i managed to get through TCS
Karthik Sr Sr 3 years ago i like to thank m4maths, it was very useful and i got placed in tcs
Maths Quotes
"Pure mathematics is the world's best game. It is more absorbing than chess, more of a gamble than poker, and lasts longer than Monopoly. It's free. It can be played anywhere - Archimedes did it in a
bathtub." Richard J. Trudeau, Dots and Lines
"Mathematics is the heart of the World" SrinivasuluReddy
"Old math teachers never die, they just tend to infinity." Unknown
"GREAT PEOPLE LOVES MATHS . TO BE A GREAT PERSON YOU SHOULD LOVE MATHS " NAREN
"mathematics never dies because it always stays in the minds and souls of people who love mathematics. Those people will not let maths die instead they will die................." D . kalyan kumar
"Don't learn Mathematics just to prove that you are not a mentaly simple person but learn it to prove that you are enough intelligent " LA
"Which is more difficult ? solve a mathematics? or Mathematics to solve?" G. C
Latest Placement Puzzles
"greatest value of 9cos^2x+4sin^2x is"
UnsolvedAsked In: cognizant
"if 9cosx+12sinx=15
find the value of cotx"
UnsolvedAsked In: Campus
"The height of a conical tent is 14 m and its floor
area is 346.5 m2. The length of 1.1 m wide canvas
required to built the tent is"
UnsolvedAsked In: ACIO
"Pure mathematics is the world's best game. It is more absorbing than chess, more of a gamble than poker, and lasts longer than Monopoly. It's free. It can be played anywhere - Archimedes did it in a
bathtub." Richard J. Trudeau, Dots and Lines
"Old math teachers never die, they just tend to infinity." Unknown
"GREAT PEOPLE LOVES MATHS . TO BE A GREAT PERSON YOU SHOULD LOVE MATHS " NAREN
"mathematics never dies because it always stays in the minds and souls of people who love mathematics. Those people will not let maths die instead they will die................." D . kalyan kumar
"Don't learn Mathematics just to prove that you are not a mentaly simple person but learn it to prove that you are enough intelligent " LA
"Which is more difficult ? solve a mathematics? or Mathematics to solve?" G. C
"if 9cosx+12sinx=15 find the value of cotx" UnsolvedAsked In: Campus
"The height of a conical tent is 14 m and its floor area is 346.5 m2. The length of 1.1 m wide canvas required to built the tent is" UnsolvedAsked In: ACIO
HR Interview (163) Ibm (22) Infosys (28) Tcs (21)
Logical Reasoning (1)
Coding Decoding (1)
Numerical Ability (4)
Time and Work (1)
Verbal Ability (8)
Miscellaneous (2)
Here you can share maths puzzles, comments and their answers, which helps you to learn and understand each puzzle's answer in detail. If you have any specific puzzle which is not on the site, use
"Ask Puzzle" (2nd tab on the left side). KEEP AN EYE: By this feature you can bookmark your Favorite Puzzles and trace these puzzles easily in your next visit. Click here to go your Keep an EYE (0)
puzzles. If you face any interview then please submit your interview experience here and also you can read others sucess story and experience InterView Experience(19).
PyX — Example: graphstyles/density.py
Drawing a density plot
from pyx import *

# Mandelbrot calculation contributed by Stephen Phillips

# Mandelbrot parameters
re_min = -2
re_max = 0.5
im_min = -1.25
im_max = 1.25
gridx = 100
gridy = 100
max_iter = 10

# Set-up
re_step = (re_max - re_min) / gridx
im_step = (im_max - im_min) / gridy
d = []

# Compute fractal
for re_index in range(gridx):
    re = re_min + re_step * (re_index + 0.5)
    for im_index in range(gridy):
        im = im_min + im_step * (im_index + 0.5)
        c = complex(re, im)
        n = 0
        z = complex(0, 0)
        while n < max_iter and abs(z) < 2:
            z = (z * z) + c
            n += 1
        d.append([re, im, n])

# Plot graph
g = graph.graphxy(height=8, width=8,
                  x=graph.axis.linear(min=re_min, max=re_max, title=r"$\Re(c)$"),
                  y=graph.axis.linear(min=im_min, max=im_max, title=r"$\Im(c)$"))
# Density style with an RGB gradient; HSB gradients are not supported by the
# bitmap encoding (see the note below).
g.plot(graph.data.points(d, x=1, y=2, color=3, title="iterations"),
       [graph.style.density(gradient=color.rgbgradient.Rainbow)])
g.writeEPSfile("density")
g.writePDFfile("density")
Two-dimensional plots where the value of each point is represented by a color can be created by the density style. The data points have to be spaced equidistantly in each dimension, with the possible exception of missing data.
For data which is not equidistantly spaced but still arranged in a grid, graph.style.surface can be used, which also provides a smooth representation by means of a color interpolation between the mesh points. Finally, for completely unstructured data, graph.style.rect can be used.
The plot is encoded in an efficient way using a bitmap. Unfortunately, this means that the HSB color space cannot be used (due to limitations of the bitmap color spaces in PostScript and PDF). Some of the predefined gradients in PyX, e.g. color.gradient.Rainbow, cannot be used here. As a workaround, PyX provides those gradients in other color spaces, as shown in the example.
Mathematics and Statistics News
First Endowed Professor Position in Department
Through a gift by the Dolciani-Halloran Foundation, the Department of Mathematics and Statistics now has its first endowed faculty position, the Mary P. Dolciani Professor position. After an
international search, Dr. Olga Kharlampovich was selected in Spring 2010 as the Mary P. Dolciani Professor of Mathematics. She is currently a tenured professor at McGill University. O. Kharlampovich
is at the top of her field of research in mathematics – group theory.
M. Dolciani was a Hunter alumna and earned her PhD at Cornell University. She also pursued study at the Institute for Advanced Study at Princeton and at the University of London. She joined the
Hunter faculty in 1955 after teaching at Vassar College for eight years. At Hunter, M. Dolciani served as Chairperson of the Department of Mathematics and as Associate Provost, and in 1974 she became
University Dean for Academic Services at the City University of New York. In 1976 she returned to teach at Hunter, continuing until she became very seriously ill. M. Dolciani developed at Hunter the
first multi-media mathematics learning laboratory in CUNY, a laboratory which over the years has grown extensively in size and in the diversity of its operations. It is now known as the Dolciani
Mathematics Learning Center. Mary P. Dolciani directed many NSF institutes and NY State Education Department institutes for mathematics teachers. She published a series of mathematics textbooks that
have been translated into French and Spanish and have sold more than 50 million copies around the world.
The Mary P. Dolciani Professor, Olga Kharlampovich, combines the ability to solve extremely difficult old problems with an inventiveness and creativity for posing and solving new problems. In
collaboration with Alexei Miasnikov, she has solved the famous Tarski Problem in free group logic, posed in the 1940’s by Alfred Tarski, which remained an open problem for over 50 years. She is one
of the world’s leading specialists in algorithmic problems in algebra and in the theory of action of groups on trees, rather new flourishing fields of research. O. Kharlampovich has also demonstrated
an exceptional skill at organizing international conferences, workshops and special topics semesters. She holds the Canadian equivalent of an NSF grant for 2008-2013, and she is editor of the
International Journal of Algebra and Computation. In summary, she has earned an overwhelming international reputation as a leading researcher in group theory. Over the years, O. Kharlampovich has
been invited numerous times to give lectures throughout the U.S. She has collaborated with many researchers connected with the well-known New York Group Theory Seminar, housed at the Graduate Center
of the City University of New York. O. Kharlampovich will join the Hunter faculty in Fall 2011.
2010 Putnam Competition
On December 4, 2010, five of our mathematics majors participated in the annual William Lowell Putnam Mathematical Competition. They were Thomas Flynn, Sharma Goldson, Lamteng Lei, Alexander Taam, and
Stephanie Zarrindast. The Putnam Competition is a six-hour contest for undergraduates administered by the Mathematical Association of America throughout the United States and Canada since 1938. The
five contestants prepared for the competition by meeting regularly with Professor Clayton Petsche of our Department.
New Visiting Professor
For 2010-2011, the Department is host to Visiting Professor Roger S. Pinkham, Professor Emeritus at Stevens Institute of Technology. R. Pinkham earned his PhD from Harvard University; his research
interests are in statistics, probability, numerical analysis, and analysis. He will be giving a few lectures at Hunter during his stay, as well as possibly teaching a special topics course in Spring 2011.
Third Grant Received to Improve Math Teachers in Grades 7-12
Professor Barry Cherkas of our Department is the Project Director for a grant that provides scholarships to 40 NYC mathematics teachers, grades 7-12, specifically targeted to earn an MA in Pure
Mathematics, Applied Mathematics, or Mathematics Education. The grant is a Title IIB Mathematics Science Partnership grant between the NYC Department of Education and Hunter's Department of
Mathematics and Statistics. The grant, awarded by the NY State Education Department, is in the amount of $720,000 for 2010-2013. This is the third in a series of three Title IIB Partnership Grants
received by B. Cherkas. Previous 3-year grants provided scholarships to NYC teachers in grades K-12 to strengthen their math skills. The Grant Administrator, Alan Lichman, was a mathematics staff
developer at the elementary school level. This series of grants is part of the 2001 No Child Left Behind Act of Congress.
2010 Elections to Phi Beta Kappa and Sigma Xi
Three mathematics majors were elected to Phi Beta Kappa in Spring 2010: Kwang Cha, Kathleen McGovern, and Richard Weiss.
Joseph Quinn, a BA/MA mathematics student, was elected to Sigma Xi, The Scientific Research Society.
Summer 2010 Undergraduate Research Programs
Alannah Bennie, a BA/MA mathematics student, was selected to participate in the Summer Math Institute at Cornell University, a residential program whose major focus was an advanced undergraduate
course in analysis. Participants also worked on projects in other areas, in groups of 3–5, in a research-like setting, under the direction of a project supervisor, culminating in a public presentation.
Kathleen McGovern, a double major in mathematics and physics with a concentration in quantitative biology, was selected for the Harvard Medical School / MIT joint summer program in biomedical optics
and photonics. Kathleen was placed in Dr. Seok H. (Andy) Yun’s laboratory for the duration of the program. The Yun Lab focuses on developing new optical technologies and applying them to solve
biological questions and medical problems. To realize this goal, expertise in physics, photonics, and various engineering disciplines is integrated with biomedical needs and curiosities.
Alexander Taam, a BA/MA mathematics student, and Rodney Weiss, a June 2010 graduate with a double major in mathematics and computer science, participated in the 2010 Summer Research Program funded by
an NSF Research Training Grant. The topic was Zeta Functions of Quadratic Fields. The program is for undergraduates at Columbia, CUNY, and NYU to explore and investigate open research problems in
algebraic and analytic number theory. Professor Karen Taylor, 2008-2010 Visiting Assistant Professor at Hunter College, mentored a group of four students, including Alex and Rodney from Hunter.
MathFest 2010
Alannah Bennie, a BA/MA mathematics student, went (cost-free) to MathFest 2010, held in Pittsburgh, PA on August 5-7. MathFest is the annual summer meeting of the Mathematical Association of America
(MAA), consisting of lectures, mini courses, and numerous student activities. Alannah was given this opportunity because during 2009-2010, she administered the National Problem Solving Competition
held locally at Hunter College. She posted problems, corrected solutions submitted, and advertised the local competition. The winner of the contest at Hunter was Shi Lin Su, a Teacher Academy BA/MA
mathematics student, but she was unable to attend MathFest.
Winner of 2010 Mina S. Rees Scholarship in Sciences & Math
Joseph Quinn, a BA/MA mathematics student, was selected by Hunter’s Office of the Dean of Arts and Sciences to receive the 2010 Mina S. Rees Scholarship in Sciences and Mathematics. This prestigious
award is given to graduate students, enrolled or about to enter a CUNY Ph.D. program, who show promise of scientific professionalism, potential as a teacher, and breadth of intellectual interest.
Joseph began his doctoral work in mathematics at the CUNY Graduate Center in Fall 2010.
Mina S. Rees (1902 - 1997) was a Hunter alumna with a Ph.D. in mathematics from the University of Chicago who began her teaching career in 1926 as a member of the Hunter College Department of
Mathematics. In addition to her more than 35 years at the City University, M. Rees served with the U.S. Office of Scientific Research and Development during World War II. She was acclaimed for the
important role she played in mobilizing the resources of modern mathematics for the national defense during World War II, for helping to direct the enormous growth and diversification of mathematical
studies after the war, for her influence in initiating federal government support for the development of the earliest computers, and for helping to shape national policy for all basic sciences and
for graduate education. In 1969, she became the first president of the CUNY Graduate Center, serving until her retirement in September 1972. The Graduate Center Library has been named in honor of
Mina S. Rees.
2009 Putnam Competition
Three of our mathematics majors participated in the 70th annual William Lowell Putnam Mathematical Competition on December 5, 2009. They were Sharma Goldson, Ze Li, and Alexander Taam. The Putnam
Competition is a six-hour contest for undergraduates administered by the Mathematical Association of America throughout the United States and Canada since 1938. The three contestants prepared for the
competition by meeting regularly with Professor Clayton Petsche of our Department.
NSF Grant Awarded to Faculty Member
Assistant Professor Clayton Petsche of our Department received a prestigious and competitive NSF grant in the amount of $120,343 for a project entitled "Algebraic Dynamics over Global Fields:
Geometric and Analytic Methods." This award was effective August 1, 2009 and expires July 31, 2012.
MathFest 2009
Jordi Navarrette, a BA/MA mathematics student, participated (cost-free) in MathFest 2009, held in Portland, Oregon on August 6-8. MathFest is the annual summer meeting of the Mathematical Association
of America (MAA), consisting of lectures, mini courses, and numerous student activities. Jordi had this opportunity because he posted problems for the National Problem Solving Competition held
locally at Hunter College during 2008-2009, corrected solutions submitted, and helped publicize the contest. Sharma Goldson, a mathematics major, was the winner of the local competition, but he was
unable to go to MathFest.
Summer 2009 Industrial Modeling Workshop
Mario Morales, one of our graduate statistics students, was accepted at the Summer 2009 Industrial Mathematical & Statistical Modeling (IMSM) Workshop for Graduate Students. The objective is to
expose graduate students in mathematics, engineering, and statistics to challenging and exciting real-world problems arising in industrial and government laboratory research. Students get experience
in the team approach to problem solving. Mario selected the topic "Dosing predictions for the anticoagulant Warfarin.”
Student in NSF Funded Summer 2009 Research Program
Joseph Quinn, one of our mathematics majors, was selected for the undergraduate Summer 2009 program of the recently NSF funded multi-year Research Training Group (RTG) in Number Theory taking place
jointly at the three universities: Columbia, CUNY, and NYU. The program is for undergraduates at Columbia, CUNY, and NYU to explore and investigate open research problems in algebraic and analytic
number theory. The program takes place at the CUNY Graduate Center. The purpose of the RTG in Number Theory is to make the New York metropolitan area a premier world center and model for the study of
number theory.
Putnam Competition Winner
Three mathematics majors participated in the 69th annual William Lowell Putnam Mathematical Competition on December 6, 2008: Arkady Etkin, Sharma Goldson, and Jordi Navarrette. The Putnam Competition
is a six-hour contest for undergraduates administered by the Mathematical Association of America throughout the United States and Canada since 1938. The contestant Arkady Etkin is listed among the
top participants (those ranked 1-473). Arkady is the only student in CUNY with this achievement. The rankings go from 1 to 2771.5. There were 3627 contestants. For several months before the
competition, our three students worked on problems together with Nikita Miasnikov, a doctoral student at the CUNY Graduate Center and an adjunct lecturer in CUNY.
New NSF Scholarships for Hunter Students
The Catalyst Scholarship Program has been established at Hunter with an award from the National Science Foundation, shared among the departments of Geography, Computer Science, Mathematics and
Statistics, and Physics. The main objectives are to recruit, mentor and support talented students majoring in established or emerging fields within science, technology, engineering, and mathematics
(STEM) through degree completion. The program awards each recipient a $6,475 annual scholarship, renewable for a period of two years. The plan is to have a cohort of 20 students for the period Fall
2009-Spring 2011 and another cohort of 20 students for the period Fall 2011-Spring 2013. Information about the program and an application form are available at http://www.hunter.cuny.edu/catalyst.
New Computer Lab BioSAM
The BioStatistics and Applied Mathematics (BioSAM) Computer Lab is located in room 930 HE. While the BioSAM lab was created in connection with Hunter College's unique interdisciplinary program in
quantitative biology (www.hunter.cuny.edu/qubi), it is open for use by all students in our Department, both undergraduate and graduate, as well as by any other member of the Department. The lab is
equipped with six high-end Dell Precision workstations that users may access both locally at the terminals and remotely via a secure shell connection (SSH).
New Programs in Bioinformatics
In the years since the draft of the human genome was published in 2001, biology has increasingly been evolving from a mainly experimental science performed at the bench to one in which large
databases of information, statistical methods and computer models play a significant role. In order to effectively extract, model and analyze this enormous amount of data, various computational tools
and statistical models are taking rapidly expanding roles in biomedical research.
The Department of Mathematics and Statistics at Hunter now has 1) a new concentration within the mathematics major for Bioinformatics, 2) a new Bioinformatics sequence within the statistics major and
3) a new Bioinformatics track in the master's program in statistics and applied mathematics. Information on the curriculum for these new programs is available on the MAJORS and GRADUATE pages of this
web site, as well as at www.hunter.cuny.edu/qubi. The departmental program directors are Professor Dana Draghicescu for the undergraduate concentration and Professor Ronald Neath for the graduate track.
Kolchin Seminar in Differential Algebra
The Kolchin Seminar in Differential Algebra meets most Fridays at 10:30am at the Graduate Center. For the latest information, please visit the Kolchin Seminar web site at http://www.sci.ccny.cuny.edu
Student Research Publication
Yevgeniy Milman, a math BA/MA graduate of June 2009, while still an undergraduate, co-authored a paper, "A Buckling Problem for Graphene Sheets," resulting from his Summer 2007 Research Experiences
for Undergraduates (REU) program at the University of Akron. The paper appeared in Proceedings of CMDS 11, the 11th International Symposium on Continuum Models and Discrete Systems, 2007. Yevgeniy
gave a lecture on the paper at Hunter in October 2007. (See the LECTURES page.)
Two Consecutive NSF Scholarship Grants, 2001-2007
Hunter College received two consecutive NSF grants for scholarships in computer science and mathematics for 2001-2003 and 2003-2007. Scholarship recipients were awarded a stipend and had the
opportunity to participate in research activities mentored by full-time faculty and present their ongoing work at monthly seminars. The Principal Investigator for both grants was Professor Ada Peluso
of our Department. The grant administrators were Anna Marino and Ronnie Lichman, for the two grants, respectively. Both Anna and Ronnie are Hunter alumnae. For many years, Anna was Director of the
Leona & Marcy Chanin Language Center at Hunter. Ronnie was assistant to several chairpersons of our Department.
Student at International Conference
Joel Dodge, while a graduating senior in Spring 2006, was invited to give a talk at an international conference in Lisbon, Portugal on September 1-4, 2006, co-organized by the Forum for
Interdisciplinary Mathematics, the Department of Mathematics, New University of Lisbon, and the Polytechnic Institute of Tomar, Portugal. Joel was invited because of his participation in Summer 2005
in the Research Experiences for Undergraduates (REU) program sponsored by the department of Mathematical Sciences at the University of North Carolina at Greensboro (UNCG). Joel did research on
algorithmic combinatorics on words.
Past Visiting Professors
Since Spring 2005, the Department has been able to invite professors from other universities to be visiting professors at Hunter for one semester or more. The following have filled this position: 1)
Stuart Margolis, Professor of Mathematics at Bar-Ilan University in Ramat-Gan, Israel, with a Ph.D. from UC Berkeley, and an international reputation for his research that spans the connections among
semigroup theory, geometric group theory, theory of automata and formal languages, logic, and topology; 2) Daniel Pasca, Associate Professor at the University of Bucharest, Romania, whose research is
in differential equations, Hamiltonian systems, and variational methods; 3) Marc A. Scott, Associate Professor in the Department of Humanities and the Social Sciences at New York University, School
of Education, a Hunter alumnus with a Ph.D. in Statistics from New York University; 4) Karen Taylor, Visiting Assistant Professor, with a Ph.D. from Temple University and number theory as her field
of research, who organized a very successful semester-long undergraduate seminar on the Riemann Hypothesis, with the goal of enabling participants to read Riemann’s seminal paper “On the Number of
Prime Numbers less than a Given Quantity.”
Honorary Degree for Robert P. Moses
At Hunter’s January 2005 graduation ceremony, an honorary degree was awarded to Robert P. Moses, the civil rights leader and mathematics educator. As is well-known, Mr. Moses developed the Algebra
Project, which concerns itself with teaching to middle school students a broad range of mathematical skills that are important in gaining access to college and math/science-related careers, as well
as necessary for mathematical literacy for economic access.
Special AMS Session on Occasion of 60th Birthday
At the 2005 Spring Eastern Sectional Meeting of the American Mathematical Society (AMS) there was a Special Session on Homotopy Theory in honor of the 60th birthdays of Martin Bendersky in our
Department and his collaborator Donald M. Davis.
Faculty NSA Grant
Professor Lev Shneerson of our Department has been the recipient of two consecutive National Security Agency (NSA) Mathematical Sciences Research Grants, for 2003-2005 and 2005-2007. His field of
research is semigroup theory. The grant title was “Growth in Semigroup Varieties.”
Chancellor Goldstein Teaches a Course at Hunter
In Fall 2003, the Department was honored to have Dr. Matthew Goldstein, Chancellor of CUNY, as the instructor for a section of MATH 150 Calculus with Analytic Geometry I. The class met on Saturdays.
Recent Retirements
Professor Alvin Baranchik retired in 2003. Al taught statistics and mentored many graduate students for their master's projects. Ms. Mary Small retired in 2004. Before coming to the Department of
Mathematics and Statistics, Mary was with the Department of Academic Skills/SEEK Program. Professor Jane Matthews retired in 2007. Jane contributed greatly to the development of the MA program in
Adolescent Mathematics Education, a program sponsored jointly by the Department of Mathematics & Statistics and the School of Education. Professor Ada Peluso retired in January 2011 after having
served as department chair for eleven years. Ada is a member of the Board of the Hunter College Foundation.
Integrating out of the real domain of a function
OK, but what did you do to transform the integral into that form?
First, you need to assume a specific path, P (though for the ln() function it won't matter as long as we don't go around the origin to get there). Assume it's along the negative real axis. So at all points in the path, z is of the form [itex]re^{i\pi}[/itex].
[itex]\int_P \ln(z)\,dz = \int_P \ln(re^{i\pi})\,d(re^{i\pi}) = e^{i\pi}\int_{r=1}^{0}(\ln(r)+i\pi)\,dr = -\int_{r=1}^{0}(\ln(r)+i\pi)\,dr[/itex]
[itex] = \int_{r=0}^{1}(\ln(r)+i\pi)\,dr = \left[r\ln(r)-r+i\pi r\right]_0^1 = -1+i\pi[/itex]
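A quick numerical sanity check of the same result: parametrize the path as z(r) = r·e^{iπ} with r running from 1 down to 0 and integrate ln(z)·dz/dr over r. The step count and small-r cutoff below are arbitrary illustrative choices.

import numpy as np

# Path from z = -1 to z = 0 along the negative real axis: z(r) = r*exp(i*pi),
# with r running from 1 down to (almost) 0.
r = np.linspace(1.0, 1e-8, 200001)   # stop short of r = 0, where ln(r) diverges (integrably)
z = r * np.exp(1j * np.pi)
integrand = np.log(z)                # principal branch: ln(r) + i*pi on this path
dz_dr = np.exp(1j * np.pi)           # dz/dr is constant along the ray

vals = integrand * dz_dr
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(r))   # trapezoidal rule
print(integral)                      # approximately -1 + 3.14159j, i.e. -1 + i*pi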
VLSI Design
Volume 2011 (2011), Article ID 475952, 17 pages
Research Article
A Methodology for Generation of Performance Models for the Sizing of Analog High-Level Topologies
^1Institute of Radio Physics and Electronics, University of Calcutta, Kolkata 700009, India
^2Indian Institute of Technology, Kharagpur 721302, India
Received 4 March 2011; Accepted 28 May 2011
Academic Editor: Sheldon Tan
Copyright © 2011 Soumya Pandit et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper presents a systematic methodology for the generation of high-level performance models for analog component blocks. The transistor sizes of the circuit-level implementations of the
component blocks along with a set of geometry constraints applied over them define the sample space. A Halton sequence generator is used as a sampling algorithm. Performance data are generated by
simulating each sampled circuit configuration through SPICE. Least squares support vector machine (LS-SVM) is used as a regression function. Optimal values of the model hyper parameters are
determined through a grid search-based technique and a genetic algorithm- (GA-) based technique. The high-level models of the individual component blocks are combined analytically to construct the
high-level model of a complete system. The constructed performance models have been used to implement a GA-based high-level topology sizing process. The advantages of the present methodology are that
the constructed models are accurate with respect to real circuit-level simulation results, fast to evaluate, and have a good generalization ability. In addition, the model construction time is low
and the construction process does not require any detailed knowledge of circuit design. The entire methodology has been demonstrated with a set of numerical results.
1. Introduction
An analog high-level design process is defined as the translation of analog system-level specifications into a proper topology of component blocks, in which the specifications of all the component
blocks are completely determined so that the overall system meets its desired specifications optimally [1–3]. The two important steps of an analog high-level design procedure are high-level topology
generation/selection [4, 5] and high-level specification translation [6]. At the high-level design abstraction, a topology is defined as an interconnection of several analog component blocks such as
amplifier, mixer and filter. The detailed circuit-level implementations of these component blocks are not specified at this level of abstraction. The analog component blocks are represented by their
high-level models.
During the past two decades, many optimization-based approaches have been proposed to handle the task of topology generation/selection [7–11]. These approaches involve the task of topology sizing,
where the specification parameters of all the component blocks of a topology are determined such that the desired system specifications are optimally satisfied. The two important modules for this
type of design methodology are a performance estimation module and an optimization engine. The implementation of the design methodology is based upon the flow of information between these two modules.
The performance models that are used in the high-level design abstraction are referred to as high-level performance models. An analog high-level performance model is a function that estimates the performance of an analog component block when some high-level design parameters of the block are given as inputs [12, 13]. The important requirements for a good high-level performance model are as follows.
(i) The model needs to be low dimensional.
(ii) The predicted results need to be accurate. The model accuracy is measured as the deviation of the model-predicted value from the true function value. The function value in this case is the performance parameter obtained from transistor-level simulation [12].
(iii) The evaluation time must be short. This is measured by the CPU time required to evaluate a model.
(iv) The time required to construct an accurate model must be small, so that the design overhead does not become high. As a rough estimate, the construction cost is measured as the sum of the training-data generation time and the model training time.
There exists a tradeoff between these requirements, since a model with lower prediction error generally takes more time for construction and evaluation.
In this work, we have developed the performance models using least squares support vector machine (LS-SVM) as the regressor. The transistor sizes of the circuit-level implementations of the component
blocks along with a set of geometry constraints applied over them define the sample space. Performance data are generated by simulating each sampled circuit configuration through SPICE. The LS-SVM
hyper parameters are determined through formal optimization-based techniques. The constructed performance models have been used to implement a high-level topology sizing process. The advantages of
this methodology are that the constructed models are accurate with respect to real circuit-level simulation results, fast to evaluate and have a good generalization ability. In addition, the model
construction time is low and the construction process does not require any detailed knowledge of circuit design. The entire methodology has been demonstrated with a set of experimental results.
The rest of the paper is organized as follows. Section 2 reviews some related works. Section 3 presents the background concepts on least squares support vector machines. An outline of the methodology
is provided in Section 4. The model generation methodology is described in detail in Section 5. The topology sizing process is described in Section 6. Numerical results are provided in Section 7 and
finally conclusion is drawn in Section 8.
2. Related Work
A fairly complete survey of related works is given in [14]. An analog performance estimation (APE) tool for high-level synthesis of analog integrated circuits is described in [15, 16]. It takes the
design parameters (e.g., transistor sizes, biasing) of an analog circuit as inputs and determines its performance parameters (e.g., power consumption, thermal noise) along with anticipated sizes of
all the circuit elements. The estimator is fast to evaluate but the accuracy of the estimated results with respect to real circuit-level simulation results is not good. This is because the
performance equations are based on simplified MOS models (SPICE level 1 equations). A power estimation model for ADC using empirical formulae is described in [13]. Although this is fast, the accuracy
with respect to real simulation results under all conditions is off by orders of magnitude. The technique for generation of posynomial equation-based performance estimation models for analog circuits
like op-amps, multistage amplifiers, switch capacitor filters, and so forth, is described in [17, 18]. An important advantage of such a modeling approach is that the topology sizing process can be
formulated as a geometric program, which is easy to solve through very fast techniques. However, there are several limitations of this technique. The derivation of performance equations is often a
manual process, based on simple MOS equations. In addition, although many analog circuit characteristics can be cast in posynomial format, this is not true for all characteristics. For such
characteristics, often an approximate representation is used. An automatic procedure for generation of posynomial models using fitting technique is described in [19, 20]. This technique overcomes
several limitations of the handcrafted posynomial modeling techniques. The models are built from a set of data obtained through SPICE simulations. Therefore, full accuracy of SPICE simulation is
achieved through such performance models. A neural network-based tool for automated power and area estimation is described in [21]. Circuit simulation results are used to train a neural network
model, which is subsequently used as an estimator. Fairly recently, support vector machine (SVM) has been used for modeling of performance parameters for RF and analog circuits [22–24]. In [25],
SVM-optimized by GA has been used to develop a soft fault diagnosis method for analog circuits. In [26], GA and SVM has been used in conjunction for developing feasibility model which is then used
within an evolutionary computation-based optimization framework for analog circuit optimization.
2.1. Comparison with Existing Methodologies
The present methodology uses a nonparametric regression technique for constructing the high-level performance models. Compared with other modeling methodologies employing symbolic analysis techniques or simulation-based techniques, the advantages of the present methodology are as follows. (i) Full accuracy of SPICE simulations and advanced device models, such as BSIM3v3, are used to
generate the performance models. The models are thus accurate compared to real circuit-level simulation results. (ii) There is no need for any a priori knowledge about the unknown dependency between
the inputs and the outputs of the models to be constructed. (iii) The generalization ability of the models is high. (iv) The model construction time is low and the construction process does not
require any detailed circuit design knowledge.
The EsteMate methodology [21] using an artificial neural network (ANN) and the SVM-based methodology discussed in [22, 23] are closely related to the present methodology. The methodology that we have developed, however, has a number of advantages over them. These are as follows.
(1) In the EsteMate methodology, the specification parameters of a component block constitute the sample space for training data generation. The specification parameters are electrical parameters and there exist strong nonlinear correlations amongst them. Therefore, sophisticated sampling strategies are required for constructing models with good generalization ability in the EsteMate methodology. On the other hand, in our method, the transistor sizes along with a set of geometry constraints applied over them define the sample space. Within this sample space, the circuit performance behavior becomes weakly nonlinear. Thus, simple sampling strategies are used in our methodology to construct models with good generalization ability.
(2) In EsteMate, for each sample, a complete circuit sizing task using a global optimization algorithm is required for generation of the training data. This is usually prohibitively time consuming. On the other hand, in our method, simple circuit simulations using the sampled transistor sizes are required for data generation. Therefore, the cost of training data generation in our method is much less than that in the EsteMate methodology [21]. With the EsteMate methodology, the training sample points are generated such that performance metrics such as power are optimized. On the other hand, in our methodology, the task of performance optimization has been considered as a separate issue, isolated from the performance model generation procedure. Our strategy is actually followed in all practical optimization-based high-level design procedures [1, 27].
(3) The generalization ability of the models constructed with our methodology is better than that of models generated through the EsteMate methodology. This is because the latter uses an ANN regression technique. Neural network-based approaches suffer from difficulties with generalization, producing models that can overfit the data. This is a consequence of the optimization algorithms used for parameter selection and the statistical measures used to select the "best" model. The SVM formulation, on the other hand, is based upon the structural risk minimization (SRM) principle [28], which has been shown to be superior to the traditional empirical risk minimization (ERM) principle employed by conventional neural networks. SRM minimizes an upper bound on the expected risk, as opposed to ERM, which minimizes the error on the training data. Therefore, an SVM has greater generalization capability.
(4) The SVM-based methodology, as presented in [23], uses heuristic knowledge to determine the model hyper parameters. The present methodology uses optimization techniques to determine optimal values for them. The GA-based methodology for determining optimal values of the model hyper parameters is found to be faster than the grid search technique employed in [22].
3. Background: Least Squares Support Vector Regression
In recent years, the support vector machine (SVM), as a powerful new tool for data classification and function estimation, has been developed [28]. Suykens and Vandewalle [29] proposed a modified
version of SVM called least squares SVM. In this subsection, we briefly outline the theory behind the LS-SVM as function regressor.
Consider a given set of $N$ training samples $\{(x_k, y_k)\}_{k=1}^{N}$, where $x_k \in \mathbb{R}^n$ is the input value and $y_k \in \mathbb{R}$ is the corresponding target value for the $k$th sample. With an SVR, the relationship between the input vector and the target vector is given as
$$y(x) = w^{T}\varphi(x) + b,$$
where $\varphi(\cdot)$ is the mapping of the vector $x$ to some (probably high-dimensional) feature space, $b$ is the bias, and $w$ is the weight vector of the same dimension as the feature space. The mapping is generally nonlinear, which makes it possible to approximate nonlinear functions. The approximation error for the $k$th sample is defined as
$$e_k = y_k - y(x_k).$$
The minimization of the error together with the regression is given as
$$\min_{w,\,b,\,e}\ J(w, e) = \frac{1}{2}w^{T}w + \frac{\gamma}{2}\sum_{k=1}^{N} e_k^{2}, \quad (4)$$
with equality constraint
$$y_k = w^{T}\varphi(x_k) + b + e_k, \quad k = 1, \ldots, N,$$
where $N$ denotes the total number of training datasets, the suffix $k$ denotes the index of the training set (that is, the $k$th training data), and $\gamma$ is the regularization parameter. The optimization problem (4) is considered to be a constrained optimization problem, and a Lagrange function is used to solve it. Instead of minimizing the primary objective (4), a dual objective, the so-called Lagrangian, is formed, of which the saddle point is the optimum. The Lagrangian for this problem is given as
$$L(w, b, e; \alpha) = J(w, e) - \sum_{k=1}^{N}\alpha_k\bigl(w^{T}\varphi(x_k) + b + e_k - y_k\bigr),$$
where the $\alpha_k$'s are called the Lagrangian multipliers. The saddle point is found by setting the derivatives equal to zero:
$$\frac{\partial L}{\partial w} = 0, \quad \frac{\partial L}{\partial b} = 0, \quad \frac{\partial L}{\partial e_k} = 0, \quad \frac{\partial L}{\partial \alpha_k} = 0.$$
By eliminating $w$ and $e$ through substitution, the final model is expressed as a weighted linear combination of the inner product between the training points and a new test object. The output is given as
$$y(x) = \sum_{k=1}^{N}\alpha_k K(x, x_k) + b,$$
where $K(x, x_k) = \varphi(x)^{T}\varphi(x_k)$ is the kernel function. The elegance of using the kernel function lies in the fact that one can deal with feature spaces of arbitrary dimensionality without having to compute the map $\varphi$ explicitly. Any function that satisfies Mercer's condition can be used as the kernel function. The Gaussian kernel function used in the present work is defined as
$$K(x, x_k) = \exp\!\left(-\frac{\lVert x - x_k \rVert^{2}}{2\sigma^{2}}\right)$$
and is commonly used; here $\sigma$ denotes the kernel bandwidth. The two important parameters, the kernel parameter $\sigma$ and the regularization parameter $\gamma$ defined in (4), are referred to as hyper parameters. The values of these parameters have to be determined critically in order to make the network efficient.
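Training an LS-SVM therefore reduces to solving a single linear system in the multipliers and the bias. A minimal sketch of such a Gaussian-kernel regressor is given below; the toy data, variable names, and hyper parameter values are illustrative only and are not taken from the experiments reported later.

import numpy as np

def gaussian_kernel(X1, X2, sigma):
    # K[i, j] = exp(-||x1_i - x2_j||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    N = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xtest, Xtrain, alpha, b, sigma):
    return gaussian_kernel(Xtest, Xtrain, sigma) @ alpha + b

# Toy example: learn a smooth 1-D performance curve from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(60)
alpha, b = lssvm_fit(X, y, gamma=100.0, sigma=0.2)
Xg = np.linspace(0, 1, 5)[:, None]
print(lssvm_predict(Xg, X, alpha, b, sigma=0.2))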
4. An Outline of the Methodology
The high-level performance model of an analog component block is mathematically represented as $P = f(S)$ (10), where $P$ is a set of performance parameters and $S$ is a set of specification parameters. The input
specification parameters are referred to as the high-level design parameters. It is to be noted that out of various possible specification parameters, only the dominant parameters are to be
considered as inputs. The selection of these is based upon the designer's knowledge [12]. These high-level design parameters describe a space referred to as the sample space. This sample space is
explored to extract sample points through suitable algorithms. The numerical values of the sample points (both inputs and outputs of the performance model to be constructed) are generated through
SPICE simulations. The data points so generated are divided into two sets, referred to as the training set and the test set. A least squares SVM network approximating a performance model is
constructed by training the network with the training set. The test dataset is used to validate the SVM model. Suitable kernel functions are selected for constructing the SVM. An initial SVM model is
constructed through some initial values of the hyper parameters. An iterative process is then executed to construct the final LS-SVM so as to maximize its efficiency through optimal determination of
the hyper parameters. An outline of the process for constructing the performance model of a single component block is illustrated in Figure 1(a).
For a complex system, consisting of many component blocks, the high-level performance model of the complete system is constructed at the second level of hierarchy, where the high-level models of the
individual component blocks are combined analytically (see Figure 1(b)). The constructed performance models are used to implement a high-level topology sizing process. For a given unsized high-level
topology of an analog system, the topology parameters (which are the specification parameters of the individual blocks of the high-level topology) are determined such that the desired design goals
are satisfied. The entire operation is performed within an optimization procedure, which in the present work is implemented through GA. The constructed LS-SVM models are used within the GA loop. An
outline of the sizing methodology is shown in Figure 1(c).
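A rough sketch of such a GA-based sizing loop is given below; the surrogate cost function, specification bounds, and GA settings are placeholders rather than the actual configuration used in the experiments.

import numpy as np

def ga_size_topology(surrogate, bounds, pop_size=40, generations=60, rng=None):
    """Minimize surrogate(spec_vector) over box bounds with a simple real-coded GA."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        cost = np.array([surrogate(p) for p in pop])
        order = np.argsort(cost)
        parents = pop[order[:pop_size // 2]]           # truncation selection
        # Uniform crossover + Gaussian mutation, clipped to the bounds.
        idx = rng.integers(0, parents.shape[0], size=(pop_size, 2))
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += 0.05 * (hi - lo) * rng.standard_normal((pop_size, dim))
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                            # elitism: keep the best so far
    cost = np.array([surrogate(p) for p in pop])
    return pop[np.argmin(cost)], cost.min()

# Example with a stand-in surrogate (a real run would call the LS-SVM performance models).
best, best_cost = ga_size_topology(lambda s: np.sum((s - 0.3) ** 2),
                                   bounds=[(0, 1)] * 4)
print(best, best_cost)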
The following two important points may be noted in connection with the present methodology. First, the high-level performance model of a complete system is generated in a hierarchical manner. The
major advantage of this hierarchical approach is reusability of the high-level model of the individual component blocks. The high-level model of the component blocks can be utilized whenever the
corresponding component blocks are part of a system, provided the functionality and performance constraints are identical. This generally happens. The issue of reusability of the component block
level high-level models is demonstrated in Experiment 3, provided later. However, this advantage comes at the cost of reduced accuracy of the model of the complete system. This tradeoff is a general
phenomenon in the analog design automation process. It may, however, be noted that it is possible to construct the high-level performance model of a complete system using the regression technique discussed here; for some customized applications, this may be done. Second, the requirement of low dimensionality of the models must be carefully taken care of. The scalability of our approach to model generation is not high compared to an analytical approach. However, compared to other black-box approaches such as ANN-based ones, the scalability of our SVM-based approach is high. In addition, many of the global optimization algorithms suffer from the problem of "curse of dimensionality." For a topology sizing procedure employing a high-dimensional model, the design space in which to search for optimal design points becomes too large to be handled by simple optimization algorithms. Therefore, while selecting the inputs of the model, only the dominant specification parameters need to be considered.
The detailed operations of each of the steps outlined above are discussed in the following sections and subsections.
5. High-Level Performance Model Generation
In this section, we describe the various steps of the performance model generation procedure in detail.
5.1. Sample Space Definition, Data Generation, and Scaling
In (10), both $P$ and $S$ are taken to be functions of a set of geometry parameters (transistor sizes) $D$ of a component block, expressed as $P = f_P(D)$ and $S = f_S(D)$, where $f_P$ and $f_S$ represent the mapping of the geometry parameters to electrical parameters. This is illustrated in Figure 2. The multidimensional space spanned by the elements of the set $D$ is defined as the circuit-level design space. The sample space is a subspace within it (see Figure 3), defined through a set of geometry constraints. These geometry constraints include equality constraints as well as inequality constraints. For example, for matching purposes, the sizes of a differential pair's transistors are equal. The inequality constraints are determined by the feature size of a technology and by the condition that the transistors are not excessively large. With elementary algebraic transformations, all the geometry constraints are combined into a single nonlinear vector inequality $g(D) \le 0$, which is interpreted element wise. Within this sample space, the circuit performance
behavior becomes weakly nonlinear [27, 30]. Therefore, simple sampling strategies are used to construct models with good generalization ability. In the present work, the sample points are extracted
through Halton sequence generation. This is a quasirandom number generator which generates a set of uniformly distributed random points in the sample space [31]. This ensures a uniform and unbiased
representation of the sample space. The number of training samples plays an important role in determining the efficiency of the constructed LS-SVM model. Utilizing a separate algorithm, it is possible
to determine an optimum size of the training set, such that models built with a smaller training set than this optimum have lower accuracy, while models built with a larger training set show no
significantly higher accuracy. However, in the present work, in order to keep the sampling procedure simple, the number of samples is fixed and is determined through a trial and error method.
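For illustration, the following short Python sketch (not part of the original MATLAB implementation) shows how a Halton sequence can be used to draw uniformly spread sample points from a box-shaped sample space. The parameter bounds and the number of dimensions are made-up placeholders; the actual sample space is defined by the geometry constraints described above.

import numpy as np

def halton(index, base):
    # index-th element (index >= 1) of the van der Corput sequence in the given base
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_samples(n_samples, lower, upper, bases=(2, 3, 5, 7, 11, 13)):
    # Quasirandom, uniformly spread points in the box [lower, upper];
    # one distinct prime base per dimension.
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pts = np.array([[halton(i, bases[d]) for d in range(dim)]
                    for i in range(1, n_samples + 1)])
    return lower + pts * (upper - lower)

# Example: 5000 samples of four transistor widths between 0.18 um and 100 um
# (placeholder bounds, not the constraints used in the paper).
samples = halton_samples(5000, lower=[0.18e-6] * 4, upper=[100e-6] * 4)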
The training data generation process is outlined in Figure 4. For each input sample (transistor sizes) extracted from the sample space, the chosen circuit topology of a component block is simulated
using SPICE through the Cadence Spectre tool with the BSIM3v3 model. Depending upon the selected input-output parameters of an estimation function, it is necessary to construct a set of test benches
that provide sufficient data to facilitate automatic extraction of these parameters via postprocessing of the SPICE output files. A set of constraints, referred to as feasibility constraints, is
then applied over the generated data to ensure that only feasible data are taken for training.
The generated input-output data are considered to be feasible if either the data themselves satisfy a set of constraints or the mapping procedures through which they are generated satisfy a set of
constraints. The constraints are as follows [30]. (1) Functionality constraints: these constraints are applied on the measured node voltages and currents and ensure correct functionality of the
circuit; for example, the transistors of a differential pair must work in saturation. (2) Performance constraints: these are applied directly on the input-output parameters, depending upon the
application system; for example, the phase margin of an op-amp must be greater than 45°.
The total set of constraints for feasibility checking is thus the union of the functionality and performance constraints. It is to be noted that through the process of feasibility checking, various
simulation data are discarded. At a glance, this may give an impression of wasted and costly simulation time. However, for an analog designer (who is a user of the model), this is an important
advantage, because infeasible data points will never appear as solutions whenever the model is used for design characterization or optimization. Even from the model developer's perspective, this is
not a serious matter, considering that the construction process is in general a one-time process [24]. The feasibility constraints remain invariant if the performance objectives are changed, and even
if the design migrates by a small amount, these constraints usually do not change [27]. This, however, demands an efficient determination of the feasibility constraints.
Data scaling is an essential step to improve the learning/training process of SVMs. The data of the input and/or output parameters are scaled. The commonly suggested scaling schemes are linear
scaling, log scaling, and two-sided log scaling. The present methodology employs both linear and logarithmic scaling, depending upon the parameter concerned; each unscaled data value of a parameter,
bounded within a known interval, is mapped into a common target interval using the standard linear or logarithmic scaling formulas of [32]. Linear scaling of data balances the ranges of different
inputs or outputs. Applying a log scale to data with large variations balances large and small magnitudes of the same parameter in different regions of the model.
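As an illustration of the scaling step, the sketch below implements the standard linear and logarithmic scaling of a bounded parameter into a target interval; the exact formulas of [32] are not reproduced here, and the interval end points are assumptions.

import numpy as np

def linear_scale(x, x_min, x_max, lo=0.0, hi=1.0):
    # Map x in [x_min, x_max] linearly onto [lo, hi]; balances the ranges of different parameters.
    return lo + (hi - lo) * (x - x_min) / (x_max - x_min)

def log_scale(x, x_min, x_max, lo=0.0, hi=1.0):
    # Logarithmic scaling for strictly positive data spanning several orders of magnitude.
    return lo + (hi - lo) * (np.log10(x) - np.log10(x_min)) / (np.log10(x_max) - np.log10(x_min))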
5.2. LS-SVM Construction
In this subsection, we discuss the various issues related to the construction of the LS-SVM regressor.
5.2.1. Choice of Kernel Function
The first step in the construction of an LS-SVM model is the selection of an appropriate kernel function. For the choice of kernel function, there are several alternatives. Some of the commonly used
functions are listed in Table 1; the constants appearing in them are referred to as hyper parameters. In general, in any classification or regression problem, if the hyper parameters of the model are
not well selected, the predicted results will not be good enough. Optimum values for these parameters therefore need to be determined through a proper tuning method. Note that the Mercer condition
holds for all hyper parameter values in the radial basis function (RBF) and polynomial cases, but not for all possible choices in the multilayer perceptron (MLP) case. Therefore, the MLP kernel will
not be considered in this work.
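To make the role of the kernel and of the hyper parameters concrete, the following Python sketch trains and evaluates an LS-SVM regressor with an RBF kernel by solving the usual LS-SVM linear system (see [29]). It is only a minimal stand-in for the LS-SVMlab toolbox actually used by the authors; the symbols gamma (regularization) and sigma (RBF width) follow the common LS-SVM notation, and the synthetic data at the end are placeholders.

import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Squared Euclidean distances between all pairs of rows, then the RBF kernel.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma, sigma):
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # (alpha, b)

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    # y(x) = sum_i alpha_i K(x, x_i) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Tiny usage example with synthetic data standing in for the scaled SPICE data.
X = np.random.default_rng(0).uniform(0, 1, size=(50, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
alpha, b = lssvm_train(X, y, gamma=100.0, sigma=0.5)
y_hat = lssvm_predict(X, alpha, b, 0.5, X)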
5.2.2. Tuning of Hyper Parameters
As mentioned earlier, when designing an effective LS-SVM model, the hyper parameter values have to be chosen carefully. The regularization parameter (gamma in the usual LS-SVM notation) determines the
tradeoff between minimizing the training error and minimizing the model complexity. The kernel parameter (e.g., the RBF width sigma or the polynomial degree) defines the nonlinear mapping from the
input space to some high-dimensional feature space [33].
Optimal values of the hyper parameters are usually determined by minimizing the estimated generalization error. The generalization error is a function that measures the generalization ability of the
constructed models, that is, the ability to predict correctly the performance of an unknown sample. The techniques used for estimating the generalization error in the present methodology are as
follows. (1) Hold-out method: this is a simple technique for estimating the generalization error. The dataset is separated into two sets, called the training set and the test set. The SVM is
constructed using the training set only and is then tested using the test dataset, whose data are completely unknown to the estimator. The errors it makes are accumulated to give the mean test set
error, which is used to evaluate the model. This method is very fast; however, its evaluation can have a high variance, since it may depend heavily on which data points end up in the training set and
which end up in the test set, and thus may differ significantly depending on how the division is made. (2) k-fold cross-validation method: in this method, the training data is randomly split into k
mutually exclusive subsets (the folds) of approximately equal size [33]. The SVM is constructed using k - 1 of the subsets and then tested on the subset left out. This procedure is repeated k times.
Averaging the test error over the k trials gives an estimate of the expected generalization error. The advantage of this method is that the accuracy of the constructed SVM does not depend upon how
the data gets divided. The variance of the resulting estimate is reduced as k is increased. The disadvantage of this method is that it is time consuming.
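A minimal sketch of the two error estimators is given below. The quantity reported is the average relative error on the held-out data; train_and_predict stands for any model constructor (for example, the LS-SVM sketch above) and is an assumed interface, not the toolbox API.

import numpy as np

def holdout_error(X, y, train_and_predict, test_fraction=0.3, seed=0):
    # Single random split into training and test sets.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_test = int(test_fraction * len(y))
    test, train = idx[:n_test], idx[n_test:]
    pred = train_and_predict(X[train], y[train], X[test])
    return np.mean(np.abs((pred - y[test]) / y[test]))

def kfold_error(X, y, train_and_predict, k=5, seed=0):
    # k mutually exclusive folds; each fold is used once as the test set.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = train_and_predict(X[train], y[train], X[test])
        errs.append(np.mean(np.abs((pred - y[test]) / y[test])))
    return np.mean(errs)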
Primarily, there are three different approaches for optimal determination of the SVM hyper parameters: the heuristic method, the local search method, and the global search method. The kernel width is
related to the distance between training points and to the smoothness of the interpolation of the model; a heuristic rule discussed in [34] estimates it from the minimum (non-zero) and maximum
distances between two training points. The regularization parameter is determined based upon the tradeoff between the smoothness of the model and its accuracy: the bigger its value, the more
importance is given to the error of the model in the minimization process. Choosing a low value is not suggested when using the exponential RBF to model performances, which are often approximately
linear or weakly quadratic in most input variables. While constructing LS-SVM-based analog performance models, the heuristic method has been applied for determining the hyper parameters in [23]. The
hyper parameters generated through the heuristic method are often found to be suboptimal, as demonstrated in [12]. Therefore, determination of the hyper parameters through a formal optimization
procedure is suggested [33].
The present methodology employs two techniques for selecting optimal values of the model hyper parameters. The first one is a grid search technique and the other one is a genetic algorithm-based
technique. These are explained below considering the RBF as the kernel function; for other kernels, the techniques are applied in the same way.
(1) Grid Search Technique
The basic steps of the grid search-based technique are outlined below.
(1) Consider a grid space of (regularization, kernel) hyper parameter pairs, defined by lower and upper bounds on each parameter. (2) For each pair within the grid space, estimate the generalization
error through the hold-out or k-fold cross-validation technique. (3) Choose the pair that leads to the lowest error. (4) Use the best parameter pair to create the SVM model used as the predictor.
The grid search technique is simple; however, it is computationally expensive, since it is an exhaustive search technique. The accuracy and time cost of the grid method are traded off through the grid
density: in general, as the grid density increases, the computational process becomes more expensive, whereas a sparse grid lowers the accuracy. The grid search is therefore performed in two stages.
In the first stage, a coarse grid search is performed; after identifying a better region on the grid, a finer grid search on that region is conducted in the second stage. In addition, the grid search
process is a tricky task, since a suitable sampling step varies from kernel to kernel and the grid interval may not be easy to locate without prior knowledge of the problem. In the present work, these
parameters are determined through a trial and error method.
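The two-stage search can be sketched as follows; error_fn stands for a generalization-error estimator (hold-out or k-fold) evaluated at a given hyper parameter pair, and the grid bounds and densities are illustrative values, not those used in the experiments.

import numpy as np

def grid_search(gammas, sigmas, error_fn):
    # Exhaustive search over the given grid; returns the best (gamma, sigma, error).
    best = (None, None, float("inf"))
    for g in gammas:
        for s in sigmas:
            err = error_fn(g, s)
            if err < best[2]:
                best = (g, s, err)
    return best

def two_stage_grid_search(error_fn):
    # Stage 1: coarse, log-spaced grid over wide ranges.
    g1, s1, _ = grid_search(np.logspace(0, 4, 9), np.logspace(-2, 2, 9), error_fn)
    # Stage 2: finer grid restricted to the neighbourhood of the best coarse point.
    return grid_search(np.linspace(g1 / 3, g1 * 3, 7),
                       np.linspace(s1 / 3, s1 * 3, 7), error_fn)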
(2) Genetic Algorithm-Based Technique
In order to reduce the computational time required to determine the optimal hyper parameter values without sacrificing accuracy, a numerical gradient-based optimization technique can be used. However,
it has been found that the SVM model selection criteria often have multiple local optima with respect to the hyper parameter values [28]; in such cases, gradient-based methods risk being trapped in
poor local optima. Considering this fact, we use a genetic algorithm-based global optimization technique for determining the hyper parameter values.
In the GA-based technique, the task of selecting the hyper parameters is treated as an optimum-searching task, and each point in the search space represents one feasible solution (a specific set of
hyper parameters). Each feasible solution is marked by its estimated generalization ability, and determining a solution amounts to finding an extreme point in the search space.
An outline of a simple GA-based process is shown in Figure 5. The chromosomes consist of two parts, one for each hyper parameter (the regularization parameter and the kernel parameter). The encoding
of the hyper parameters into a chromosome is a key issue. A real-coded scheme is used as the representation of the parameters in this work; therefore, the solution space coincides with the chromosome
space. In order to produce the initial population, the initial values of the design parameters are distributed evenly in the solution space. The selection of the population size is one of the factors
that affect the performance of the GA. The GA evaluation duration is proportional to the population size. If the population size is too large, a prohibitive amount of time for optimization will be
required. On the other hand, if the population size is too small, the GA can prematurely converge to a suboptimal solution, thereby reducing the final solution quality. There is no generally accepted
theory for determining the optimal population size. Usually, it is determined by experimentation or experience.
During the evolutionary process of the GA, a model is trained with the current hyper parameter values. The hold-out method as well as the k-fold cross-validation method are used for estimating the
generalization error. The fitness function is an important factor for obtaining SVMs that provide satisfactory and stable results: it expresses the user's objective and favours SVMs with satisfactory
generalization ability. The fitness of the chromosomes in the present work is determined by the average relative error (ARE) calculated over the test samples; the fitness function is defined as a
decreasing function of the ARE, so that maximizing the fitness value corresponds to minimizing the predicted error. The ARE is the average, over the test data, of the relative error between the SVM
estimator output and the corresponding SPICE-simulated value. The fitness of each chromosome is taken to be the average of five repetitions; this reduces the stochastic variability of the model
training process in the GA-based LS-SVM.
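The error and fitness measures can be sketched as below. The transformation from ARE to fitness is assumed to be 1/(1 + ARE); the paper's exact fitness expression is not reproduced here, only the property that maximizing fitness minimizes the error.

import numpy as np

def average_relative_error(y_pred, y_spice):
    # Mean over the test samples of |(estimate - SPICE value) / SPICE value|;
    # assumes the SPICE targets are nonzero.
    return np.mean(np.abs((y_pred - y_spice) / y_spice))

def fitness(y_pred, y_spice):
    # Assumed fitness transformation: larger fitness corresponds to smaller ARE.
    return 1.0 / (1.0 + average_relative_error(y_pred, y_spice))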
The genetic operators include the three basic operators: selection, crossover, and mutation. The roulette wheel selection technique is used for the selection operation, in which the probability of
selecting a solution is proportional to its fitness relative to the total fitness of the population. Besides, in order to keep the best chromosome in every generation, the idea of elitism is adopted.
The crossover operator uses a pair of real-parameter decision variable vectors to create a new pair of offspring vectors. For two parent solutions, the offspring is determined through a blend
crossover operator (BLX-alpha), which randomly picks a solution in an interval that extends the range between the two parents by a fraction alpha on either side; if alpha is zero, this crossover
creates a random solution in the range between the two parents. It has been reported for a number of test cases that BLX-0.5 (with alpha = 0.5) performs better than BLX operators with any other alpha
value. The mutation operator is used with a low probability to alter the solutions locally, in the hope of creating better solutions; its purpose is to maintain a good diversity of the population. The
normally distributed mutation operator is used in this work: a zero-mean Gaussian perturbation with a user-defined, problem-dependent standard deviation is added to a solution to obtain the new
solution, and it must be ensured that the new solution lies within the specified upper and lower limits. When the difference between the estimated error of the child population and that of the parent
population is less than a predefined threshold over a certain fixed number of generations, the whole process is terminated and the corresponding hyper parameter pair is taken as the output.
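The two real-coded operators can be sketched as follows; the bounds, the mutation probability, and the mutation standard deviation are user-supplied placeholders, as in the text.

import numpy as np

rng = np.random.default_rng(0)

def blx_crossover(p1, p2, alpha=0.5):
    # BLX-alpha: each offspring gene is drawn uniformly from the parents' range
    # extended by a fraction alpha on both sides (alpha = 0.5 gives BLX-0.5).
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

def gaussian_mutation(x, std, lower, upper, prob=0.05):
    # With low probability, perturb each gene by zero-mean Gaussian noise and
    # clip the result back into the allowed range.
    mask = rng.random(np.shape(x)) < prob
    return np.clip(x + mask * rng.normal(0.0, std, np.shape(x)), lower, upper)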
It may be mentioned here that there is no fixed method for defining the GA parameters, which are all empirical in nature. However, the optimality of the hyper parameter values is dependent upon the
values of the GA parameters. In the present work, the values of the GA parameters are selected primarily by a trial and error method over several runs.
5.3. Quality Measures
Statistical functions are generally used to assess the quality of the generated estimator. The ARE function defined in (17) is one such measure. Another commonly used measure is the correlation
coefficient (R), a measure of how closely the LS-SVM outputs fit the target values. It is a number between 0 and 1: if there is no linear relationship between the estimated values and the actual
targets, the correlation coefficient is 0, and if it is equal to 1.0, there is a perfect fit between the targets and the outputs. Thus, the higher the correlation coefficient, the better.
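A sketch of the correlation-coefficient computation is given below; the Pearson correlation between the estimated and target values is assumed here.

import numpy as np

def correlation_coefficient(y_pred, y_target):
    # Closer to 1 means the LS-SVM outputs track the SPICE targets more closely.
    y_pred = np.asarray(y_pred, float)
    y_target = np.asarray(y_target, float)
    dp, dt = y_pred - y_pred.mean(), y_target - y_target.mean()
    return float(np.sum(dp * dt) / np.sqrt(np.sum(dp ** 2) * np.sum(dt ** 2)))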
6. Topology Sizing Methodology Using GA
The topology sizing process is defined as the task of determining the topology parameters (specification parameters of the constituent component blocks) of a high-level topology such that the desired
specifications of the system are satisfied with optimized performances. In this section, we discuss a genetic algorithm-based methodology for a topology sizing process employing the constructed
LS-SVM performance models.
An outline of the flow is shown in Figure 6. A high-level topology is regarded as a multidimensional space, in which the topology parameters are the dimensions. The valid design space for a
particular application consists of those points which satisfy the design constraints. The optimization algorithm searches in this valid design space for the point which optimizes a cost function. The
optimization targets, that is, the performance parameters to be optimized and system specifications to be satisfied are specified by the user. The GA optimizer generates a set of chromosomes, each
representing a combination of topology parameters in the given design space. Performance estimation models for estimating the performances of a topology of the entire system are constructed by
combining the LS-SVM models of the individual component blocks through analytical formulae. The performance estimation models take each combination of topology parameters and produce an estimation of
the desired performance cost of the topology as the output. A cost function is computed using these estimated performance values. The chromosomes are updated according to their fitness, related to
the cost function. This process continues until a desired cost function objective is achieved or a maximum number of iterations is reached.
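A highly simplified sketch of such a sizing loop is shown below. The block-level estimators, the weights, and the specification-penalty function are placeholders for the combined LS-SVM models and the user constraints; the evolutionary step here uses plain truncation selection with Gaussian perturbation rather than the full set of GA operators described in Section 5.

import numpy as np

def cost(theta, estimators, weights, spec_penalty):
    # theta: vector of topology parameters (e.g., OTA transconductances).
    perf = np.array([m(theta) for m in estimators])   # estimated performances
    return float(np.dot(weights, perf) + spec_penalty(theta))

def size_topology(estimators, weights, spec_penalty, lower, upper,
                  pop_size=50, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(pop_size, lower.size))
    for _ in range(generations):
        costs = np.array([cost(p, estimators, weights, spec_penalty) for p in pop])
        parents = pop[np.argsort(costs)[: pop_size // 2]]       # keep the better half
        children = parents + rng.normal(0.0, 0.05, parents.shape) * (upper - lower)
        pop = np.clip(np.vstack([parents, children]), lower, upper)
    costs = np.array([cost(p, estimators, weights, spec_penalty) for p in pop])
    return pop[np.argmin(costs)]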
7. Numerical Results
In this section, we provide experimental results demonstrating the methodologies described above. The entire methodology has been implemented in the MATLAB environment and the training of the LS-SVM
has been done using the LS-SVM MATLAB toolbox [35].
7.1. Experiment 1
A two-stage CMOS operational transconductance amplifier (OTA) is shown in Figure 7. The technology is a 0.18 μm CMOS process with a supply voltage of 1.8 V. The transistor-level parameters along with
the various feasibility constraints are listed in Table 2. The functional constraints ensure that all the transistors are on and in the saturation region with some user-defined margin. We consider the
problem of modeling the input-referred thermal noise, power consumption, and output impedance as functions of the DC gain, bandwidth, and slew rate. From the sample space defined by the transistor
sizes, a set of 5000 samples is generated using a Halton sequence generator. These are simulated through AC analysis, operating point analysis, noise analysis, and transient analysis using the SPICE
program. Out of all the samples, only 1027 are found to satisfy the functional and performance constraints listed in Table 2.
The estimation functions are generated using the LS-SVM technique. The generalization errors are estimated through the hold-out method and the 5-fold cross-validation method. The hyper parameters are
computed through the grid search and the GA-based techniques. In the grid search technique, the hyper parameters are restricted within fixed ranges, and the grid search algorithm is performed with
step sizes of 0.6 in one hyper parameter and 10 in the other; these settings are fixed based on heuristic estimations and repeated trials. The determined hyper parameter values along with the quality
measures and the training time are reported in Tables 3 and 4 for the hold-out method and the cross-validation method, respectively. From the results, we observe that the average relative errors for
the test samples are low (i.e., the generalization ability of the models is high) when the errors are estimated using the cross-validation method. However, the cross-validation method is much slower
compared to the hold-out method.
For the GA, the population size is taken to be ten times the number of optimization variables. The crossover probability and the mutation probability are taken as 0.8 and 0.05, respectively; these
are determined through a trial and error process. The hyper parameter values and the quality measures are reported in Tables 5 and 6. The same observations as above are also noted from these results.
A comparison between the grid-search technique and the GA-based technique with respect to accuracy (ARE), correlation coefficient (R), and required training time is made in Table 7. All the
experiments are performed on a PC with a PIV 3.00 GHz processor and 512 MB RAM. We observe from the comparison that the accuracies of the SVM models constructed using the grid search technique and the
GA-based technique are almost the same. However, the GA-based technique is at least ten times faster than the grid search method. From (1), we conclude that the construction cost of the GA-based
method is much lower than that of the grid search-based method, since the data generation time is the same for both methods.
The scatter plots of SPICE-simulated and LS-SVM estimated values for normalized test data of the three models are shown in Figures 8(a), 8(b), and 8(c), respectively. These scatter plots illustrate
the correlation between the SPICE simulated and the LS-SVM-estimated test data. The correlation coefficients are very close to unity. Perfect accuracy would result in the data points forming a
straight line along the diagonal axis.
7.2. Experiment 2
The objective of this experiment is to quantitatively compare our methodology with EsteMate [21]. The power consumption model is reconstructed using the EsteMate technique. The specification parameter
space is sampled randomly and a set of 5000 samples is considered. For each selected sample, an optimal sizing is performed and the resulting power consumption is measured. Following the EsteMate
procedure, the sizing is done with a simulated annealing-based optimization procedure and standard analytical equations relating transistor sizes to the specification parameters [36]. Of these, 3205
samples are accepted and the rest are rejected. The determination of the training set took 10 hours of CPU time. The training is done through an artificial neural network structure with two hidden
layers; the number of neurons in the first layer is 9 and the number of neurons in the second layer is 6. The hold-out method is used for estimating the generalization error.
A comparison between the two methodologies is reported in Table 8. From the results, we find that the data generation time is much lower in our method than in the EsteMate method. In addition, we
find that the accuracy of our method is better than that of the EsteMate method. The experimental observations verify the theoretical arguments given in Section 2.1.
7.3. Experiment 3
The objective of this experimentation is to demonstrate the process of constructing high-level performance model of a complete system and the task of topology sizing.
System Considerations
We choose a complete analog system, the interface electronics for a MEMS capacitive sensor, as shown in Figure 9(a). In this configuration, a half-bridge consisting of the sense capacitors is formed
and driven by two pulse signals with a 180° phase difference. The amplitude of the bridge output is proportional to the capacitance change and is amplified by a voltage amplifier. The final output
voltage depends on the nominal capacitance value, the parasitic capacitance value at the sensor node, the amplitude of the applied ac signal, and the gain of the system, which is set according to the
desired output voltage sensitivity. The topology employs a chopper modulation technique for low-noise operation.
The desired functional specifications to be satisfied are (i) output voltage sensitivity (i.e., the total gain, since the input sensitivity is known) and (ii) cutoff frequency of the filter. The
performance parameters to be optimized are (i) input-referred thermal noise, (ii) total power consumption, and (iii) parasitic capacitance at the sensor node. The functional specifications and
design constraints for the system are based on [37] and are listed in Table 9.
Identification of the Component Blocks and the Corresponding Performance Models
The synthesizable component blocks are the preamplifier (PA), inverter (IN) of the phase demodulator, low-pass filter (LF), and the output amplifier (OA). These are constructed using OTAs and
capacitors. Figure 9(b) shows the implementations of the amplifier and the filter blocks using OTAs and capacitors [38, 39].
High-level performance models for the synthesizable component blocks are constructed corresponding to the performance parameters: (i) input-referred thermal noise, (ii) power consumption, and (iii)
sensor node parasitics. The specification parameters which have a dominant influence on the first two performances, as well as on the functional specifications (the output voltage sensitivity and the
cutoff frequency), are the transconductance values of all the OTAs involved. On the other hand, for the last performance parameter, the sensor node parasitics, the transconductance value of the first
OTA of the preamplifier block is the single design parameter. Thus, the transconductance values of the OTAs are considered as high-level design parameters. In summary, we construct three performance
models (input-referred thermal noise, power consumption, and sensor node parasitics) as functions of the transconductance values of the OTAs.
Construction of Performance Models for the PA Block
The geometry constraints and the feasibility constraints for the PA block of the topology are tabulated in Table 10. Similar types of constraints are considered for the other component blocks as well.
The input-output parameters of the models to be constructed are extracted through the techniques discussed earlier. The sensor node parasitic capacitance is measured utilizing the half-bridge circuit
shown in Figure 9(a), with only one amplifier block: for given values of the sense and nominal capacitances (in fF), a square-wave signal of fixed amplitude (in mV) is applied and a transient analysis
is performed; by measuring the signal at the sensor node, the parasitic capacitance is calculated using (22).
Table 11 shows the hyper parameter values, percentage average relative error, and correlation coefficient of the constructed performance models for the preamplifier, with respect to SPICE-simulated data.
Reusability of Models and Construction of High-Level Model for the Complete System
The performance models corresponding to the noise and the power consumption of the PA block are reused for the other component blocks. This is because all the component blocks have topological
similarities and each of them is constructed from OTA circuits, as demonstrated in Figure 9(b). The reusability of individual high-level models within a complete system is thus exploited here.
The high-level models of the PA, IN, LF, and OA blocks are combined analytically to construct the model of the complete system. The input-referred noise of the total system is obtained by combining
the thermal noise models of the PA block, the IN block of the phase demodulator, and so on, with the contributions of the later blocks referred to the input through the gain of the preamplifier, and
the total power consumption is obtained by summing the power models of all the blocks. It is to be noted that the noise models of the IN, LF, and OA blocks need not be constructed again; they are the
same as that of the PA block, and the same reusability principle is applied for the power consumption model of all the blocks. The sensor node parasitics is the same as the input parasitics of the
preamplifier. It is to be noted that, while constructing the high-level performance model of a complete system, the interactions between the transistors are taken care of when constructing the
component-level performance models using SPICE simulation data, while the coupling between the blocks is considered through analytical equations.
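The kind of analytical combination used here can be sketched as follows. The cascade rules assumed below (later-stage noise referred to the input through the preamplifier gain, block powers summed) are standard, but the paper's exact combination equations are not reproduced; noise_model and power_model stand for the reused PA-block LS-SVM estimators, and the block names are taken from the text.

import numpy as np

def system_estimates(gm, noise_model, power_model, pa_gain):
    # gm: dict mapping each block (PA, IN, LF, OA) to the transconductance it uses.
    noise_sq, power = 0.0, 0.0
    for blk in ("PA", "IN", "LF", "OA"):
        n = noise_model(gm[blk])          # block thermal noise from the reused model
        if blk != "PA":
            n = n / pa_gain               # later blocks count less when referred to the input
        noise_sq += n ** 2
        power += power_model(gm[blk])     # total power: sum of the block powers
    return np.sqrt(noise_sq), power       # (input-referred noise, total power)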
Optimization Problem Formulation and Results
With these, the optimization problem for the topology sizing task is formulated as a weighted-sum minimization of the estimated input-referred noise, power consumption, and sensor node parasitics,
subject to the output voltage sensitivity and cutoff frequency specifications, where the weights reflect the relative importance given to each objective.
The target output voltage sensitivity of the system (i.e., the total gain of the system) is taken as 145 mV/g and the cutoff frequency is taken as 35 kHz. The synthesis procedure took 181 seconds on
a PC with a PIV 3.00 GHz processor and 512 MB RAM. The crossover and mutation probabilities are taken as 0.85 and 0.05, respectively; these are determined through a trial and error process. Table 12
lists the synthesized values of the topology parameters, as obtained from the synthesis procedure.
To validate the synthesis procedure, we simulate the entire system at the circuit level using SPICE. Exact values of the synthesized transconductances are often not achievable; in such cases, the
nearest neighbouring values are realized. An approximate idea of the transistor sizes required to implement the synthesized values is obtained from the large set of data gathered during the estimator
construction. A comparison between the predicted performances and the simulated values is presented in Table 13. We observe that the relative error between predicted and simulated performances is
acceptable in each case. However, for the output sensitivity and the cutoff frequency, the error is high. This is because the circuit-level nonideal effects have not been considered in the topology
sizing process while formulating the final cost function and constraint functions. Following conventional procedure, this has been done purposefully in order to keep the functions simple and make the
process converge smoothly [1, 27]. The acceptability and feasibility of the results are ensured to a large extent, since the utilized model is based on SPICE simulation results. The robustness of the
results, however, could be verified by process corner analysis [27].
8. Conclusion
This paper presents a methodology for generation of high-level performance models for analog component blocks using nonparametric regression technique. The transistor sizes of the circuit-level
implementations of the component blocks along with a set of geometry constraints applied over them define the sample space. Performance data are generated by simulating each sampled circuit
configuration through SPICE. Least squares support vector machine (LS-SVM) is used as a regression function. The generalization ability of the constructed models has been estimated through a hold-out
method and a 5-fold cross-validation method. Optimal values of the model hyper parameters are determined through a grid search-based technique and a GA-based technique. The high-level models of the
individual component blocks are combined analytically to construct the high-level model of a complete system. The entire methodology has been implemented in the MATLAB environment. The methodology has
been demonstrated with a set of experiments. The advantages of the present methodology are that the constructed models are accurate with respect to real circuit-level simulation results, fast to
evaluate and have a good generalization ability. In addition, the model construction time is low and the construction process does not require any detailed knowledge of circuit design. The
constructed performance models have been used to implement a GA-based topology sizing process. The process has been demonstrated by considering the interface electronics for a MEMS capacitive
accelerometer sensor as an example. It may be noted that multiobjective optimization algorithms [40] can also be used in the proposed approach for solving (25).
Acknowledgment
The first author would like to thank the Department of Science and Technology, Government of India, for partial financial support of the present work through the Fast Track Young Scientist Scheme, no. SR/FTP/
References
1. E. S. J. Martens and G. G. E. Gielen, High-Level Modeling and Synthesis of Analog Integrated Systems, Springer, New York, NY, USA, 2008.
2. G. G. E. Gielen, "CAD tools for embedded analogue circuits in mixed-signal integrated systems on chip," IEE Proceedings: Computers and Digital Techniques, vol. 152, no. 3, pp. 317–332, 2005.
3. S. Y. Lee, C. Y. Chen, J. H. Hong, R. G. Chang, and M. P.-H. Lin, "Automated synthesis of discrete-time sigma-delta modulators from system architecture to circuit netlist," Microelectronics Journal, vol. 42, pp. 347–357, 2010.
4. E. Martens and G. G. E. Gielen, "Classification of analog synthesis tools based on their architecture selection mechanisms," Integration, the VLSI Journal, vol. 41, no. 2, pp. 238–252, 2008.
5. S. Pandit, C. Mandal, and A. Patra, "An automated high-level topology generation procedure for continuous-time ΣΔ modulator," Integration, the VLSI Journal, vol. 43, no. 3, pp. 289–304, 2010.
6. S. Pandit, S. K. Bhattacharya, C. Mandal, and A. Patra, "A fast exploration procedure for analog high-level specification translation," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 27, no. 8, pp. 1493–1497, 2008.
7. G. G. E. Gielen, "Modeling and analysis techniques for system-level architectural design of telecom front-ends," IEEE Transactions on Microwave Theory and Techniques, vol. 50, no. 1, pp. 360–368, 2002.
8. J. Crols, S. Donnay, M. Steyaert, and G. G. E. Gielen, "High-level design and optimization tool for analog RF receiver front-ends," in Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD '95), pp. 550–553, November 1995.
9. F. Medeiro, B. Perez-Verdu, A. Rodriguez-Vazquez, and J. L. Huertas, "Vertically integrated tool for automated design of ΣΔ modulators," IEEE Journal of Solid-State Circuits, vol. 30, no. 7, pp. 762–772, 1995.
10. H. Tang and A. Doboli, "High-level synthesis of ΔΣ modulator topologies optimized for complexity, sensitivity, and power consumption," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 25, no. 3, pp. 597–607, 2006.
11. Y. Wei, A. Doboli, and H. Tang, "Systematic methodology for designing reconfigurable ΔΣ modulator topologies for multimode communication systems," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 26, no. 3, pp. 480–495, 2007.
12. S. Pandit, C. Mandal, and A. Patra, "Systematic methodology for high-level performance modeling of analog systems," in Proceedings of the 22nd International Conference on VLSI Design—Held Jointly with the 7th International Conference on Embedded Systems (VLSID '09), pp. 361–366, January 2009.
13. E. Lauwers and G. G. E. Gielen, "Power estimation methods for analog circuits for architectural exploration of integrated systems," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 10, no. 2, pp. 155–162, 2002.
14. R. A. Rutenbar, G. G. E. Gielen, and J. Roychowdhury, "Hierarchical modeling, optimization, and synthesis for system-level analog and RF designs," Proceedings of the IEEE, vol. 95, no. 3, pp. 640–669, 2007.
15. A. Nunez-Aldana and R. Vemuri, "An analog performance estimator for improving the effectiveness of CMOS analog systems circuit synthesis," in Proceedings of Design, Automation and Test in Europe Conference and Exhibition (DATE '99), pp. 406–411, Munich, Germany, 1999.
16. A. Doboli, N. Dhanwada, A. Nunez-Aldana, and R. Vemuri, "A two-layer library-based approach to synthesis of analog systems from VHDL-AMS specifications," ACM Transactions on Design Automation of Electronic Systems, vol. 9, no. 2, pp. 238–271, 2004.
17. M. Del Mar Hershenson, S. P. Boyd, and T. H. Lee, "Optimal design of a CMOS op-amp via geometric programming," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 20, no. 1, pp. 1–21, 2001.
18. P. Mandal and V. Visvanathan, "CMOS op-amp sizing using a geometric programming formulation," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 20, no. 1, pp. 22–38, 2001.
19. W. Daems, G. G. E. Gielen, and W. Sansen, "Simulation-based generation of posynomial performance models for the sizing of analog integrated circuits," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 5, pp. 517–534, 2003.
20. X. Li, P. Gopalakrishnan, Y. Xu, and L. T. Pileggi, "Robust analog/RF circuit design with projection-based performance modeling," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 26, no. 1, pp. 2–15, 2007.
21. G. van der Plas, J. Vandenbussche, G. G. E. Gielen, and W. Sansen, "EsteMate: a tool for automated power and area estimation in analog top-down design and synthesis," in Proceedings of the IEEE Custom Integrated Circuits Conference (CICC '97), pp. 139–142, May 1997.
22. X. Ren and T. Kazmierski, "Performance modelling and optimisation of RF circuits using support vector machines," in Proceedings of the 14th International Conference on Mixed Design of Integrated Circuits and Systems (MIXDES '07), pp. 317–321, June 2007.
23. T. Kiely and G. G. E. Gielen, "Performance modeling of analog integrated circuits using least-squares support vector machines," in Proceedings of Design, Automation and Test in Europe Conference and Exhibition (DATE '04), pp. 448–453, February 2004.
24. M. Ding and R. Vemuri, "A combined feasibility and performance macromodel for analog circuits," in Proceedings of the 42nd Design Automation Conference (DAC '05), pp. 63–68, June 2005.
25. H. Li and Y. Zhang, "An algorithm of soft fault diagnosis for analog circuit based on the optimized SVM by GA," in Proceedings of the 9th International Conference on Electronic Measurement and Instruments (ICEMI '09), pp. 41023–41027, August 2009.
26. M. Barros, J. Guilherme, and N. Horta, "Analog circuits optimization based on evolutionary computation techniques," Integration, the VLSI Journal, vol. 43, no. 1, pp. 136–155, 2010.
27. H. E. Graeb, Analog Design Centering and Sizing, Springer, New York, NY, USA, 2007.
28. V. Vapnik, Statistical Learning Theory, Springer, New York, NY, USA, 1998.
29. J. A. K. Suykens, T. V. Gestel, J. D. Brabanter, B. D. Moor, and V. J. Vandewalle, Least Squares Support Vector Machines, World Scientific, 2002.
30. H. Graeb, S. Zizala, J. Eckmueller, and K. Antreich, "The sizing rules method for analog integrated circuit design," in Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD '01), pp. 343–349, San Jose, Calif, USA, November 2001.
31. G. P. Box, W. G. Hunter, and J. S. Hunter, Statistics for Experimenters: An Introduction to Design, Analysis and Model Building, Wiley, New York, NY, USA, 1978.
32. Q. J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, "Artificial neural networks for RF and microwave design—from theory to practice," IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 4, pp. 1339–1350, 2003.
33. K. Duan, S. S. Keerthi, and A. N. Poo, "Evaluation of simple performance measures for tuning SVM hyperparameters," Neurocomputing, vol. 51, pp. 41–59, 2003.
34. G. Rubio, H. Pomares, I. Rojas, and L. J. Herrera, "A heuristic method for parameter selection in LS-SVM: application to time series prediction," International Journal of Forecasting, vol. 27, pp. 725–739, 2010.
35. "LS-SVM Toolbox," February 2003, http://www.esat.kuleuven.ac.be/sista/lssvmlab/.
36. P. E. Allen and D. R. Holberg, CMOS Analog Circuit Design, Oxford University Press, 2004.
37. J. Wu, G. K. Fedder, and L. R. Carley, "A low-noise low-offset capacitive sensing amplifier for a 50-μg/√Hz monolithic CMOS MEMS accelerometer," IEEE Journal of Solid-State Circuits, vol. 39, no. 5, pp. 722–730, 2004.
38. E. Sánchez-Sinencio and J. Silva-Martínez, "CMOS transconductance amplifiers, architectures and active filters: a tutorial," IEE Proceedings: Circuits, Devices and Systems, vol. 147, no. 1, pp. 3–12, 2000.
39. R. Schaumann and M. E. Van Valkenburg, Design of Analog Filters, Oxford University Press, 2004.
40. I. Guerra-Gómez, E. Tlelo-Cuautle, T. McConaghy et al., "Sizing mixed-mode circuits by multi-objective evolutionary algorithms," in Proceedings of the 53rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS '10), pp. 813–816, August 2010.
Contemporary Mathematics
1990; 191 pp; softcover
Volume: 109
ISBN-10: 0-8218-5116-0
ISBN-13: 978-0-8218-5116-6
List Price: US$46
Member Price: US$36.80
Order Code: CONM/109
The AMS Special Session on Combinatorial Group Theory--Infinite Groups, held at the University of Maryland in April 1988, was designed to draw together researchers in various areas of infinite group
theory, especially combinatorial group theory, to share methods and results. The session reflected the vitality of and interest in infinite group theory, with eighteen speakers presenting lectures
covering a wide range of group-theoretic topics, from purely logical questions to geometric methods. The heightened interest in classical combinatorial group theory was reflected in the sheer volume
of work presented during the session.
This book consists of eighteen papers presented during the session. Comprising a mix of pure research and exposition, the papers should be sufficiently understandable to the nonspecialist to convey a
sense of the direction of this field. However, the volume will be of special interest to researchers in infinite group theory and combinatorial group theory, as well as to those interested in
low-dimensional (especially three-manifold) topology.
• G. Baumslag -- Some reflections on finitely generated metabelian groups
• B. Fine and G. Rosenberger -- Conjugacy separability of Fuchsian groups and related questions
• B. Fine, F. Rohl, and G. Rosenberger -- Two-generator subgroups of certain HNN groups
• T. Fournelle and K. Weston -- A geometric approach to some group presentations
• A. Gaglione and D. Spellman -- \(\gamma_{n+1}(F)\) and \(F/\gamma_{n+1}(F)\) revisited
• A. Gaglione and H. Waldinger -- The commutator collection process
• L. Kappe and R. Morse -- Levi-properties in metabelian groups
• K. Kuiken and J. Masterson -- Monodromy groups of differential equations on Riemann surfaces of genus \(1\)
• J. Labute -- The Lie algebra associated to the lower central series of a free product of cyclic groups of prime order \(p\)
• J. Levine -- Algebraic closure of groups
• A. Macbeath -- Automorphisms of Riemann surfaces
• I. Macdonald and B. H. Neumann -- On commutator laws in groups, \(2\)
• J. McCool -- Two-dimensional linear characters and automorphisms of free groups
• J. Ratcliffe -- On the uniqueness of amalgamated product decompositions of a group
• L. Ribes -- The Cartesian subgroup of a free product of profinite groups
• D. Robinson -- A note on local coboundaries for locally nilpotent groups
• C. Y. Tang -- Some results on \(1\)-relator quotients of free products
• M. Tretkoff -- Covering spaces, subgroup separability, and the generalized M. Hall property
converting function to include vectors
I have the following function that is being called from an algorithm.
double rosen(double x[])
{
    return (100*(x[1]-x[0]*x[0])*(x[1]-x[0]*x[0])+(1.0-x[0])*(1.0-x[0]));
}
however I have tried to adjust the function so that it is mine:
double rosen(double x[])
{
    double c[5];
    double Fitted_Curve[5];
    int i;
    double Actual_Output[5]={1.2, 2.693, 4.325, 6.131, 8.125};
    double Input[5]={1, 2, 3, 4, 5};

    for (i = 1; i < 5; i++)
    {
        c[i] = -1/atanh((exp(x[0]*2*pi)-cosh(x[0]*x[1]*Actual_Output[i]))/sinh(x[0]*x[1]*Actual_Output[i]));
        Fitted_Curve[i] = (1/(x[0]*x[1]))*(c[i]-asinh(sinh(c[i])*exp(x[0]*Input[i]*2*pi)));
    }
}
the original code had no vectors except the ones it was solving for, so i am not sure how to return my function.
Well, for sure you're not going to return an array... C can't do that.
What exactly are you trying to return?
double rosen(double x[])
{
    double c[5];
    double Fitted_Curve[5];
    int i;
    double Actual_Output[5]={1.2, 2.693, 4.325, 6.131, 8.125};
    double Input[5]={1, 2, 3, 4, 5};

    for (i = 1; i < 5; i++){
        c[i] = -1/atanh((exp(x[0]*2*pi)-cosh(x[0]*x[1]*Actual_Output[i]))/sinh(x[0]*x[1]*Actual_Output[i]));
        Fitted_Curve[i] = (1/(x[0]*x[1]))*(c[i]-asinh(sinh(c[i])*exp(x[0]*Input[i]*2*pi)));
    }
}
Everything in red (the arrays and variables declared inside rosen) is local to your function and ceases to exist once your function exits. Perhaps you want to pass your solution array to your function as an argument and then adjust it in your function.
What are you saying here? What do you mean by "minimize parameters..."?
I am trying to return the error, so since posting that I have:

for (i = 0; i <= 4; i++)
{
    c[i] = -1/atanh((exp(x[0]*2*pi)-cosh(x[0]*x[1]*Actual_Output[i]))/sinh(x[0]*x[1]*Actual_Output[i]));
    Fitted_Curve[i] = (1/(x[0]*x[1]))*(c[i]-asinh(sinh(c[i])*exp(x[0]*Input[i]*2*pi)));
    Error_Vector[i] = Actual_Output[i]-Fitted_Curve[i];
}
for (i = 0; i <= 4; i++)
{
    sum = sum + Error_Vector[i]*Error_Vector[i];
}
return sum;
I think this code is calculating a c value, then using that to calculate a Fitted_Curve value, then an Error value; it calculates all of them and then calculates the sum of the errors. It then
returns the sum.
The code runs, however it doesn't manage to produce real values, but #nans...
Ok, so what is the question? The only thing I don't see between your posts is you initializing sum to a value before using it in your final for loop, e.g. double sum=0; at the top of your function.
Define 'produce real values'. The only thing this function returns is sum. Did you initialize sum like I pointed out? What value of sum are you getting? Note: You cannot access any of your arrays
you defined in your function outside of your function.
If you're getting #nan, it might help to make sure that none of your x's are zero, since you are dividing by them.
Yeah, I had initialised it.
The problem is that the function I have shown you is sent by main to another function that runs a downhill simplex method on it.
The function I have been showing you is, in full:
double rosen(double x[])
{
    double c[5];
    double Fitted_Curve[5];
    double Error_Vector[5];
    int i;
    double Actual_Output[5]={1.2, 2.693, 4.325, 6.131, 8.125};
    double Input[5]={1, 2, 3, 4, 5};
    double sum = 0;

    for (i = 0; i <= 4; i++)
    {
        c[i] = -1/atanh((exp(x[0]*2*pi)-cosh(x[0]*x[1]*Actual_Output[i]))/sinh(x[0]*x[1]*Actual_Output[i]));
        Fitted_Curve[i] = (1/(x[0]*x[1]))*(c[i]-asinh(sinh(c[i])*exp(x[0]*Input[i]*2*pi)));
        Error_Vector[i] = Actual_Output[i]-Fitted_Curve[i];
    }
    for (i = 0; i <= 4; i++)
    {
        sum = sum + Error_Vector[i]*Error_Vector[i];
    }
    return sum;
}
it is sent to the function simplex in main such that:
int main()
{
    double start[] = {1,1.0,1};
    double min;
    int i;

    /* the call to simplex() (assigning its result to min) was lost from the original post */

    for (i=0;i<3;i++) {
        /* loop body (presumably printing start[i]) was lost from the original post */
    }
    return 0;
}
my_constraints is just another function sent to simplex
and when i say nan, i mean 1.#QNAN0
Ok, well this is telling you that your math is resulting in an error. It could be anything from:
1. the values in your x[] array being 0 and then dividing by them
2. any of your math that uses exponents resulting in undefined exponent operations.
3. Anything where you are using your trig functions that would result in a value out of bounds.
You are going to need to go through and debug your loops to find out where your math operations are failing. I would recommend using a debugger so you could step through your loop and see the
variables of your arrays. Additionally, you could just add printf statements to watch your output as the loops cycle through.
OK thanks, I'll do that, however could you please confirm whether the functions I pasted above are doing what I think they are doing. My job is programming in Matlab, however for this task I am
required to write it in C. Therefore my understanding of programming with matrices doesn't cut the mustard in C. Ideally I would like to pass the function rosen the vectors Actual_Output and Input
from main, but I can't get the pass to work as it goes through simplex. Could you help me with that? I think I just include double[] every time I pass an array to a new function, but how do you
send it from a function to a function through a function??
Cheers Andrew
Edit: After doing what you said, the variable c is not being calculated at all, it is coming up as #QNAN0, therefore nothing else can be calculated.
Edit2: So I have narrowed it down by separating the formulation of the variable c into two parts. Trying it in Matlab, the values given by x_coeff (below) cannot be arc-tanh-ed. Therefore there is
something wrong with my equation for x_coeff:
double rosen(double x[])
{
    double c, x_coeff;
    double Fitted_Curve[5];
    double Error_Vector[5];
    int i;
    double Actual_Output[5]={1.2, 2.693, 4.325, 6.131, 8.125};
    double Input[5]={1, 2, 3, 4, 5};
    double sum = 0;

    x_coeff = (exp(x[0]*2*pi)-cosh(x[0]*x[1]*Actual_Output[0]))/sinh(x[0]*x[1]*Actual_Output[0]);
    c = ((-1)/atanh(x_coeff));
    printf("variable c = %f variable x_coeff = %f\n",c,x_coeff);
    printf("variable x[0] = %f x[1] = %f AO[0] = %f\n",x[0],x[1],Actual_Output[0]);

    for (i = 0; i <= 4; i++)
    {
        //stick c back in here if doesnt work, and remember to change c's below to arrays plus double above.
        Fitted_Curve[i] = (1/(x[0]*x[1]))*(c-asinh(sinh(c)*exp(x[0]*Input[i]*2*pi)));
        Error_Vector[i] = Actual_Output[i]-Fitted_Curve[i];
    }
    for (i = 0; i <= 4; i++)
    {
        sum = sum + Error_Vector[i]*Error_Vector[i];
    }
    printf("variable Fitted_Curve = %f\n",Fitted_Curve[0]);
    return sum;
}
The same way you got it there in the first place:
void foo( type array[] )
{
    ... do stuff on array
}

void bar( type array[] )
{
    ... do stuff on array
    foo( array ); /* pass it to foo */
}

int main( void )
{
    type array[ SOMESIZE ];
    bar( array );
    return 0;
}
Where type is whatever type you want your array to hold.
You can pass your array through multiple functions, since arrays just decay to pointers when passed, if that is what you are asking. Something like:
#include <stdio.h>

void foo(int [], int);
void foo1(int [], int);
void foo2(int [], int);

int main (void) {
    int myArray[3] = {0,1,2};
    foo(myArray, 3);
    printf("\nIn main()");
    for(int i=0; i < 3; i++)
        printf("%d ", myArray[i]);
    return 0;
}

void foo(int Array[], int size){
    for(int i=0; i < size; i++)
        Array[i] += 1;    /* loop body lost in the original post; shown here as a simple per-element update */
    foo1(Array, 3);
}

void foo1(int Array[], int size){
    for(int i=0; i < size; i++)
        Array[i] += 1;    /* likewise reconstructed */
    foo2(Array, 3);
}

void foo2(int Array[], int size){
    printf("\nIn foo2()");
    for(int i=0; i < size; i++)
        printf("%d ", Array[i]);
}
EDIT: Too slow
And a rather obvious missed opportunity...
void foot (int [], int);
void fool (int [], int);
void food (int [], int);
But then, I probably shouldn't give up my day job either...
The Identification problem in SPECT: uniqueness, non-uniqueness and stability
Seminar Room 1, Newton Institute
We study the problem of recovering both the attenuation $a$ and the source $f$ in the attenuated X-ray transform in the plane. We study the linearization as well. It turns out that there is a natural
Hamiltonian flow that determines which singularities we can recover. If the perturbation $\delta a$ is supported in a compact set that is non-trapping for that flow, then the problem is well posed.
Otherwise, it may not be, and at least in the case of radial $a$, $f$, it is not. We present uniqueness and non-uniqueness results both for the linearized and the non-linear problem, as well as a
Hölder stability estimate.
It is easy to see that there is no largest natural number. Suppose there was one, call it N. Now add one to it, forming N + 1. We know that N + 1 > N, contradicting our assertion that N was the largest. This lack of
a largest object, lack of a boundary, lack of termination in series, is of enormous importance in mathematics and physics. If there is no largest number, if there is no ``edge'' to space or time,
then in some sense they run on forever, without termination.
In spite of the fact that there is no actual largest natural number, we have learned that it is highly advantageous in many contexts to invent a pretend one. This pretend number doesn't actually exist
as a number, but rather stands for a certain reasoning process.
In fact, there are a number of properties of numbers (and formulas, and integrals) that we can only understand or evaluate if we imagine a very large number used as a boundary or limit in some
computation, and then let that number mentally increase without bound. Note well that this is a mental trick, encoding the observation that there is no largest number and so we can increase a number
parameter without bound, no more. However, we use this mental trick all of the time - it becomes a way for our finite minds to encompass the idea of unboundedness. To facilitate this process, we
invent a symbol for this unreachable limit to the counting process and give it a name.
We call this unboundedness infinity^4 - the lack of a finite boundary - and give it the symbol in mathematics.
In many contexts we will treat as a number in all of the number systems mentioned below. We will talk blithely about ``infinite numbers of digits'' in number representations, which means that the
digits simply keep on going without bound. However, it is very important to bear in mind that is not a number, it is a concept. Or at the very least, it is a highly special number, one that doesn't
satisfy the axioms or participate in the usual operations of ordinary arithmetic. For example:
These are certainly ``odd'' rules compared to ordinary arithmetic!
For a bit longer than a century now (since Cantor organized set theory and discussed the various ways sets could become infinite and set theory was subsequently axiomatized) there has been an axiom
of infinity in mathematics postulating its formal existence as a ``number'' with these and other odd properties.
Our principal use for infinity will be as a limit in calculus and in series expansions. We will use it to describe both the very large (but never the largest) and reciprocally, the very small (but
never quite zero). We will use infinity to name the process of taking a small quantity and making it ``infinitely small'' (but nonzero) - the idea of the infinitesimal, or the complementary operation
of taking a large (finite) quantity (such as a limit in a finite sum) and making it ``infinitely large''. These operations do not always make arithmetical sense, but when they do they are extremely
Global convergence of a non-monotone trust-region filter algorithm for nonlinear programming.
(English) Zbl 1130.90399
Hager, William W. (ed.) et al., Multiscale optimization methods and applications. Selected papers based on the presentation at the conference, Gainesville, FL, USA, February 26–28, 2004. New York,
NY: Springer (ISBN 0-387-29549-6/hbk). Nonconvex Optimization and Its Applications 82, 125-150 (2006).
A non-monotone variant of the trust-region SQP-filter algorithm analyzed in D. F. Fletcher, B. Guo and T. A. G. Langrish [Appl. Math. Modelling 25, No. 8, 635–653 (2001; Zbl 1076.76535)] is defined, that directly uses the dominated area of the filter as an acceptability criterion for trial points. It is proved that, under reasonable assumptions and for all possible choices of the starting point, the algorithm generates at least a subsequence converging to a first-order critical point.
90C30 Nonlinear programming
90C55 Methods of successive quadratic programming type
MATH T102 ALL Mathematics for Elementary School Teachers II
Mathematics | Mathematics for Elementary School Teachers II
T102 | ALL | tba
T102 Mathematics for Elementary School Teachers II (3 credits)
P: An appropriate score on the mathematics department’s Mathematical
Skills Assessment Test, X018, or consent of the department.
This course will provide a brief survey of topics from algebra and an
introduction to probability and statistics. I Sem., II Sem.
relation, in logic, a set of ordered pairs, triples, quadruples, and so on. A set of ordered pairs is called a two-place (or dyadic) relation; a set of ordered triples is a three-place (or triadic)
relation; and so on. In general, a relation is any set of ordered n-tuples of objects. Important properties of relations include symmetry, transitivity, and reflexivity. Consider a two-place (or
dyadic) relation R. R can be said to be symmetrical if, whenever R holds between x and y, it also holds between y and x (symbolically, (∀x) (∀y) [Rxy ⊃ Ryx]); an example of a symmetrical relation is
“x is parallel to y.” R is transitive if, whenever it holds between one object and a second and also between that second object and a third, it holds between the first and the third (symbolically,
(∀x) (∀y) (∀z ) [(Rxy ∧ Ryz) ⊃ Rxz]); an example is “x is greater than y.” R is reflexive if it always holds between any object and itself (symbolically, (∀x) Rxx); an example is “x is at least as
tall as y” since x is always also “at least as tall” as itself.
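These properties are easy to check mechanically when a finite relation is stored as a set of ordered pairs. The sketch below is an illustration added here (the function names are mine, not part of the entry):
# Check symmetry, transitivity and reflexivity of a finite two-place
# relation R, represented as a Python set of ordered pairs.
def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def is_reflexive(R, domain):
    return all((x, x) in R for x in domain)

# Example: "x is less than or equal to y" on {1, 2, 3} is reflexive and
# transitive but not symmetric.
domain = {1, 2, 3}
R = {(x, y) for x in domain for y in domain if x <= y}
print(is_symmetric(R), is_transitive(R), is_reflexive(R, domain))  # False True True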
Modest Age
A teacher, attempting to avoid revealing his real age, made the following statement:
"I'm only about twenty-three years old if you don't count weekends."
Can you work out how old the teacher really is?
If weekends are not counted, the given age only represents 5/7 of his actual age.
Using this information:
5/7 of 31 = 22.142...
5/7 of 32 = 22.857...
5/7 of 33 = 23.571...
So we can deduce that the teacher is about 32 years old.
Be careful, though. You may be thinking, "Why can't I just work out 7/5 of 23?"
Suppose that the teacher claimed to be about 24 years old; 7/5 of 24 = 33.6, which would suggest a real age of 34. However:
5/7 of 33 = 23.571...
5/7 of 34 = 24.285...
In other words, the ages 33 and 34 both map onto the reduced age of 24; therefore it is impossible to solve the problem in this particular case.
By investigating real ages mapped onto "reduced" ages, can you discover which ages produce unique values and which ages have more than one value?
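One way to carry out that investigation is to tabulate the mapping by brute force. Here is a small sketch (my own addition, assuming ages are rounded to the nearest whole year, as in the solution above):
# Group real ages by the "reduced" (weekday-only) age they map onto.
from collections import defaultdict

reduced_to_real = defaultdict(list)
for age in range(20, 60):
    reduced_to_real[round(age * 5 / 7)].append(age)

for reduced, reals in sorted(reduced_to_real.items()):
    tag = "unique" if len(reals) == 1 else "ambiguous"
    print(f"claimed {reduced}: real age(s) {reals} ({tag})")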
Non-Uniform Random Variate Generation
Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difculties arise, however, because probabilistic models with
the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using
Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The "Metropolis algorithm" has been used to solve difficult problems in statistical
physics for over forty years, and, in the last few years, the related method of "Gibbs sampling" has been applied to problems of statistical inference. Concurrently, an alternative method for solving
problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the "hybrid Monte Carlo" method. In
computer science, Markov chain sampling is the basis of the heuristic optimization technique of "simulated annealing", and has recently been used in randomized algorithms for approximate counting of
large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, and present the theory of Markov chains, and describe various Markov chain Monte Carlo
algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature
that have not yet seen wide application in artificial intelligence, but which appear relevant. As illustrative examples, I use the problems of probabilitic inference in expert systems, discovery of
latent classes from data, and Bayesian learning for neural networks.
- IN BAYESIAN STATISTICS , 1992
Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither
independent nor identically distributed complicates the assessment of convergence and numerical accuracy of the approximations to the expected value of functions of interest under the posterior. In
this paper methods from spectral analysis are used to evaluate numerical accuracy formally and construct diagnostics for convergence. These methods are illustrated in the normal linear model with
informative priors, and in the Tobit-censored regression model.
- IEEE Transactions on Evolutionary Computation , 1999
Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. EP has rather slow convergence rates, however, on some function
optimization problems. In this paper, a "fast EP" (FEP) is proposed which uses a Cauchy instead of Gaussian mutation as the primary search operator. The relationship between FEP and classical EP
(CEP) is similar to that between fast simulated annealing and the classical version. Both analytical and empirical studies have been carried out to evaluate the performance of FEP and CEP for
different function optimization problems. This paper shows that FEP is very good at search in a large neighborhood while CEP is better at search in a small local neighborhood. For a suite of 23
benchmark problems, FEP performs much better than CEP for multimodal functions with many local minima while being comparable to CEP in performance for unimodal and multimodal functions with only a
few local minima. This paper also shows the relationship between the search step size and the probability of finding a global optimum and thus explains why FEP performs better than CEP on some
functions but not on others. In addition, the importance of the neighborhood size and its relationship to the probability of finding a near-optimum is investigated. Based on these analyses, an
improved FEP (IFEP) is proposed and tested empirically. This technique mixes different search operators (mutations). The experimental results show that IFEP performs better than or as well as the
better of FEP and CEP for most benchmark problems tested.
- RISK MANAGEMENT: VALUE AT RISK AND BEYOND , 1999
Modern risk management calls for an understanding of stochastic dependence going beyond simple linear correlation. This paper deals with the static (non-time-dependent) case and emphasizes the copula
representation of dependence for a random vector. Linear correlation is a natural dependence measure for multivariate normally and, more generally, elliptically distributed risks but other dependence
concepts like comonotonicity and rank correlation should also be understood by the risk management practitioner. Using counterexamples the falsity of some commonly held views on correlation is
demonstrated; in general, these fallacies arise from the naive assumption that dependence properties of the elliptical world also hold in the non-elliptical world. In particular, the problem of
finding multivariate models which are consistent with prespecified marginal distributions and correlations is addressed. Pitfalls are highlighted and simulation algorithms avoiding these problems are
constructed. ...
The views and conclusions in this document are those of the authors and do not necessarily represent the official policies or endorsements of any of the research sponsors. How do we build distributed
systems that are secure? Cryptographic techniques can be used to secure the communications between physically separated systems, but this is not enough: we must be able to guarantee the privacy of
the cryptographic keys and the integrity of the cryptographic functions, in addition to the integrity of the security kernel and access control databases we have on the machines. Physical security is
a central assumption upon which secure distributed systems are built; without this foundation even the best cryptosystem or the most secure kernel will crumble. In this thesis, I address the
distributed security problem by proposing the addition of a small, physically secure hardware module, a secure coprocessor, to standard workstations and PCs. My central axiom is that secure
coprocessors are able to maintain the privacy of the data they process. This thesis attacks the distributed security problem from multiple sides. First, I analyze the security properties of existing
system components, both at the hardware and
- IEEE Trans. Image Processing , 2002
We present a statistical view of the texture retrieval problem by combining the two related tasks, namely feature extraction (FE) and similarity measurement (SM), into a joint modeling and
classification scheme. We show that using a consistent estimator of texture model parameters for the FE step followed by computing the Kullback--Leibler distance (KLD) between estimated models for
the SM step is asymptotically optimal in term of retrieval error probability. The statistical scheme leads to a new wavelet-based texture retrieval method that is based on the accurate modeling of
the marginal distribution of wavelet coefficients using generalized Gaussian density (GGD) and on the existence a closed form for the KLD between GGDs. The proposed method provides greater accuracy
and flexibility in capturing texture information, while its simplified form has a close resemblance with the existing methods which uses energy distribution in the frequency domain to identify
textures. Experimental results on a database of 640 texture images indicate that the new method significantly improves retrieval rates, e.g., from 65% to 77%, compared with traditional approaches,
while it retains comparable levels of computational complexity.
Random numbers are the nuts and bolts of simulation. Typically, all the randomness required by the model is simulated by a random number generator whose output is assumed to be a sequence of
independent and identically distributed (IID) U(0, 1) random variables (i.e., continuous random variables distributed uniformly over the interval
The construction and implementation of a Gibbs sampler for efficient simulation from the truncated multivariate normal and Student-t distributions is described. It is shown how the accuracy and
convergence of integrals based on the Gibbs sample may be constructed, and how an estimate of the probability of the constraint set under the unrestricted distribution may be produced. Keywords:
Bayesian inference; Gibbs sampler; Monte Carlo; multiple integration; truncated normal This paper was prepared for a presentation at the meeting Computing Science and Statistics: the Twenty-Third
Symposium on the Interface, Seattle, April 22-24, 1991. Research assistance from Zhenyu Wang and financial support from National Science Foundation Grant SES-8908365 are gratefully acknowledged. The
software for the examples may be requested by electronic mail, and will be returned by that medium. 2 1. Introduction The generation of random samples from a truncated multivariate normal
distribution, that is, a ...
this article then is to develop methodology for modeling the nonnormality of the ut, the vt, or both. A second departure from the model specification ( 1 ) is to allow for unknown variances in the
state or observational equation, as well as for unknown parameters in the transition matrices Ft and Ht. As a third generalization we allow for nonlinear model structures; that is, X t = ft(Xt-l) q-
Ut, and Yt = ht(xt) + vt, t = 1, ..., n, (2) whereft( ) and ht(. ) are given, but perhaps also depend on some unknown parameters. The experimenter may wish to entertain a variety of error
distributions. Our goal throughout the article is an analysis for general state-space models that does not resort to convenient assumptions at the expense of model adequacy
- Philosophical Transactions of the Royal Society B , 1997
We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and
lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that
only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.
x and y intercepts
Can someone help me find x intercept for
f(x) = 1 - 3(-x-2)2
(the last 2 means square, not sure how to do upper case here)
I have been stuck with this one for couple of hours.
Thanks everyone. This is my first post I am glad I found some on-line help.
An y intercept is a point where x=0.
$f(0)=1-3(-0-2)^2=1-3(-2)^2=1-3 \cdot 4=-11$
So the point $(0~,~-11)$ is an y intercept.
An x intercept is a point where y=0. Since $(-x-2)^2=(x+2)^2$, setting $f(x)=0$ gives $1-3(x+2)^2=0$, so
$(x+2)^2=\frac 13$
$\implies x+2=\pm \frac{1}{\sqrt{3}}$
$\implies x=-2\pm \frac{1}{\sqrt{3}}$
thank you that looks great. I got stuck on the last line!!!
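The roots found above can also be verified numerically; a minimal sketch, assuming plain Python:
# Check that x = -2 ± 1/sqrt(3) are the x-intercepts and (0, -11) the
# y-intercept of f(x) = 1 - 3(-x - 2)^2.
import math

def f(x):
    return 1 - 3 * (-x - 2) ** 2

for x in (-2 + 1 / math.sqrt(3), -2 - 1 / math.sqrt(3)):
    print(f"f({x:.4f}) = {f(x):.2e}")  # both values are ~0
print("f(0) =", f(0))                  # -11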
Alviso Prealgebra Tutor
Find an Alviso Prealgebra Tutor
...I have taught pre-algebra to college students for many quarters and understand well how to help them to overcome their difficulties in this subject. I have aced my SAT Math test as well as SAT
Math subject test. I believe the way to improve on the SAT Math performance is to systematically work on and improve the underlying math and logic in working the problems.
23 Subjects: including prealgebra, calculus, statistics, physics
...It was my favorite class that I took in my Math major, and I would feel very comfortable tutoring it. I tutored this subject informally with peers in the math center on campus. I took symbolic
logic in college and received an A.
35 Subjects: including prealgebra, reading, calculus, geometry
...I also believe the confidence gained by working hard and being successful is the most valuable reward of all!I have a CA multiple subjects teaching credential. I taught 4th grade for 3 years,
middle school for 6 years, and currently teach high school algebra 1 and geometry. I taught 4th grade s...
14 Subjects: including prealgebra, reading, English, algebra 1
...F. is consumed by his addictions to history, backpacking, and playing jazz, classical, blues, and old rock ‘n’ roll guitar and bass. He has been known to offer extra credit points to any
student who can either do more bar pushups or bench press more than he can but, so far, hasn’t had to award a...
14 Subjects: including prealgebra, chemistry, writing, English
...I love teaching math, English, and Psychology. I enjoy encouraging students to accept and enjoy math and understand its many benefits. I'm very skilled at making difficult information seem very
clear and simple.
18 Subjects: including prealgebra, English, reading, writing
Professor Bartlett teaches a class of 11 students. She has a visually impaired student, Louise, who must sit in the front row next to her tutor, who is also a member of this class. Assume that the
front row has eight chairs, and the tutor must be seated to the right of Louise. How many different ways can professor Bartlett assign students to sit in the first row?
@jim_thompson5910 help? :)
A big key phrase here is that "the tutor must be seated to the right of Louise", so if T is the tutor and L is Louise, then you can only have LT and NOT TL So basically LT is one person since you can't a) separate the two AND b) you can't reorder it
so 6 spaces left right
So instead of 11 students to order, you really have 11-2+1 = 10 students to order since you're combining two students to form one "student'
10P6 maybe?
There are 8 chairs in the front, but 2 are taken up and combined into one "chair" so to speak. So there are really 8-2+1 = 7 "chairs" in the front row.
So it's really 10 P 7
so 10P7?
you got it
5040 ways?
5040 is 7! or 7 P 7, so no
whats 10P7 then?
10 P 7 = (10!)/((10-7)!)
604800= (10!)/(3!)?
you got it
finally thank you!
you're welcome
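The counting argument can be confirmed directly; a short sketch, assuming Python 3.8+ for math.perm:
# Louise and her tutor form one glued "LT" block, so there are
# 9 + 1 = 10 distinct units to arrange in 8 - 1 = 7 front-row slots.
import math

print(math.perm(10, 7))                         # 604800
print(math.factorial(10) // math.factorial(3))  # same value, 10!/3!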
Westside, GA Prealgebra Tutor
Find a Westside, GA Prealgebra Tutor
...I have a bachelor's in Biology and Biochemistry from the Georgia Institute of Technology and a master's in Infectious Diseases from the University of Georgia. I am a scientist by trade but I
consider myself more of a renaissance man in the world of academia. While I specialize in the math and sciences, I am more than equipped to tutor subjects such as English, Writing, and History.
22 Subjects: including prealgebra, English, chemistry, biology
...She even studied abroad in Ireland during those three years! She's been tutoring for over 5 years in many different environments that include one-on-one tutoring in-person and online, as well
as tutoring in a group environment. She can adapt to many learning styles.
22 Subjects: including prealgebra, reading, physics, calculus
I am a junior Mathematics major at LaGrange College. I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school.
9 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...MS Word is the international standard for producing text. Mastery of Word makes it easy to produce tip-top professional-looking documents with more resources available than most of us can
imagine. It's a VERY powerful tool TOO powerful for most - but can be mastered relatively quickly with a little one-on-one training.
32 Subjects: including prealgebra, reading, calculus, physics
...I have also prepared Spanish lessons to teach to a 2nd grade class in the past. I graduated as a Summa Cum Laude Honors graduate with a Science Bachelor's degree. I have taught myself how to
efficiently study and make mostly A's while in High School and in college.
29 Subjects: including prealgebra, Spanish, chemistry, reading
Mandelbrot set
In mathematics, the Mandelbrot set, named after Benoît Mandelbrot, is a set of points in the complex plane, the boundary of which forms a fractal. Mathematically, the Mandelbrot set can be defined as the set of complex c-values for which the orbit of 0 under iteration of the complex quadratic polynomial $x_{n+1} = x_n^2 + c$ remains bounded. That is, a complex number c is in the Mandelbrot set if, when starting with $x_0 = 0$ and applying the iteration repeatedly, the absolute value of $x_n$ never exceeds a certain number (that number depends on c) however large n gets.
For example, c = 1 gives the sequence 0, 1, 2, 5, 26… which leads to infinity. As this sequence is unbounded, 1 is not an element of the Mandelbrot set.
On the other hand, c = i gives the sequence 0, i, (−1 + i), −i, (−1 + i), −i…, which is bounded, and so it belongs to the Mandelbrot set.
When computed and graphed on the complex plane, the Mandelbrot Set is seen to have an elaborate boundary, which does not simplify at any given magnification. This qualifies the boundary as a fractal.
The Mandelbrot set has become popular outside mathematics both for its aesthetic appeal and for being a complicated structure arising from a simple definition. Benoît Mandelbrot and others worked
hard to communicate this area of mathematics to the public.
The Mandelbrot set has its place in complex dynamics, a field first investigated by the French mathematicians Pierre Fatou and Gaston Julia at the beginning of the 20th century. The first pictures of it were drawn in 1978 by Robert Brooks and Peter Matelski as part of a study of Kleinian groups.
Mandelbrot studied the parameter space of quadratic polynomials in an article that appeared in 1980. The mathematical study of the Mandelbrot set really began with work by the mathematicians Adrien
Douady and John H. Hubbard, who established many of its fundamental properties and named the set in honour of Mandelbrot.
The mathematicians Heinz-Otto Peitgen and Peter Richter became well-known for promoting the set with stunning photographs, books , and an internationally touring exhibit of the German
The cover article of the August 1985 Scientific American introduced the algorithm for computing the Mandelbrot set to a wide audience. The cover featured an image created by Peitgen, et. al.
The work of Douady and Hubbard coincided with a huge increase in interest in complex dynamics and abstract mathematics, and the study of the Mandelbrot set has been a centerpiece of this field ever
since. An exhaustive list of all the mathematicians who have contributed to the understanding of this set since then is beyond the scope of this article, but such a list would notably include Mikhail
Lyubich, Curt McMullen, John Milnor, Mitsuhiro Shishikura, and Jean-Christophe Yoccoz.
Formal definition
The Mandelbrot set is defined by a family of complex quadratic polynomials
$P_c:\mathbb{C}\to\mathbb{C}$
given by
$P_c: z\mapsto z^2 + c$,
where $c$ is a complex parameter. For each $c$, one considers the behavior of the sequence $(0, P_c(0), P_c(P_c(0)), P_c(P_c(P_c(0))), \ldots)$ obtained by iterating $P_c(z)$ starting at the critical point $z = 0$, which either escapes to infinity or stays within a disk of some finite radius. The Mandelbrot set is defined as the set of all points $c$ such that the above sequence does not escape to infinity.
More formally, if $P_c^{\circ n}(z)$ denotes the $n$th iterate of $P_c(z)$ (i.e. $P_c(z)$ composed with itself $n$ times), the Mandelbrot set is the subset of the complex plane given by
$M = \left\{ c\in\mathbb{C} : \sup_{n\in\mathbb{N}} \left|P_c^{\circ n}(0)\right| < \infty \right\}.$
Mathematically, the Mandelbrot set is just a set of complex numbers. A given complex number c either belongs to M or it does not. A picture of the Mandelbrot set can be made by colouring all the points $c$ which belong to M black, and all other points white. The more colourful pictures usually seen are generated by colouring points not in the set according to how quickly or slowly the sequence $|P_c^{\circ n}(0)|$ diverges to infinity. See the section on computer drawings below for more details.
The Mandelbrot set can also be defined as the connectedness locus of the family of polynomials $P_c(z)$. That is, it is the subset of the complex plane consisting of those parameters $c$ for which the Julia set of $P_c$ is connected.
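A minimal sketch of this definition in code (my own illustration, not part of the entry): iterate $P_c$ from the critical point 0 and report whether the orbit stays bounded for a fixed number of steps.
# Approximate membership test: iterate z -> z^2 + c from z = 0 and bail out
# once |z| > 2, which guarantees escape to infinity.
def in_mandelbrot(c, max_iter=1000):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True  # no escape detected within max_iter iterations

print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
print(in_mandelbrot(1j))   # True: the orbit stays bounded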
Basic properties
The Mandelbrot set is a compact set, contained in the closed disk of radius 2 around the origin. In fact, a point $c$ belongs to the Mandelbrot set if and only if
$|P_c^{\circ n}(0)|\leq 2$
for all $n\geq 0$. In other words, if the absolute value of $P_c^{\circ n}(0)$ ever becomes larger than 2, the sequence will escape to infinity.
The intersection of $M$ with the real axis is precisely the interval $[-2, 0.25]$. The parameters along this interval can be put in one-to-one correspondence with those of the real logistic family,
$z\mapsto \lambda z(1-z),\quad \lambda\in[1,4].$
The correspondence is given by
$c = \frac{1-(\lambda-1)^2}{4}.$
In fact, this gives a correspondence between the entire parameter space of the logistic family and that of the Mandelbrot set.
The area of the Mandelbrot set is estimated to be 1.506 591 77 ± 0.000 000 08.
Douady and Hubbard have shown that the Mandelbrot set is connected. In fact, they constructed an explicit conformal isomorphism between the complement of the Mandelbrot set and the complement of the
closed unit disk. Mandelbrot had originally conjectured that the Mandelbrot set is disconnected. This conjecture was based on computer pictures generated by programs which are unable to detect the
thin filaments connecting different parts of $M$. Upon further experiments, he revised his conjecture, deciding that $M$ should be connected.
The dynamical formula for the uniformisation of the complement of the Mandelbrot set, arising from Douady and Hubbard's proof of the connectedness of $M$, gives rise to external rays of the
Mandelbrot set. These rays can be used to study the Mandelbrot set in combinatorial terms and form the backbone of the Yoccoz parapuzzle.
The boundary of the Mandelbrot set is exactly the bifurcation locus of the quadratic family; that is, the set of parameters $c$ for which the dynamics changes abruptly under small changes of $c$. It can be constructed as the limit set of a sequence of plane algebraic curves, the Mandelbrot curves, of the general type known as polynomial lemniscates. The Mandelbrot curves are defined by setting $p_0 = z$, $p_n = p_{n-1}^2 + z$, and then interpreting the set of points $|p_n(z)| = 1$ in the complex plane as a curve in the real Cartesian plane of degree $2^{n+1}$ in $x$ and $y$.
Other properties
The main cardioid and period bulbs
Upon looking at a picture of the Mandelbrot set, one immediately notices the large cardioid-shaped region in the center. This main cardioid is the region of parameters $c$ for which $P_c$ has an attracting fixed point. It consists of all parameters of the form
$c = \frac{1-(\mu-1)^2}{4}$
for some $\mu$ in the open unit disk.
To the left of the main cardioid, attached to it at the point $c=-3/4$, a circular-shaped bulb is visible. This bulb consists of those parameters $c$ for which $P_c$ has an attracting cycle of period 2. This set of parameters is an actual circle, namely that of radius 1/4 around -1.
There are infinitely many other bulbs tangent to the main cardioid: for every rational number $\frac{p}{q}$, with p and q coprime, there is such a bulb that is tangent at the parameter
$c_{\frac{p}{q}} = \frac{1 - \left(e^{2\pi i \frac{p}{q}}-1\right)^2}{4}.$
This bulb is called the $\frac{p}{q}$-bulb of the Mandelbrot set. It consists of parameters which have an attracting cycle of period $q$ and combinatorial rotation number $\frac{p}{q}$. More precisely, the $q$ periodic Fatou components containing the attracting cycle all touch at a common point (commonly called the $\alpha$-fixed point). If we label these components $U_0,\dots,U_{q-1}$ in counterclockwise orientation, then $P_c$ maps the component $U_j$ to the component $U_{j+p\,(\operatorname{mod}\,q)}$.
The change of behavior occurring at $c_{\frac{p}{q}}$ is known as a bifurcation: the attracting fixed point "collides" with a repelling period-q cycle. As we pass through the bifurcation parameter into the $\frac{p}{q}$-bulb, the attracting fixed point turns into a repelling fixed point (the $\alpha$-fixed point), and the period-q cycle becomes attracting.
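The tangent points $c_{p/q}$ can be evaluated directly from the formula above; a small sketch (my own, assuming Python's cmath module):
# Parameter where the p/q-bulb is attached to the main cardioid:
# c_{p/q} = (1 - (e^{2 pi i p/q} - 1)^2) / 4.
import cmath

def bulb_root(p, q):
    mu = cmath.exp(2j * cmath.pi * p / q)
    return (1 - (mu - 1) ** 2) / 4

print(bulb_root(1, 2))  # (-0.75+0j): the period-2 disk touches the cardioid here
print(bulb_root(1, 3))  # attachment point of the period-3 bulb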
Hyperbolic components
All the bulbs we encountered in the previous section were interior components of the Mandelbrot set in which the maps $P_c$ have an attracting periodic cycle. Such components are called hyperbolic components.
It is conjectured that these are the only interior regions of $M$. This problem, known as density of hyperbolicity, may be the most important open problem in the field of complex dynamics.
Hypothetical non-hyperbolic components of the Mandelbrot set are often referred to as "queer" components.
For real quadratic polynomials, this question was answered positively in the 1990s independently by Lyubich and by Graczyk and Świątek. (Note that hyperbolic components intersecting the real axis
correspond exactly to periodic windows in the Feigenbaum diagram. So this result states that such windows exist near every parameter in the diagram.)
Not every hyperbolic component can be reached by a sequence of direct bifurcations from the main cardioid of the Mandelbrot set. However, such a component can be reached by a sequence of direct
bifurcations from the main cardioid of a little Mandelbrot copy (see below).
Local connectivity
It is conjectured that the Mandelbrot set is locally connected. This famous conjecture is known as MLC (for Mandelbrot Locally Connected). By the work of Adrien Douady and John H. Hubbard, this conjecture would result in a simple abstract "pinched disk" model of the Mandelbrot set. In particular, it would imply the important hyperbolicity conjecture mentioned above.
The celebrated work of Jean-Christophe Yoccoz established local connectivity of the Mandelbrot set at all finitely-renormalizable parameters; that is, roughly speaking those which are contained only
in finitely many small Mandelbrot copies. Since then, local connectivity has been proved at many other points of $M$, but the full conjecture is still open.
The Mandelbrot set is self-similar under magnification in the neighborhoods of the Misiurewicz points. It is also conjectured to be self-similar around generalized Feigenbaum points (e.g. −1.401155
or −0.1528 + 1.0397i), in the sense of converging to a limit set. The Mandelbrot set in general is not strictly self-similar but it is quasi-self-similar, as small slightly different versions of
itself can be found at arbitrarily small scales.
The little copies of the Mandelbrot set are all slightly different, mostly because of the thin threads connecting them to the main body of the set.
Further results
The Hausdorff dimension of the boundary of the Mandelbrot set equals 2, as determined by a result of Mitsuhiro Shishikura. It is not known whether the boundary of the Mandelbrot set has positive planar Lebesgue measure.
In the Blum-Shub-Smale model of real computation, the Mandelbrot set is not computable, but its complement is computably enumerable. However, many simple objects (e.g., the graph of exponentiation)
are also not computable in the BSS model. At present it is unknown whether the Mandelbrot set is computable in models of real computation based on computable analysis, which correspond more closely
to the intuitive notion of "plotting the set by a computer." Hertling has shown that the Mandelbrot set is computable in this model if the hyperbolicity conjecture is true.
Relationship with Julia sets
As a consequence of the definition of the Mandelbrot set, there is a close correspondence between the geometry of the Mandelbrot set at a given point and the structure of the corresponding Julia set.
This principle is exploited in virtually all deep results on the Mandelbrot set. For example, Shishikura proves that, for a dense set of parameters in the boundary of the Mandelbrot set, the Julia
set has Hausdorff dimension two, and then transfers this information to the parameter plane. Similarly, Yoccoz first proves the local connectivity of Julia sets, before establishing it for the
Mandelbrot set at the corresponding parameters. Adrien Douady phrases this principle as
Plough in the dynamical plane, and harvest in parameter space.
Recall that, for every rational number $\frac{p}{q}$, where $p$ and $q$ are relatively prime, there is a hyperbolic component of period $q$ bifurcating from the main cardioid. The part of the Mandelbrot set connected to the main cardioid at this bifurcation point is called the $\frac{p}{q}$-limb. Computer experiments suggest that the diameter of the limb tends to zero like $1/q^2$. The best current estimate known is the famous Yoccoz inequality, which states that the size tends to zero like $1/q$.
A period-q limb will have q − 1 "antennae" at the top of its limb. We can thus determine the period of a given bulb by counting these antennas.
Image gallery of a zoom sequence
The Mandelbrot set shows more intricate detail the closer one looks or magnifies the image, usually called "zooming in". The following example of an image sequence zooming to a selected c value gives an impression of the infinite richness of different geometrical structures, and explains some of their typical rules.
The magnification of the last image relative to the first one is about 10,000,000,000 to 1. Relating to an ordinary monitor, it represents a section of a Mandelbrot set with a diameter of 4 million
kilometres. Its border would show an inconceivable number of different fractal structures.
Start: Mandelbrot set with continuously coloured environment.
1. Gap between the "head" and the "body" also called the "seahorse valley".
2. On the left double-spirals, on the right "seahorses".
3. "Seahorse" upside down, its "body" is composed by 25 "spokes" consisting of 2 groups of 12 "spokes" each and one "spoke" connecting to the main cardioid; these 2 groups can be attributed by some
kind of metamorphosis to the 2 "fingers" of the "upper hand" of the Mandelbrot set, therefore, the number of "spokes" increases from one "seahorse" to the next by 2; the "hub" is a so-called
Misiurewicz point; between the "upper part of the body" and the "tail" a distorted small copy of the Mandelbrot set called satellite may be recognized.
4. The central endpoint of the "seahorse tail" is also a Misiurewicz point.
5. Part of the "tail" — there is only one path consisting of the thin structures that leads through the whole "tail"; this zigzag path passes the "hubs" of the large objects with 25 "spokes" at the
inner and outer border of the "tail"; it makes sure that the Mandelbrot set is a so-called simply connected set, which means there are no islands and no loop roads around a hole.
6. Satellite. The two "seahorse tails" are the beginning of a series of concentric crowns with the satellite in the center.
7. Each of these crowns consists of similar "seahorse tails"; their number increases with powers of 2, a typical phenomenon in the environment of satellites, the unique path to the spiral center
mentioned in zoom step 5 passes the satellite from the groove of the cardioid to the top of the "antenna" on the "head".
8. "Antenna" of the satellite. Several satellites of second order may be recognized.
9. The "seahorse valley" of the satellite. All the structures from the image of zoom step 1 reappear.
10. Double-spirals and "seahorses" - unlike the image of zoom step 2 they have appendices consisting of structures like "seahorse tails"; this demonstrates the typical linking of n+1 different
structures in the environment of satellites of the order n, here for the simplest case n=1.
11. Double-spirals with satellites of second order - analog to the "seahorses" the double-spirals may be interpreted as a metamorphosis of the "antenna".
12. In the outer part of the appendices islands of structures may be recognized; they have a shape like Julia sets J[c]; the largest of them may be found in the center of the "double-hook" on the
right side.
13. Part of the "double-hook".
14. At first sight, these islands seem to consist of infinitely many parts like Cantor sets, as is actually the case for the corresponding Julia set J[c]. Here they are connected by tiny structures
so that the whole represents a simply connected set. These tiny structures meet each other at a satellite in the center that is too small to be recognized at this magnification. The value of c
for the corresponding J[c] is not that of the image center but, relative to the main body of the Mandelbrot set, has the same position as the center of this image relative to the satellite shown
in zoom step 7.
For general families of holomorphic functions, the boundary of the Mandelbrot set generalizes to the bifurcation locus, which is a natural object to study even when the connectedness locus is not useful.
Multibrot sets are bounded sets found in the complex plane for members of the general monic univariate polynomial family of recursions
$z \mapsto z^d + c.$
Other non-analytic mappings
Of particular interest is the tricorn fractal, the connectedness locus of the anti-holomorphic family
$z \mapsto \bar{z}^2 + c.$
The tricorn (also sometimes called the Mandelbar set) was encountered by Milnor in his study of parameter slices of real cubic polynomials. It is not locally connected. This property is inherited by the connectedness locus of real cubic polynomials.
Another non-analytic generalization is the Burning Ship fractal, which is obtained by iterating the mapping
$z \mapsto (|\operatorname{Re}(z)|+i|\operatorname{Im}(z)|)^2 + c.$
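These variant iterations differ from the standard quadratic map only in the update step; a sketch for comparison (my own, using Python complex arithmetic):
# One step of the standard Mandelbrot map and of the two non-analytic
# variants discussed above.
def mandelbrot_step(z, c):
    return z * z + c

def tricorn_step(z, c):
    return z.conjugate() ** 2 + c                     # z -> conj(z)^2 + c

def burning_ship_step(z, c):
    return complex(abs(z.real), abs(z.imag)) ** 2 + c

c = -0.5 + 0.5j
for step in (mandelbrot_step, tricorn_step, burning_ship_step):
    z = 0j
    for _ in range(3):       # three iterations from the critical point
        z = step(z, c)
    print(step.__name__, z)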
Computer drawings
Algorithms :
□ boolean version (draws M-set and its exterior using 2 colours ) = Mandelbrot algorithm
□ discrete (integer) version = level set method (LSM/M ); draws Mandelbrot set and colour bands in its exterior
□ level curves version = draws lemniscates of Mandelbrot set = boundaries of Level Sets
□ decomposition of exterior of Mandelbrot set
• complex potential
□ Hubbard-Douady (real) potential of Mandelbrot set (CPM/M) - radial part of complex potential
□ external angle of Mandelbrot set - angular part of complex potential
□ abstract M-set
• Distance estimation method for Mandelbrot set
• algorithm used to explore interior of Mandelbrot set
□ period of hyperbolic components
□ multiplier of periodic orbit (internal rays(angle) and internal radius )
□ bof61 and bof60
Every algorithm can be implemented in sequential or parallel version. Mirror symmetry can be used to speed-up calculations.
Escape time algorithm
The simplest algorithm for generating a representation of the Mandelbrot set is known as the "escape time" algorithm. A repeating calculation is performed for each
point in the plot area and based on the behaviour of that calculation, a colour is chosen for that pixel.
The x and y locations of each point are used as starting values in a repeating, or iterating calculation (described in detail below). The result of each iteration is used as the starting values for
the next. The values are checked during each iteration to see if they have reached a critical 'escape' condition or 'bailout'. If that condition is reached, the calculation is stopped, the pixel is
drawn, and the next x, y point is examined. For some starting values, escape occurs quickly, after only a small number of iterations. For starting values very close to but not in the set, it may take
hundreds or thousands of iterations to escape. For values within the Mandelbrot set, escape will never occur. The programmer or user must choose how much iteration, or 'depth,' they wish to examine.
The higher the maximum number of iterations, the more detail and subtlety emerge in the final image, but the longer time it will take to calculate the fractal image.
Escape conditions can be simple or complex. Because no complex number with a real or imaginary part greater than 2 can be part of the set, a common bailout is to escape when either coefficient
exceeds 2. A more computationally complex method, but which detects escapes sooner, is to compute the distance from the origin using the Pythagorean theorem, and if this distance exceeds two, the
point has reached escape. More computationally-intensive rendering variations such as Buddhabrot detect an escape, then use values iterated along the way
The colour of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colours
are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition.
For programmers
The definition of the Mandelbrot set, together with its basic properties, suggests a simple algorithm for drawing a picture of the Mandelbrot set. The region of the complex plane we are considering is subdivided into a certain number of pixels. To colour any such pixel, let $c$ be the midpoint of that pixel. We now iterate the critical value $z = 0$ under $P_c$, checking at each step whether the orbit point has modulus larger than 2.
If this is the case, we know that the midpoint does not belong to the Mandelbrot set, and we colour our pixel. (Either we colour it white to get the simple mathematical image or colour it according
to the number of iterations used to get the well-known colourful images). Otherwise, we keep iterating for a certain (large, but fixed) number of steps, after which we decide that our parameter is
"probably" in the Mandelbrot set, or at least very close to it, and colour the pixel black.
In pseudocode, this algorithm would look as follows.
For each pixel on the screen do:
x = x0 = x co-ordinate of pixel
y = y0 = y co-ordinate of pixel
iteration = 0
max_iteration = 1000
while (x*x + y*y <= (2*2) AND iteration < max_iteration )
xtemp = x*x - y*y + x0
y = 2*x*y + y0
x = xtemp
iteration = iteration + 1
if (iteration == max_iteration )
colour = black
colour = iteration
where, relating the pseudocode to $c$, $z$, and $P_c$:
• $z = x + iy$
• $z^2 = x^2 + i2xy - y^2$
• $c = x_0 + iy_0$
and so, as can be seen in the pseudocode in the computation of x and y:
• $x = \operatorname{Re}(z^2+c) = x^2-y^2+x_0$ and $y = \operatorname{Im}(z^2+c) = 2xy+y_0$
To get colourful images of the set, the assignment of a colour to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc). One
practical way to do it, without slowing down the calculations, is to use the number of executed iterations as an entry to a look-up colour palette table initialized at startup. If the colour table
has, for instance, 500 entries, then you can use n mod 500, where n is the number of iterations, to select the colour to use. You can initialize the colour palette matrix in various different ways,
depending on what special feature of the escape behavior you want to emphasize graphically.
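The pseudocode above translates almost line for line into running code. The sketch below (mine, not from the original article) renders a coarse ASCII picture so that no graphics library is assumed:
# Escape-time algorithm: for each pixel midpoint c, iterate z -> z^2 + c
# from z = 0 and count the iterations needed for |z| to exceed 2.
def escape_time(c, max_iter):
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter  # treated as "probably in the set"

WIDTH, HEIGHT, MAX_ITER = 78, 30, 80
palette = " .:-=+*#%@"      # slower escape -> denser character

for row in range(HEIGHT):
    y = 1.2 - 2.4 * row / (HEIGHT - 1)
    line = ""
    for col in range(WIDTH):
        x = -2.2 + 3.2 * col / (WIDTH - 1)
        n = escape_time(complex(x, y), MAX_ITER)
        line += "@" if n == MAX_ITER else palette[n % len(palette)]
    print(line)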
Continuous (smooth) coloring
The Escape Time Algorithm is popular for its simplicity. However, it creates bands of color, which, as a type of aliasing can detract from an image's aesthetic value. This can be improved using the
Normalized Iteration Count Algorithm, which provides a smooth transition of colors between iterations. The algorithm associates a real number with each value of z using the formula
$n+\frac{\ln(2\ln(B))-\ln(\ln(|z|))}{\ln(P)},$
where n is the number of iterations for z, B is the bailout radius (normally 2 for a Mandelbrot set, but it can be changed), and P is the power to which z is raised in the Mandelbrot set equation ($z_{n+1} = z_n^P + c$; P is generally 2). Another formula for this is
$n+1-\frac{\ln(\ln(|z|))}{\ln(2)}.$
Note that this new formula is simpler than the first, but it only works for Mandelbrot sets with a bailout radius of 2 and a power of 2.
While this algorithm is relatively simple to implement (using either formula), there are a few things that need to be taken into consideration. First, the two formulae return a continuous stream of
numbers. However, it is up to the programmer to decide on how the return values will be converted into a color. Some type of method for casting these numbers onto a gradient should be developed.
Second, it is recommended that a few extra iterations are done so that z can grow. If iterations cease as soon as z escapes, there is the possibility that the smoothing algorithm will not work.
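A sketch of the second (power-2, bailout-2) formula, with the extra post-escape iterations recommended above (this implementation is mine, not from the article):
# Normalized iteration count for z -> z^2 + c with bailout radius 2:
# nu = n + 1 - ln(ln|z|) / ln 2.
import math

def smooth_iteration_count(c, max_iter=200, extra=3):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            for _ in range(extra):   # let |z| grow so ln(ln|z|) is stable
                z = z * z + c
                n += 1
            return n + 1 - math.log(math.log(abs(z))) / math.log(2)
    return float(max_iter)           # orbit never escaped

print(smooth_iteration_count(0.5))   # escapes: returns a non-integer count
print(smooth_iteration_count(1j))    # bounded orbit: returns max_iter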
Distance estimates
One can compute the distance from a point $c$ (in the exterior or the interior) to the nearest point on the boundary of the Mandelbrot set.
Exterior distance estimation
The proof of the connectedness of the Mandelbrot set in fact gives a formula for the uniformizing map of the complement of $M$ (and the derivative of this map). By the Koebe 1/4 theorem, one can then estimate the distance between the midpoint of our pixel and the Mandelbrot set up to a factor of 4.
In other words, provided that the maximal number of iterations is sufficiently high, one obtains a picture of the Mandelbrot set with the following properties:
1. Every pixel which contains a point of the Mandelbrot set is colored black.
2. Every pixel which is colored black is close to the Mandelbrot set.
The distance estimate of a pixel c (a complex number) from the Mandelbrot set is given by
$b=\lim_{n \to \infty} 2\cdot\ln\left(\left|P_c^{\circ n}(c)\right|\right)\cdot\frac{\left|P_c^{\circ n}(c)\right|}{\left|\frac{\partial}{\partial c} P_c^{\circ n}(c)\right|}$
$P_c(z)$ stands for the complex quadratic polynomial;
$P_c^{\circ n}(c)$ stands for n iterations of $P_c(z) \to z$ or $z^2 + c \to z$, starting with $z=c$: $P_c^{\circ 0}(c) = c$, $P_c^{\circ n+1}(c) = P_c^{\circ n}(c)^2 + c$;
$\frac{\partial}{\partial c} P_c^{\circ n}(c)$ is the derivative of $P_c^{\circ n}(c)$ with respect to c. This derivative can be found by starting with $\frac{\partial}{\partial c} P_c^{\circ 0}(c) = 1$ and then $\frac{\partial}{\partial c} P_c^{\circ n+1}(c) = 2\cdot P_c^{\circ n}(c)\cdot\frac{\partial}{\partial c} P_c^{\circ n}(c) + 1$. This can easily be verified by using the chain rule for the derivative. From a mathematician's point of view, this formula only works in the limit where n goes to infinity, but very reasonable estimates can be found with just a few additional iterations after the main loop exits.
Once b is found, by the Koebe 1/4-theorem, we know there's no point of the Mandelbrot set with distance from c smaller than b/4.
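A compact sketch of this recurrence in Python; the iteration cap and the large bailout radius are assumptions (a big bailout keeps ln|z| stable), and the function returns None for points that never escape:

import math

def exterior_distance_estimate(c, max_iter=1000, bailout=1e6):
    z = c      # P_c^{o 0}(c) = c
    dz = 1.0   # d/dc P_c^{o 0}(c) = 1
    for _ in range(max_iter):
        if abs(z) > bailout:
            break
        dz = 2 * z * dz + 1   # derivative recurrence, using the previous z
        z = z * z + c
    else:
        return None  # never escaped within the budget: treat c as inside
    return 2 * abs(z) * math.log(abs(z)) / abs(dz)

As stated above, dividing the returned b by 4 then gives a radius around c that is guaranteed to contain no point of the set.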
Interior distance estimation
It is also possible to estimate the distance of an interior point (one whose orbit converges to a periodic cycle) to the boundary of the Mandelbrot set. The estimate is given by
$b=\frac{1-\left|\frac{\partial}{\partial z}P_c^{\circ p}(z_0)\right|^2}{\left|\frac{\partial}{\partial c}\frac{\partial}{\partial z}P_c^{\circ p}(z_0) + \frac{\partial}{\partial z}\frac{\partial}{\partial z}P_c^{\circ p}(z_0)\cdot\frac{\frac{\partial}{\partial c}P_c^{\circ p}(z_0)}{1-\frac{\partial}{\partial z}P_c^{\circ p}(z_0)}\right|},$
where
$p$ is the period,
$c$ is the point to be estimated,
$P_c(z)$ is the complex quadratic polynomial $P_c(z)=z^2 + c$,
$P_c^{\circ p}(z_0)$ is $p$ compositions of $P_c(z) \to z$, starting with $P_c^{\circ 0}(z) = z_0$,
$z_0$ is any of the $p$ points that make up the attractor of the iterations of $P_c(z) \to z$ starting with $P_c^{\circ 0}(z) = c$; $z_0$ satisfies $z_0 = P_c^{\circ p}(z_0)$,
$\frac{\partial}{\partial c}\frac{\partial}{\partial z}P_c^{\circ p}(z_0)$, $\frac{\partial}{\partial z}\frac{\partial}{\partial z}P_c^{\circ p}(z_0)$, $\frac{\partial}{\partial c}P_c^{\circ p}(z_0)$ and $\frac{\partial}{\partial z}P_c^{\circ p}(z_0)$ are various derivatives of $P_c^{\circ p}(z)$, evaluated at $z_0$. Given $p$ and $z_0$, $P_c^{\circ p}(z_0)$ and its derivatives can be evaluated by:
• \begin{align} \frac{\partial}{\partial c}\frac{\partial}{\partial z}P_c^{\circ 0}(z_0) &= 0 \\ \frac{\partial}{\partial z}\frac{\partial}{\partial z}P_c^{\circ 0}(z_0) &= 0 \\ \frac{\partial}{\partial c}P_c^{\circ 0}(z_0) &= 0 \\ \frac{\partial}{\partial z}P_c^{\circ 0}(z_0) &= 1 \\ P_c^{\circ 0}(z_0) &= z_0 \end{align}
• \begin{align} \frac{\partial}{\partial c}\frac{\partial}{\partial z}P_c^{\circ n+1}(z_0) &= 2\cdot\left(\frac{\partial}{\partial c}P_c^{\circ n}(z_0)\cdot\frac{\partial}{\partial z}P_c^{\circ n}(z_0) + P_c^{\circ n}(z_0)\cdot\frac{\partial}{\partial c}\frac{\partial}{\partial z}P_c^{\circ n}(z_0)\right) \\ \frac{\partial}{\partial z}\frac{\partial}{\partial z}P_c^{\circ n+1}(z_0) &= 2\cdot\left(\left(\frac{\partial}{\partial z}P_c^{\circ n}(z_0)\right)^2 + P_c^{\circ n}(z_0)\cdot\frac{\partial}{\partial z}\frac{\partial}{\partial z}P_c^{\circ n}(z_0)\right) \\ \frac{\partial}{\partial c}P_c^{\circ n+1}(z_0) &= 2\cdot P_c^{\circ n}(z_0)\cdot\frac{\partial}{\partial c}P_c^{\circ n}(z_0) + 1 \\ \frac{\partial}{\partial z}P_c^{\circ n+1}(z_0) &= 2\cdot P_c^{\circ n}(z_0)\cdot\frac{\partial}{\partial z}P_c^{\circ n}(z_0) \\ P_c^{\circ n+1}(z_0) &= P_c^{\circ n}(z_0)^2 + c \end{align}
Analogous to the exterior case, once b is found, we know that all points within the distance of b/4 from c are inside the Mandelbrot set.
There are two practical problems with the interior distance estimate: first, we need to find $z_0$ precisely, and second, we need to find $p$ precisely. The problem with $z_0$ is that the convergence
to $z_0$ by iterating $P_c\left(z\right)$ requires, theoretically, an infinite number of operations. The problem with period is that, sometimes, due to rounding errors, a period is falsely identified
to be an integer multiple of the real period (e.g., a period of 86 is detected, while the real period is only 43=86/2). In such case, the distance is overestimated, i.e., the reported radius could
contain points outside the Mandelbrot set.
One way to improve calculations is to find out beforehand whether the given point lies within the cardioid or in the period-2 bulb.
To prevent having to do huge numbers of iterations for other points in the set, one can do "periodicity checking"—which means check if a point reached in iterating a pixel has been reached before. If
so, the pixel cannot diverge, and must be in the set. This is most relevant for fixed-point calculations, where there is a relatively high chance of such periodicity—a full floating-point (or
higher-accuracy) implementation would rarely go into such a period.
Periodicity checking is, of course, a trade-off: The need to remember points costs memory and data management instructions, whereas it saves computational instructions.
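A naive sketch of the check in Python, assuming exact equality when comparing stored points — which, as noted above, is mainly realistic for fixed-point arithmetic, since floating-point orbits rarely repeat exactly:

def escape_time_with_periodicity(c, max_iter=10000):
    z = 0
    seen = set()            # remembered points cost memory but save iterations
    for n in range(max_iter):
        if abs(z) > 2:
            return n        # diverged after n iterations
        if z in seen:
            return None     # orbit entered a cycle, so c is in the set
        seen.add(z)
        z = z * z + c
    return None             # budget exhausted: treated as in the set

In practice implementations often store only every k-th point, or use a cycle-detection scheme such as Brent's algorithm, to keep the memory cost bounded.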
See also
Further reading
External links
|
{"url":"http://www.reference.com/browse/mandelbrot+set","timestamp":"2014-04-17T11:11:05Z","content_type":null,"content_length":"128118","record_id":"<urn:uuid:c5d06ee4-3111-4b5c-9b89-9c785778f0bc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Voltage Dividers
Resistors, Capacitors, RC combinations
A general method for measuring high voltages is to use a voltage divider composed of two impedances in series. The ratio of impedance is such that the voltage across one of the elements is some
convenient fraction (like 1/1000) of the voltage across the combination.
[figure here]
To make the power consumption of the divider as low as possible, the impedances are quite large: tens of gigaohms (1 gigaohm = 1e9 ohms) might be used for measuring megavolt level signals (resulting in a current of a few tens of microamps). In an ideal world, the impedances would be pure resistors. The physically large size and the high impedances of high voltage equipment mean that parasitic inductances and capacitances can be significant. Even at 60 Hz, a 10 pF parasitic capacitance has an impedance of roughly 260 megohms. 10 pF is roughly the capacitance of a 10 cm radius sphere (8" diameter). If the resistor string is 2 meters long, its inductance is probably several microhenries, not particularly significant at power line frequencies, but a significant concern at the higher frequencies encountered in fast impulse work. Measuring voltages or potentials with any AC component is greatly affected by these parasitic reactances, and much of high quality divider design goes to minimizing or compensating for
their effect.
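The magnitudes quoted above are easy to check with a short script; the specific resistor, capacitor, and voltage values below are illustrative assumptions rather than figures from this page:

import math

R_high = 20e9     # upper arm: tens of gigaohms (assumed)
R_low = 20e6      # lower arm, chosen for roughly a 1/1000 ratio (assumed)
V_in = 1e6        # 1 MV applied (assumed)
f = 60.0          # power-line frequency, Hz
C_par = 10e-12    # 10 pF parasitic capacitance

ratio = R_low / (R_high + R_low)
current = V_in / (R_high + R_low)
Z_par = 1.0 / (2 * math.pi * f * C_par)

print("divider ratio          : %.5f" % ratio)               # ~0.001
print("string current at 1 MV : %.0f uA" % (current * 1e6))  # a few tens of microamps
print("|Z| of 10 pF at 60 Hz  : %.0f Mohm" % (Z_par / 1e6))  # ~265 Mohm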
[figure here]
For making AC measurements, purely capacitive dividers are popular. A fairly small capacitor forms the upper arm of the divider, and a larger, lower voltage capacitor forms the bottom. High pressure
gas capacitors are popular for the high voltage arm. A high pressure gas capacitor can provide a reasonable capacitance with a high voltage rating in a physically small package, which is important
for measurements on fast transients.
Thermal effects
Small as the current is through most high value resistive dividers, it may constitute a significant amount of power, which goes into heating up the resistive elements. This heating will cause a change in the value of the resistor, changing the overall ratio of the divider.
Classic standards work, as reported in Craggs & Meek, used manganin resistors. Manganin has an extremely low temperature coefficient of resistance (1.5 ppm/deg C) (see Resistance Wire Table) compared to Nichrome (13 ppm) or Copper ( ppm).
Immersing the entire resistive divider in oil or rapidly circulating dielectric gas (e.g. SF6 or dry air) also ensures that all components are at the same temperature, so that, while the absolute
values might change, the ratios will remain constant, for DC at least. Resistance value changes will change the parasitic RC time constants, changing the frequency response.
Voltage Coefficient of Resistance
Some resistive materials show a change in the resistivity as a function of the impressed electric field strength. This would manifest itself as a change in the resistance as the voltage changes. A
long string of individual resistors, each run at a relatively low voltage, should not show this effect.
Safety Considerations
In the classic series resistor method for measuring voltage, the high value resistor string is in series with a sensitive current measuring meter (typically a d'Arsonval meter). If the resistor were
to fail shorted, or flash over, the high voltage would appear across the meter, possibly producing a personnel safety hazard, as well as destroying the meter. A simple safety precaution is a spark
gap across the meter, set for a kilovolt or so, that will arc over in case of a series resistor failure.
Another means is to measure current through the high value resistor by measuring the voltage across a resistor with a high impedance meter.
Good, reliable ground connections for the low end of the divider are essential. Consider the circuit shown below:
[figure here]
If the ground connection at A is broken, the full high voltage will appear on the measuring device, limited by the inevitable internal arcing or failure. Fortunately, the power will most likely be
limited by the high series resistance of the divider.
Physical construction of voltage dividers
Copyright 1998, Jim Lux / vdiv.htm / 8 March 1998 / Back to HV Home / Back to home page / Mail to Jim
|
{"url":"http://home.earthlink.net/~jimlux/hv/vdiv.htm","timestamp":"2014-04-16T05:10:42Z","content_type":null,"content_length":"5168","record_id":"<urn:uuid:2db08985-8364-40b9-ad94-a85f168819d2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Permuting sparse arrays
Edward C. Jones edcjones@comcast....
Wed Jan 25 09:12:23 CST 2012
I have a vector of bits where there are many more zeros than ones. I
store the array as a sorted list of the indexes where the bit is one.
If the bit array is (0, 1, 0, 0, 0, 1, 1), it is stored as (1, 5, 6).
If the bit array, b, has length n, and p is a random permutation of
arange(n), then I can permute the bit array using fancy indexing: b[p].
Is there some neat trick I can use to permute an array while leaving it
in the list-of-indexes form? Currently I am doing it with a Python loop
but I am looking for a faster way.
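One candidate trick (a sketch, not a reply from the list): apply the inverse permutation to the stored indexes, since b[p][i] equals 1 exactly when p[i] is one of the stored positions:

import numpy as np

n = 7
idx = np.array([1, 5, 6])        # positions of the ones in b
p = np.random.permutation(n)     # the permutation to apply

inv_p = np.argsort(p)            # inverse permutation: inv_p[p[i]] = i
new_idx = np.sort(inv_p[idx])    # index form of b[p], kept sorted

# Sanity check against the dense fancy-indexing version.
b = np.zeros(n, dtype=int)
b[idx] = 1
assert np.array_equal(np.flatnonzero(b[p]), new_idx)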
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059991.html","timestamp":"2014-04-17T12:53:38Z","content_type":null,"content_length":"3002","record_id":"<urn:uuid:96313287-2d5c-4be0-a540-7e23874b1a6f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computer science
From HaskellWiki
Wikipedia's Computer science.
Martín Escardó maintains a Computer science page, being both detailed and comprehensive.
Structure and Interpretation of Computer Programs (by Harold Abelson and Gerald Jay Sussman with Julie Sussman, foreword by Alan J. Perlis).
Computability theory
Wikipedia's Computability theory.
An interesting area related to computability theory: Exact real arithmetic. For me, it was surprising how it connected problems in mathematical analysis, arithmetic and computability theory.
|
{"url":"http://www.haskell.org/haskellwiki/index.php?title=Computer_science&oldid=3756","timestamp":"2014-04-19T00:02:53Z","content_type":null,"content_length":"14834","record_id":"<urn:uuid:6d01d402-0845-4fef-9338-bc218c3e032e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
time reversal invariance
If the physical laws of a system are time reversal invariant that means essentially that if you were watching a video tape of the system then you would not be able to tell whether it was being played
forward or backward. We can represent that concept more mathematically by saying that if a particular motion q(t) obeys the physical laws (e.g. is a solution to the equation of motion) then q(-t)
will also obey the physical laws. You could think of this geometrically as reflecting the whole problem across the axis t = 0. If a something is invariant under time reversal it is said to have or
obey time reversal symmetry, often denoted T. A system that breaks time reversal symmetry is sometimes said to have an "arrow of time".
An Example of Time Reversal Invariance
A simple example of a system with time reversal invariance is two (idealized) billiards balls colliding on a pool table. Imagine you're playing a game of 8 ball and just the 8 ball is left. You shoot
the cue ball at the 8 ball and you aim it dead center, they collide, the cue ball comes to a stop and the 8 ball goes sailing off. Good, now freeze that picture in your mind, and then set it going in
reverse. Notice now the 8 ball comes in, collides with the cue ball, comes to a stop, and the cue ball goes sailing off. The time reversed picture looks just as reasonable as the one for forward
moving time. In fact, if you didn't know anything about the game and were just looking at the balls, you might assume someone had shot the 8 ball at the cue ball. There are many other examples one
can think of that are time reversal invariant, like a mass on a spring or a (perfectly elastic) ball bouncing up and down off the ground.
An Example of Time Reversal Symmetry Breaking
Probably the simplest example I can think of for a system that seems to break time reversal invariance is a box sliding along a level floor. As the box slides, friction slows it down and it gradually
comes to a stop. Good, now freeze that picture and play it in reverse. The box is just sitting there and then suddenly it starts moving and keeps speeding up. Now that clearly "just ain't natural".
Put more mathematically, that's not a solution to the equation of motion. Similarly, in the example of the bouncing ball, if it's not perfectly elastic then it will bounce to a lower and lower height
each time. Again, playing that backward would not look right.
The Role of Time Reversal Invariance in Physics
In the days of classical physics, before Planck, Einstein, and the rest, it was generally thought that microscopic physics was time reversal invariant, and that this was one of the fundamental
properties of nature. Elastic collisions, Newtonian gravity, and classical electromagnetism are all time reversal invariant, and these were generally the types of mechanisms that they thought
governed the world at the fundamental level. The subsequent development of the standard model of particle physics revealed that nature is not time reversal invariant. Indeed, experiments have been
done to measure CP violation, which is equivalent to the violation of time reversal symmetry, T, in the standard model. In that theory CPT symmetry is still upheld, though there are currently
experiments looking for violations of that, which would indicate physics beyond the standard model.
If you have been reading closely, you will notice that earlier I gave several examples of simple, everyday situations that seem to violate time reversal invariance, no particle physics necessary. As
I said it was thought that microscopic physics was time invariant, but when you talk about macroscopic objects you get into statistical physics and thermodynamics. It turns out that if you talk about
a large system and you look with a sort of fuzzy lens that can only really tell what the bulk is doing and not each individual piece, then you lose time reversal symmetry. That is to say that even if
the individual pieces obey time reversal invariance, when you look at the whole group in this "fuzzy" way and try to work out its statistical properties, you will get rules that break time reversal symmetry.
Going back to the pool analogy, suppose you are going to break at the beginning of the game. You shoot the pool ball into the large, ordered, triangular group that the rest of the balls make up and
they scatter everywhere. Now, if you ran the video tape backward you would see them all coming together, but each collision would obey Newton's laws, nothing fishy yet. On the other hand, if you were
looking at this with your "fuzzy" glasses and asking the statistical question, "How likely is it that a scattered group of balls will all come together to form a large, ordered, stationary group with
only one (or a few) moving?" the answer would be "Not bloody likely!"
It is these statistical sorts of laws that govern macroscopic objects like an actual rubber ball or billiards ball or a box sliding across the floor. Thermodynamics deals in these sorts of
macroscopic, statistical situations and includes the second law of thermodynamics, which says that entropy (which often can be roughly thought of as disorder) in a closed system can never decrease.
That law already manifestly breaks time reversal symmetry, because entropy can increase in time but never decrease indicating which direction is "forward" in time. For example, if you take a pot full
of hot water and a pot full of cold and put them in contact, thermodynamics says that the hot water will cool and the cold water will warm, which increases the entropy of the system, until they reach
equilibrium. You could put a thermometer in each to watch this happen. If you played a video of that in reverse, though, you'd notice it was very strange when one of the two pots started getting
hotter and the other colder with no outside influence. This is sometimes called the thermodynamic arrow of time.
|
{"url":"http://everything2.com/title/time+reversal+invariance","timestamp":"2014-04-18T08:08:05Z","content_type":null,"content_length":"29695","record_id":"<urn:uuid:25b80016-8d88-47f6-87e4-c7f7239ca69e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Bitcoin
A hash algorithm turns an arbitrarily-large amount of data into a fixed-length hash. The same hash will always result from the same data, but modifying the data by even one bit will completely change
the hash. Like all computer data, hashes are large numbers, and are usually written as hexadecimal.
Bitcoin uses the SHA-256 hash algorithm to generate verifiably "random" numbers in a way that requires a predictable amount of CPU effort. Generating a SHA-256 hash with a value less than the current
target solves a block and wins you some coins.
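Both properties are easy to see in a few lines of Python; the toy target below is an assumption chosen so the search finishes almost instantly, and it is enormously easier than any real difficulty target:

import hashlib

# Changing a single character of the input completely changes the hash.
print(hashlib.sha256(b"hello").hexdigest())
print(hashlib.sha256(b"hellp").hexdigest())

# Toy proof-of-work: find a nonce whose hash, read as a 256-bit number, is below a target.
target = 2 ** 240   # toy target: only the top 16 bits need to be zero
nonce = 0
while int.from_bytes(hashlib.sha256(b"block header" + str(nonce).encode()).digest(), "big") >= target:
    nonce += 1
print("found nonce:", nonce)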
Also see http://en.wikipedia.org/wiki/Cryptographic_hash
|
{"url":"https://en.bitcoin.it/wiki/Hash","timestamp":"2014-04-18T02:58:05Z","content_type":null,"content_length":"15070","record_id":"<urn:uuid:646d68ec-7f70-46b4-be00-7609fdc4b36c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: MIXLOGIT: marginal effects
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
Re: st: MIXLOGIT: marginal effects
From Arne Risa Hole <arnehole@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: MIXLOGIT: marginal effects
Date Mon, 6 Feb 2012 17:25:24 +0000
Thanks Maarten, I take your point, but as Richard says there is
nothing stopping you from calculating marginal effects at different
values of the explanatory variables (although admittedly it's rarely
done in practice). Also the LPM is fine as an alternative to binary
logit/probit but what about multinomial models?
On 6 February 2012 16:56, Maarten Buis <maartenlbuis@gmail.com> wrote:
> On Mon, Feb 6, 2012 at 3:03 PM, Arne Risa Hole wrote:
>> I disagree when it comes to marginal effects: I personally find them
>> much easier to interpret than odds-ratios. In the end the choice will
>> depend on your discipline and personal preference.
> My point is that it is fine if you prefer to think in terms of
> differences in probabilities, but in that case just go for a linear
> probability model. If you are only going to report marginal effects
> then you will summarize the effect size with one additive coefficient,
> which is just equivalent to a linear effect. By going through the
> "non-linear model-marginal effects" route you are doing indirectly
> what you can do directly with a linear probability model. Direct
> arguments are clearer than indirect arguments, so they should be
> preferred.
> Even if you are uncomfortable with a linear probability model, the
> "non-linear model-marginal effects" route is still not going to help.
> The non-linear model will circumvent the linearity which is in such
> cases a problem, but then you are undoing the very reason for choosing
> a non-linear model by reporting only marginal effects.
> In short, there are very few cases where I can think of a useful
> application of marginal effects: either you should have estimated a
> linear model in the first place rather than post-hoc "fixing" a
> non-linear one or you are undoing the very non-linearity that was the
> reason for estimating the non-linear model in the first place.
> Hope this clarifies my point,
> Maarten
> --------------------------
> Maarten L. Buis
> Institut fuer Soziologie
> Universitaet Tuebingen
> Wilhelmstrasse 36
> 72074 Tuebingen
> Germany
> http://www.maartenbuis.nl
> --------------------------
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-02/msg00314.html","timestamp":"2014-04-17T06:53:16Z","content_type":null,"content_length":"11055","record_id":"<urn:uuid:5eecb863-0fc0-403c-b645-344e08b40389>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resolving the universal sheaf on the Quot scheme
Let $X$ be a smooth projective variety, and $Quot$ some quot scheme, i.e. it parametrizes quotients of some fixed sheaf $F$ on $X$ of some fixed Hilbert polynomial. There is a universal quotient
sheaf $\mathcal{Q}$ on $X \times Quot$ such that $\mathcal{Q}|_{X \times [E]} = E$. Because $X$ is smooth and $F$ is flat over $Quot$, there is a finite resolution of the universal quotient $\mathcal
{Q}$ by finite dimensional locally free sheaves on $X \times Quot$.
Recall that by construction, $Quot$ enjoys a Plucker embedding into a Grassmannian of sections of $F(n)$ for some large $n$.
(How) can the resolution of $\mathcal{Q}$ be described explicitly in terms of restrictions of tautological bundles on the Grassmannian?
|
{"url":"http://mathoverflow.net/questions/116540/resolving-the-universal-sheaf-on-the-quot-scheme","timestamp":"2014-04-21T05:19:42Z","content_type":null,"content_length":"44913","record_id":"<urn:uuid:e4d5de4d-711b-4535-a346-6b59a5c95bff>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math for Craft Design
by Linda
Have you ever wanted to create an attractively spaced stripe, or tried to calculate how long an item should be once you've gotten the width figured out? If so, you've come to the right place.
There are three mathematical concepts I've found extremely helpful when designing a new pattern. Don't let the math part scare you off.
They're called Fibonacci numbers, Lucas numbers and the Golden Ratio. The Golden Ratio is also called phi. They're closely related, particularly Fibonacci numbers and the Golden Ratio. Never fear, I
won't get into all the gory details here. That would assume I fully understand it all myself. :)
Here are the first few Fibonacci and Lucas numbers, and the Golden Ratio. Follow the appropriate link for a short explanation and examples of how to use each of them.
Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233...
Lucas numbers: 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199...
Golden Ratio: approximately 1.618034; its reciprocal, roughly 0.618034, is the proportion usually quoted in design work.
So, why use these particular numbers? Essentially, a pattern or proportion that's based on them will appear more attractive to the human eye than one that's not. This is partially true because
they're quite common in nature.
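As a rough illustration (the scarf width and colours below are invented for the example, not taken from this page), consecutive Fibonacci numbers can simply be scaled to fill whatever total size you need:

def fibonacci(count):
    seq = [1, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])
    return seq[:count]

stripes = fibonacci(6)          # 1, 1, 2, 3, 5, 8
total_width_cm = 30.0           # overall width the stripes must fill
scale = total_width_cm / sum(stripes)

for i, s in enumerate(stripes):
    colour = "blue" if i % 2 == 0 else "cream"
    print("stripe %d: %.1f cm %s" % (i + 1, s * scale, colour))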
• Knitters who love math may want to try their hand at my Moebius Scarf - a loop with a true half twist, and cozy too! With photo.
|
{"url":"http://www.planetshoup.com/easy/tips/math.shtml","timestamp":"2014-04-17T02:02:17Z","content_type":null,"content_length":"10971","record_id":"<urn:uuid:437cb019-4781-4785-bd48-9fb9e1db859e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Images of the Possible - Geometer Sketch Pad
I am not a math person, and so I was not excited about using the Geometer Sketch Pad program. My partners and I laughed about our skills and tackled what we thought would be the easiest assignment,
LittleOnes. We were successful and progressed to the fourth grade levels. Some of the concepts that were covered even at the second grade level were far beyond what my current students know and can
do, and certainly beyond what I could have done at the elementary levels. I took geometry in high school and we were remembering the difficulty of measuring angles with a protractor. This program
gives kids the ability to look at the principles and the functions, without being encumbered with the slow, cumbersome and inaccurate tools we used. We all liked the instant ability to see that no
matter what the angles were, the total of the degrees stayed the same. How long would that one principle have taken us to discover in the old way of doing things? I can visualize that it could create
some problems for the teacher. The quicker students would be done with the assigned tasks and would move on to play with the program. It was even a temptation for us, and I am sure that it would be
for students too. As for challenges for the student (and to some degree for the teacher too) it offers a temptation to use the calculator to do some of the tasks that we did ourselves. Measuring
lines and angles and doing the math all have value too. Not everything should be that easy, so there would have to be some balance between learning the basic skills and then letting the calculator do
the work you already know how to do, instead of leaning on the calculator to do the things you DON'T know how to do. Once the basics are mastered, this program would allow even very young students to
move through geometry very quickly, and thus cover much more ground in a single year than we would ever have imagined. We all laughed when Joan suggested that perhaps this would have changed our
career trajectories, but after playing with this, this idea doesn't sound all that far fetched.
|
{"url":"http://blog.lib.umn.edu/hime0001/pimlott/2006/11/images_of_the_possible_geomete.html","timestamp":"2014-04-19T01:49:42Z","content_type":null,"content_length":"5542","record_id":"<urn:uuid:2c64e4e2-428b-4ea3-ac88-35e150e944ef>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Smart" Calculating
I'm trying to do some work with some simple trigonometric functions. (Law of Sines and Cosines knowledge is useful here) I have a triangle:
[ASCII sketch of the triangle, mangled in extraction]
(I apologize for the crudity of this triangle!)
To label, the top side is c, and the other two sides are a and b. The top-left angle is A, the top-right angle is B, and the other is C.
The angle measure of C is 45°, and its opposite side c is unknown. The other two sides are 2.0 units in length. From that I know that the other angles are going to be 67.5°. The problem is that when I try
to do this in C++, I get c = 1.53073... as expected, but when I calculate angle A or B the answer always comes out as 67.500008°, even though I know that the angle measure of each is really 67.5°.
I calculate it this way:
c = sqrt(pow(a, 2.0f) + pow(b, 2.0f) - 2.0f*a*b*cosf(C * PI/180.0f));  // law of cosines
A = asinf((a*sinf(C * PI/180.0f))/c) * 180.0f/PI;                      // law of sines
B = asinf((b*sinf(C * PI/180.0f))/c) * 180.0f/PI;
The PI/180 and 180/PI are necessary because the functions do the calculation in radians. And PI is just some constant float I made of 3.14159265358979... and clearly that is just an estimation of PI.
What I want to know is how do I calculate it so that the angle measure of A and B will be 100% accurate?
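For what it's worth, a quick numerical check (sketched here in Python rather than C++) suggests the error comes from single-precision floats and the truncated value of PI rather than from the formulas themselves; in double precision the same computation agrees with 67.5 to roughly 14 significant digits, although no binary floating-point result can be exactly 67.5:

import math

a = b = 2.0
C = math.radians(45.0)

c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))   # law of cosines
A = math.degrees(math.asin(a * math.sin(C) / c))         # law of sines

print(c)   # ~1.5307337294603591
print(A)   # ~67.5, off by on the order of 1e-14 degrees

In C++ the analogous change would be to use double, cos, sin, asin and a full-precision PI constant such as acos(-1.0).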
|
{"url":"http://www.cplusplus.com/forum/general/85110/","timestamp":"2014-04-18T18:21:37Z","content_type":null,"content_length":"12164","record_id":"<urn:uuid:696471f7-05c6-4a42-b3d6-a0fe9ac62527>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Metaharmonic Lattice Point Theory
Historical Aspects
Preparatory Ideas and Concepts
Tasks and Perspectives
Basic Notation
Cartesian Nomenclature
Regular Regions
Spherical Nomenclature
Radial and Angular Functions
One-Dimensional Auxiliary Material
Gamma Function and Its Properties
Riemann–Lebesgue Limits
Fourier Boundary and Stationary Point Asymptotics
Abel–Poisson and Gauss–Weierstrass Limits
One-Dimensional Euler and Poisson Summation Formulas
Lattice Function
Euler Summation Formula for the Laplace Operator
Riemann Zeta Function and Lattice Function
Poisson Summation Formula for the Laplace Operator
Euler Summation Formula for Helmholtz Operators
Poisson Summation Formula for Helmholtz Operators
Preparatory Tools of Analytic Theory of Numbers
Lattices in Euclidean Spaces
Basic Results of the Geometry of Numbers
Lattice Points Inside Circles
Lattice Points on Circles
Lattice Points Inside Spheres
Lattice Points on Spheres
Preparatory Tools of Mathematical Physics
Integral Theorems for the Laplace Operator
Integral Theorems for the Laplace–Beltrami Operator
Tools Involving the Laplace Operator
Radial and Angular Decomposition of Harmonics
Integral Theorems for the Helmholtz–Beltrami Operator
Radial and Angular Decomposition of Metaharmonics
Tools Involving Helmholtz Operators
Preparatory Tools of Fourier Analysis
Periodical Polynomials and Fourier Expansions
Classical Fourier Transform
Poisson Summation and Periodization
Gauss–Weierstrass and Abel–Poisson Transforms
Hankel Transform and Discontinuous Integrals
Lattice Function for the Iterated Helmholtz Operator
Lattice Function for the Helmholtz Operator
Lattice Function for the Iterated Helmholtz Operator
Lattice Function in Terms of Circular Harmonics
Lattice Function in Terms of Spherical Harmonics
Euler Summation on Regular Regions
Euler Summation Formula for the Iterated Laplace Operator
Lattice Point Discrepancy Involving the Laplace Operator
Zeta Function and Lattice Function
Euler Summation Formulas for Iterated Helmholtz Operators
Lattice Point Discrepancy Involving the Helmholtz Operator
Lattice Point Summation
Integral Asymptotics for (Iterated) Lattice Functions
Convergence Criteria and Theorems
Lattice Point-Generated Poisson Summation Formula
Classical Two-Dimensional Hardy–Landau Identity
Multi-Dimensional Hardy–Landau Identities
Lattice Ball Summation
Lattice Ball-Generated Euler Summation Formulas
Lattice Ball Discrepancy Involving the Laplacian
Convergence Criteria and Theorems
Lattice Ball-Generated Poisson Summation Formula
Multi-Dimensional Hardy–Landau Identities
Poisson Summation on Regular Regions
Theta Function and Gauss–Weierstrass Summability
Convergence Criteria for the Poisson Series
Generalized Parseval Identity
Minkowski’s Lattice Point Theorem
Poisson Summation on Planar Regular Regions
Fourier Inversion Formula
Weighted Two-Dimensional Lattice Point Identities
Weighted Two-Dimensional Lattice Ball Identities
Planar Distribution of Lattice Points
Qualitative Hardy–Landau Induced Geometric Interpretation
Constant Weight Discrepancy
Almost Periodicity of the Constant Weight Discrepancy
Angular Weight Discrepancy
Almost Periodicity of the Angular Weight Discrepancy
Radial and Angular Weights
Non-Uniform Distribution of Lattice Points
Quantitative Step Function Oriented Geometric Interpretation
|
{"url":"http://www.psypress.com/books/details/9781439861844/","timestamp":"2014-04-19T14:03:23Z","content_type":null,"content_length":"36180","record_id":"<urn:uuid:aaa83bca-9aba-41b7-923c-389a37cc276b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boulder, CO ACT Tutor
Find a Boulder, CO ACT Tutor
...After graduation I was hired on at John Evans Middle School in Greeley, CO where I taught physical science. After a year there I moved to Boulder and I now am a life science teacher at
Thornton Middle School. I believe that I can get any child or person to learn; I just need to find the correct methods of teaching that work for that particular client.
18 Subjects: including ACT Math, chemistry, physics, anatomy
...But my love for teaching drew me back into math and am presently a first-year Ph.D Mathematics student at CU-Boulder. I have had great success with students at all ages; partly because I know
my stuff, but mostly because I love to teach. I had a 780 on the SAT in 2000 and recently got a perfect...
26 Subjects: including ACT Math, chemistry, physics, calculus
...My chosen field of study is Solar Physics, so I have special expertise in all things Sun related. Does the thought of preparing for standardized tests fill you with dread? With a little practice
and preparation, I'll show you that the math section problems really aren't that bad!
18 Subjects: including ACT Math, calculus, physics, geometry
...I have always loved problem solving and have used math and algebra since I was a kid. I have been able to use my math and problem solving skills for a long and rewarding career in software
engineering. I am now helping my own kids learn these useful tools for understanding and solving simple and complex math problems.
17 Subjects: including ACT Math, geometry, algebra 1, statistics
...That is the number one reason to work with a tutor when you're preparing-- a good tutor can boost your confidence, helping you understand the way the test is structured and giving you
strategies for how to attack each kind of question. It's not all about test-taking tricks, though. Somewhere al...
31 Subjects: including ACT Math, English, ESL/ESOL, GRE
{"url":"http://www.purplemath.com/boulder_co_act_tutors.php","timestamp":"2014-04-20T19:25:29Z","content_type":null,"content_length":"23854","record_id":"<urn:uuid:5b77ca44-1b91-4858-9a99-df1def31f83f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
So what exactly is a complex number?
Steve Holden steve at holdenweb.com
Tue Sep 4 07:10:22 CEST 2007
Roy Smith wrote:
> Boris Borcic <bborcic at gmail.com> wrote:
>> Complex numbers are like a subclass of real numbers
> I wouldn't use the term "subclass". It certainly doesn't apply in the same
> sense it applies in OOPLs. For example, you can't say, "All complex
> numbers are real numbers". In fact, just the opposite.
> But, it's equally wrong to say, "real numbers are a subclass of complex
> numbers", at least not if you believe in LSP
> (http://en.wikipedia.org/wiki/Liskov_substitution_principle). For example,
> it is true that you can take the square root of all complex numbers. It is
> not, however, true that you can take square root of all real numbers.
That's not true. I suspect what you are attempting to say is that the
complex numbers are closed with respect to the square root operation,
but the reals aren't. Clearly you *can* take the square root of all real
numbers, since a real number *is* also a complex number with a zero
imaginary component. They are mathematically equal and equivalent.
> Don't confuse "subset" with "subclass". The set of real numbers *is* a
> subset of the set of complex numbers. It is *not* true that either reals
> or complex numbers are a subclass of the other.
I don't think "subclass" has a generally defined meaning in mathematics
(though such an assertion from me is usually a precursor to someone
presenting evidence of my ignorance, so I should know better than to
make them).
obpython: I have always thought that the "key widening" performed in
dictionary lookup is a little quirk of the language:
>>> d = {2: "indeedy"}
>>> d[2.0]
'indeedy'
>>> d[2.0+0j]
'indeedy'
but it does reflect the fact that the integers are a subset of the
reals, which are (as you correctly point out) a subset of the complexes.
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
More information about the Python-list mailing list
|
{"url":"https://mail.python.org/pipermail/python-list/2007-September/460856.html","timestamp":"2014-04-20T01:37:13Z","content_type":null,"content_length":"5206","record_id":"<urn:uuid:13ed6d27-8b03-4844-b38c-945987546971>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Existence of limits!
How would you go about showing that the limit as x approaches 0 of (x + sgn(x)) does not exist? (sgn(x) = x/|x| )
If x< 0, then x+ sgn(x)= x- 1. If x> 0, then x+ sgn(x)= x+1. For x very close to 0 and negative, x+ sgn(x) is very close to -1. For x very close to 0 and positive, x+ sgn(x) is very close to 1.
For any putative limit, a, take $\epsilon$ to be the smaller of |a-1|/2 and |a+1|/2. Depending on which you use, there will be either positive or negative x, arbitrarily close to 0, such that |x + sgn(x) - a| is larger than $\epsilon$.
A slightly less messy approach would be to assume that $\lim_{x\to0}\left\{x+\text{sgn}(x)\right\}$ exists and equals $L$. Then, $L=L-0=\lim_{x\to0}\left\{x+\text{sgn}(x)\right\}-\lim_{x\to 0}x=\
lim_{x\to0}\left\{x+\text{sgn}(x)-x\right\}=\lim_{x\to0}\text{sgn}(x)$. The combination of the limits was justified under the assumption that our limit exists. But the above then implies that $\
lim_{x\to0}\text{sgn}(x)$ exists, which by following HallsOfIvy's example is just ridiculous.
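A shorter route to the same conclusion (not from the original thread) is to exhibit two sequences tending to 0 with different limits: for $x_n = \tfrac{1}{n}$ we have $x_n + \text{sgn}(x_n) = \tfrac{1}{n} + 1 \to 1$, while for $x_n = -\tfrac{1}{n}$ we have $x_n + \text{sgn}(x_n) = -\tfrac{1}{n} - 1 \to -1$; since two sequences approaching 0 produce different limiting values, $\lim_{x\to0}\left\{x+\text{sgn}(x)\right\}$ cannot exist.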
|
{"url":"http://mathhelpforum.com/differential-geometry/126824-existence-limits.html","timestamp":"2014-04-17T15:06:21Z","content_type":null,"content_length":"38570","record_id":"<urn:uuid:4a05740c-8a49-4389-8f50-d105df122f85>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|