Brownian Motion and Characterization of Domains (Open)
Summary: Given a plane domain $\Omega$ bounded by a regular Jordan curve $\Gamma$ and a Lebesgue measurable subset $E$ of $\Gamma$, let $P(E;z)$ denote the probability that a Brownian particle
starting at $z \in \Omega$ hits the boundary $\Gamma$ for the first time at a point in $E$. The problem calls for a characterization of the domains such that for arbitrary $z \in \Omega$ and $0 < C <
|\Gamma|$, the optimization problem $\sup\{P(E;z): |E| = C\}$ is solved by taking $E$ to be an appropriate single arc of $\Gamma$ with measure $C$.
Classification: Primary, differential equations; Secondary, PDE
Note from the Editor: Professor Berrone has kindly pointed out that a response to question (i) can be found in his paper Characterization of domains through families of measures, which was published
in Demonstratio Mathematica, XXXVI, No. 2 (2003), pp. 313--328.
Lucio R. Berrone
Universidad Nacional de Rosario
e-mail: berrone@unrctu.edu.ar
[Numerical] Computing a contour of an integral function
Suppose I want to compute all complex points z such that,
[tex]\int_{z^*}^z F(t) dt = 0[/tex]
Here [tex]z^*[/tex] is a given constant. In general, the points z which satisfy that relation form a continuous curve beginning from the initial point.
What is the best way to tackle this numerically? I'm sure it's a fairly standard numerical problem. Unfortunately, I'm just not sure where to look (in the literature).
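One observation before computing anything: if F is analytic, then G(z) = ∫ from z* to z of F(t) dt is analytic too, so the full complex condition G(z) = 0 generically has only isolated solutions; curve-shaped solution sets usually come from a single real condition such as Im G(z) = 0 (as in Stokes-line computations). Either way, the first ingredient is evaluating G cheaply. A minimal sketch, assuming F is analytic so the integral is path-independent and a straight segment suffices:

```python
import numpy as np

def G(F, zstar, z, n=32):
    """Approximate int_{zstar}^{z} F(t) dt along the straight segment,
    using Gauss-Legendre quadrature (valid if F is analytic, so the
    integral is path-independent)."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = zstar + (z - zstar) * (x + 1) / 2   # map nodes from [-1, 1] to the segment
    return (z - zstar) / 2 * np.sum(w * F(t))

# Sanity check with F(t) = 2t and zstar = 1, where G(z) = z^2 - 1 exactly:
F = lambda t: 2 * t
print(G(F, 1.0, 2.0))   # ~3
print(G(F, 1.0, 1j))    # ~-2
```

From here one can evaluate G on a grid and contour-plot the zero level sets of Re G and Im G (their intersections are the true complex zeros), or trace a single curve like Im G = 0 from z* with a predictor-corrector step.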
Math 142 - Accelerated Calculus II - Spring 2004
ROOM: Lopata Hall 301
TIME: MTTF 9:00-10:00
INSTRUCTOR: Ilya Krishtal
OFFICE: 112A Cupples I
PHONE: (314)935-6785
E-MAIL: krishtal@math.wustl.edu
SEAT ASSIGNMENT (Find room and seat for final exam)
TEXTBOOK: Calculus, Early Transcendentals, 5th edition, by James Stewart
You will need a graphing calculator, like the TI-83. See the math dept
link: Acceptable Calculators
HOMEWORK: I expect that you carefully review in the textbook what has been explained during the lectures. I strongly recommend that you read the material in advance so that you can ask clarifying questions
at the right time. Homework problems will be assigned to you at the end of each lecture. Even-numbered problems should be handed in for grading on the following Monday. Your homework is THE MOST
IMPORTANT PART OF YOUR LEARNING PROCESS. The purpose of the homework problems is to allow you to verify your understanding, and work out some representative examples of what you will be asked to
solve in class. Therefore, it is extremely important that you carefully work out the homework problems if you want to do well in the class. Please note that MANY OF THE EXAM AND QUIZ PROBLEMS
WILL BE either TAKEN DIRECTLY FROM THE HOMEWORK PROBLEMS, or very similar to them.
EXAMINATIONS: There will be between 4 and 8 Thursday quizzes (about 15 min each) after completion of certain parts of the course. Questions will come directly from the homework problems. There will
be THREE midterm exams on: Tuesday, FEBRUARY 10; Wednesday, MARCH 17; Wednesday, APRIL 14. Exams will last for 2 hours, between 6:30 and 8:30 pm. The FINAL EXAM is on Friday, MAY 7 from 10:30 am
to 12:30 pm (THEY WILL NOT BE IN THE REGULAR CLASSROOM!)
These exams will consist of problems to be done in their entirety (no multiple choice), and will be hand graded, with partial credit being given when appropriate. A correct answer with no work shown
might receive no credit , if it is believed that the answer could not have been figured out in your head.
GRADING: You will be allowed to drop the worst 2 homework scores (based on the percentage you got right). The homework average will count 25% towards the final grade. The quizzes will count 20%
towards the final grade. Each midterm exam will count 10% and the final exam will count 25% towards the final grade. The final grade will be assigned according to the following scheme:
A: 89-100, B: 75-88, C: 63-74, D: 51-62, F: 0-50.
For those taking the course Pass/Fail, Pass will be equivalent to a grade D or better. It's possible that, at the end of the semester, the instructor may decide that it's appropriate to use a
slightly more lenient scale or to make other changes benefiting students. It's certain that your grade will not be lower than that predicted by the above table.
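The weighting described above can be written out as a simple formula (a sketch with hypothetical scores; the function name and inputs are my own, all on a 0-100 scale):

```python
def course_grade(homework, quiz_avg, midterms, final_exam):
    """Weighted average as described: 25% homework (worst 2 dropped),
    20% quizzes, 10% per midterm, 25% final."""
    kept = sorted(homework)[2:]          # drop the worst 2 homework scores
    hw_avg = sum(kept) / len(kept)
    return (0.25 * hw_avg + 0.20 * quiz_avg
            + 0.10 * sum(midterms) + 0.25 * final_exam)

# Example: one bad homework gets dropped, so it costs nothing.
grade = course_grade([60] + [90] * 9, 80, [85, 85, 85], 90)
print(grade)   # 86.5, a B on the scale above
```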
MAKE-UP EXAMS: No make-up exams will be given. Excused absences from any of these tests must be obtained from me prior to the examination. Unexcused absence from an exam will result in a score
of zero on that exam.
OFFICE HOURS: My office is in Cupples I, room 112 A. Office hours are Monday, Wednesday and Friday 12:30-13:30. You can certainly talk to me at other times. If you want to reach me or schedule an
appointment, please talk to me at the end of the class or contact me by e-mail at: krishtal@math.wustl.edu. If I'm unavailable for some reason, you can contact your grader. He will try to provide
you with all you need.
• Further Applications of Integration (Sections 8.1--8.3)
• Parametric Equations and Polar Coordinates (Sections 10.1--10.4)
• Infinite Sequences and Series (Sections 11.1--11.10)
• Vectors (Sections 12.2--12.5, 12.7)
• Vector Functions (Sections 13.1--13.3)
• Partial Derivatives (Sections 14.1--14.7)
• Multiple Integrals (Sections 15.1--15.9)
• Vector Fields (Sections 16.1--16.4)
Last modified January 14, 2004
[SOLVED] Inverse Function Problem
September 21st 2005, 05:06 PM #1
f(x) = 4 + 2x + 3e^x
f^(-1)(7) = ?

f(x) = 6x^3 + 3x + 5
g(x) = f^(-1)(x)
g(x) = ?
Any help would be greatly appreciated. Thanks.
For the first, there is no simple formula for the inverse of such a function, so you are looking for a numerical answer. It might help you to plot the function and get an estimate for a possible
answer, then try verifying the answer you've guessed: it's almost a trick question.
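Since f(x) = 4 + 2x + 3e^x is strictly increasing (f'(x) = 2 + 3e^x > 0), f^(-1)(7) is the unique solution of f(x) = 7, and a simple bracketing search finds it (a sketch; as the hint suggests, the answer turns out to be a very simple number):

```python
import math

def f(x):
    return 4 + 2 * x + 3 * math.exp(x)

# f is strictly increasing, so bisection on a bracketing interval works:
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 7:
        lo = mid
    else:
        hi = mid

print(mid)   # ~0: indeed f(0) = 4 + 0 + 3 = 7, so f^(-1)(7) = 0
```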
For the second, there is a general formula for the solution of a cubic, but are you sure that was what you're being asked here?
The European Mathematical Society
Stockholm University announces a position as Professor in Mathematical Logic within the Department of Mathematics. The subject comprises both the logical study of the deductive structure of
mathematics, and the mathematical study of formal logical systems. In addition, it deals with the use of logic in computer science. The main tasks of the new professor are research, teaching,
supervision, together with administrative tasks at the department and the faculty. The position holder is expected to work at developing the relations between the Mathematics Department and
theoretical computer science at NADA and other institutions in the Stockholm area. Since most professors at the Faculty of Science are men, applications from women are particularly welcome. The full
announcement can be found at the University's web site www.su.se under the link "Lediga anställningar".
Onto Functions and Stirling Numbers
Date: 09/22/2002 at 23:51:10
From: Tom
Subject: Onto Functions and Stirling Numbers
Hello Dr. Math,
I don't know if you can help me with this question but how would I
show for m >= 3
s(m, m-2) = (1/24)m(m-1)(m-2)(3m-1)
where s(m,n) are referred to as the Stirling numbers of the first kind.
Is there a formula for the Stirling numbers of the first kind that I
could use?
Thank you.
Date: 09/23/2002 at 13:06:47
From: Doctor Anthony
Subject: Re: Onto Functions and Stirling Numbers
We use S(n,m) to denote Stirling Numbers of the First Kind. It
represents the number of ways to seat n people at m circular tables
with at least one person at each table. The arrangements at any one
table are not distinguished if one can be rotated into another.
The ordering of the tables is NOT taken into account. Briefly,
these numbers count the number of arrangements of n objects into
m non-empty circular permutations.
The generating function for S(n,m) is the coefficient of x^m in the
expansion of x(x-1)(x-2)....(x-n+1)
The numbers satisfy the recurrence relation
S(n+1,r) = S(n,r-1) + n.S(n,r)
We have S(n,1) = (n-1)!
S(n,n) = 1
S(n,n-1) = C(n,2) (select 2 persons for one table)
We get the following table using the recurrence relation:
| 1 2 3 4 5 6 ....
1 | 1
2 | 1 1
n 3 | 2 3 1
4 | 6 11 6 1
5 | 24 50 35 10 1
6 | 120 274 225 85 15 1
And using the generating function, S(5,2)
is coefficient of x^2 in x(x-1)(x-2)...(x-5+1)
= x(x-1)(x-2)(x-3)(x-4)
= x^5 - 10x^4 + 35x^3 - 50x^2 + 24x
and we can see that |coefficient| of x^2 is 50.
The other coefficients also give the required terms in row 5 of the above table.
In how many ways can six people be seated round 3 identical tables if
there is at least one person at each table?
From the above table we find S(6,3) = 225
To get the recurrence relation from first principles, suppose we have
distributed 5 people in S(5,2) or S(5,3) ways. That is, the five people
sit at only 2 of the 3 tables, or at all 3 tables, and we then add a
sixth person.
If only at two tables, then the 6th person MUST go to the empty table.
He can be placed there in only one way, and we get a term S(5,2) in this
case.
If the 5 people are distributed at 3 tables there will be 5 gaps that
the extra person could occupy, and so the contribution in this
situation to the new total is 5 x S(5,3).
It follows that S(6,3) = S(5,2) + 5 x S(5,3)
Check from the table above:
S(5,2) + 5 x S(5,3)
50 + 5 x 35 = 225 = S(6,3)
Now turning to your question, we are asked to show
S(m,m-2) = (1/24)m(m-1)(m-2)(3m-1)
We showed earlier that S(m,m-1) = C(m,2)
That is, we have m people and have to select two to sit together at
one of the tables. It does not matter which table, and with just two
at the table you cannot change the order.
For S(m,m-2) we have m-2 tables and m people. We could have 3 at one
table and 1 at each of the other tables. The 3 persons are chosen in
C(m,3) ways and they can be arranged in 2 ways at the table. This
gives 2 x C(m,3) arrangements. Alternatively we could have 2 at two
tables and 1 at each of the other tables. This could be done in
C(m,4) x C(4,2)/2 ways.
Notice that we divide by 2 because the number of WAYS of dividing 4
into 2 groups of 2 is C(4,2)/2, since when we select 2 that
automatically selects the other 2.
Combining these answers we get

   S(m,m-2) = 2 x C(m,3) + C(m,4) x C(4,2)/2

              2 x m(m-1)(m-2)     m(m-1)(m-2)(m-3) x 3
            = ---------------- + ---------------------
                     3!                    4!

              m(m-1)(m-2)[8 + 3(m-3)]
            = -----------------------
                        24

              m(m-1)(m-2)(3m-1)
            = -----------------
                     24
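The recurrence and the closed form can be cross-checked with a short script (a sketch; S here denotes the unsigned Stirling numbers of the first kind discussed above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, m):
    """Unsigned Stirling numbers of the first kind, via the recurrence
    S(n+1, r) = S(n, r-1) + n*S(n, r)."""
    if n == m:
        return 1
    if m < 1 or m > n:
        return 0
    return S(n - 1, m - 1) + (n - 1) * S(n - 1, m)

print(S(6, 3))   # 225, matching the table

# Closed form: S(m, m-2) = m(m-1)(m-2)(3m-1)/24
for m in range(3, 20):
    assert S(m, m - 2) == m * (m - 1) * (m - 2) * (3 * m - 1) // 24
```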
- Doctor Anthony, The Math Forum
Waves/1D Terminology
From Wikibooks, open books for an open world
s = Displacement (unit = metre, m): Displacement is the position of a particle of the medium relative to its equilibrium position.
T = Period (unit = second, s): Period is the time for one complete cycle or oscillation of the medium.
φ = Phase (unit = degrees or radians): Phase is the position (with respect to time) of the oscillation taking zero disturbance as zero time. As the disturbance goes through a cycle the position in
the cycle is called the phase and, since the mathematical expression for the disturbance usually involves a sine function of the time, the phase is measured in degrees or radians. A full cycle is
360° and a quarter of a cycle is 90°.
A or a = Amplitude (unit = various): Amplitude is the maximum disturbance of the medium from its equilibrium position or state.
f = Frequency (unit = hertz, Hz): Frequency is the number of waves passing any point per second or the number of oscillations of the medium per second.
λ = Wavelength (unit = metre, m): Wavelength is the distance between two points on successive wave fronts that are in phase. A Wave Front is a line joining all points that are in phase. Successive
wave fronts are drawn at the same phase and waves are one wavelength apart. Waves travel perpendicular to the wave front.
I = Intensity (unit = W·m^−2 = kg·s^−3): Intensity is the energy passing through unit area (perpendicular to the area) per second.
v = Speed (unit = m·s^−1): Speed is the distance travelled by a wave front per second.
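These quantities are linked by f = 1/T and v = fλ. A quick numerical check with illustrative values (the numbers below are my own example, roughly sound in air):

```python
T = 0.002            # period in seconds
f = 1 / T            # frequency f = 1/T: 500 Hz
wavelength = 0.686   # metres
v = f * wavelength   # wave speed v = f * lambda
print(f, v)          # 500.0 Hz, ~343 m/s
```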
I would agree with the article with one exception.
If it had the energy to do so, then equal energy would be applied against the shooter and he too would be knocked down.
This is not true - and is a misconception that is often stated. You can't examine only one part of the entire energy equation and conflate Newton's second and third laws of motion with the kinetic
energy of the bullet.
The problem with the statement is that he is using Newton's second and third laws of motion to describe the acceleration of the bullet and the equal and opposite force applied to the gun - but
ignoring the kinetic energy of the bullet after it has been accelerated to its top speed, and the energy contained in the bullet at its impact speed.
Kinetic energy is (simply) calculated as KE= 1/2 x m x v^2. That's why the amount of energy of a bullet is described in foot pounds at impact - this is NOT the same as the amount of energy expended
to accelerate the bullet.
Newton's second and third laws applied to the bullet and the gun do not cancel the KE = 1/2 x m x v^2 equation. Or, the more complex formula often used for bullets of: KE = 1/2 x weight (mass) x v^2
/ 7000 / 32.175.
As an example, a .357 magnum, 158 grain bullet accelerated to 1250 feet-per-second, the above equation gives a calculated 548 ft/pounds of energy at the muzzle - but that is NOT the force applied to
the gun - that is the amount of energy contained in the bullet due to the speed generated over the time it was in the barrel by the force applied to it.
Whether you are knocked over by the guns' recoil generated during acceleration of the bullet has no bearing on the amount of energy generated by the bullet when it impacts another object.
The bullet weight is very small (relatively) and is at rest when the gun is fired. The lightweight bullet is accelerated over time (the time it is in the barrel) and an equal force is generated in
the opposite direction against an object (the gun) over the same time. Because of the gun's mass, it is accelerated much less than the bullet.
In comparison, a 158 grain bullet weighs 0.0226 pounds. Given a .357 magnum pistol that weighs 2.75 pounds, the gun weighs 121.7 times more than the bullet. Or, the bullet is 0.82% of the weight
of the gun - however you want to describe the difference.
Looking at it in a different way - with a .357 magnum a 158 grain bullet is accelerated to 1250 feet- per second by the forces applied to it. The gun it is fired out of weighs 2.75 lbs and is
accelerated to 14.3 feet-per-second
generating 8.7 ft/lbs of recoil. That's why you are not knocked over by the recoil - but that has NOTHING to do with the energy contained in the bullet at impact.
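The figures quoted above follow directly from the stated formula (a sketch; 7000 converts grains to pounds and dividing by 32.175 ft/s² converts pounds to slugs):

```python
def ke_ft_lb(weight_grains, velocity_fps):
    """Kinetic energy in foot-pounds: KE = 1/2 * m * v^2, with mass in
    slugs (grains / 7000 / 32.175)."""
    return 0.5 * (weight_grains / 7000 / 32.175) * velocity_fps ** 2

bullet_ke = ke_ft_lb(158, 1250)               # ~548 ft-lb at the muzzle
recoil_ke = 0.5 * (2.75 / 32.175) * 14.3**2   # ~8.7 ft-lb in a 2.75 lb gun
print(round(bullet_ke), round(recoil_ke, 1))  # 548 8.7
```

The two energies differ by a factor of ~60 even though the momenta are comparable, which is the whole point: recoil energy and bullet energy are different quantities.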
[FOM] Theorem schemes
John Baldwin jbaldwin at uic.edu
Wed Jun 21 17:36:14 EDT 2006
On Wed, 21 Jun 2006 joeshipman at aol.com wrote:
> I am looking for examples of well-known mathematical results of the
> following form:
> (*) Whenever the sentences Phi_1, Phi_2, Phi_3,... are all true (in a
> structure satisfying certain other properties), the sentences Psi_1,
> Psi_2, Psi_3, ... are all true (in that same structure).
> An example would be my recent result improving the fundamental theorem
> of algebra, where the "certain other properties" are the axioms for
> fields, Phi_i is the statement "every polynomial whose degree is the
> ith prime number has a root" and Psi_j is the statement "every
> polynomial of degree j has a root".
If I understand correctly, another example is the Ax-Grothendieck theorem,
usually phrased as: if a polynomial map from C^n to C^n is one-to-one
then it is onto.
In the suggested formalization
The phi_i are the axioms for an algebraically closed field of
characteristic zero
and the psi_i are sentences asserting the proposition for polynomials in n
variables and degree d (where i codes n and d).
See page 2 of Marker model theory of fields LN in Logic 5
or page 21-22 of Cherlin LNM 521 --- both published by Springer.
The proof (due to Ax) is a nice combination of compactness and the fact
that "for all ... there exist" (AE) sentences are preserved by unions of
chains.
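The finite ingredient of Ax's argument (my illustration, not from the post) is that over a finite field, an injective map of the field into itself is automatically surjective, by pigeonhole. A tiny demo over GF(7), where x -> x^5 is injective since gcd(5, 6) = 1:

```python
p = 7
f = lambda x: pow(x, 5, p)            # a polynomial map GF(7) -> GF(7)

image = {f(x) for x in range(p)}
injective = len(image) == p           # all p inputs give distinct outputs
surjective = image == set(range(p))
print(injective, surjective)          # True True: injective => surjective
```

Ax's proof transfers this pigeonhole fact from finite fields to C via compactness.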
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-June/010626.html","timestamp":"2014-04-16T07:21:29Z","content_type":null,"content_length":"3825","record_id":"<urn:uuid:f1924c30-947e-4f17-8723-749f9c6e5c69>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Critiquing the Doomsday Argument
by Robin Hanson 27Aug98
A thought-provoking argument suggests we should expect the extinction of intelligent life on Earth soon. In the end, however, the argument is unpersuasive.
Brandon Carter, H.B. Nielsen [12], John Leslie [8], J. Richard Gott [5], Nick Bostrom [1], and others have elaborated a creative argument suggesting that "doom" is more likely than we otherwise
The basic idea is that, all else equal, you should not assume that you are especially unusual on any one axis. If you have just taken up a new fashion, you shouldn't assume you are one of the first;
that fashion is just as likely to be on the way out as it is to be on the way in. If you think up an idea that a dozen people are likely to think of as well, you shouldn't assume you are the first.
A similar intuition is applied to the case of finding yourself in an exponentially growing population that will suddenly end someday. Since most of the members will appear just before the end, you
should infer that that end probably isn't more than a few doubling times away from now. If you get involved in a pyramid investment scheme, it will likely collapse soon and leave you at a loss. And
since humanity has been growing exponentially or faster for millennia, humanity may well end before it can double a few more times. Hence the name, "doomsday argument," (hereafter known as DA).
Of course the end of humanity need not imply doom; perhaps our descendants will just be so different that they are no longer "human." DA proponents often argue, however, that since a valid "reference
class" is something like all intelligent creatures, we should conclude that any descendants aren't likely to be intelligent.
Do we face doom soon? Most people's initial reaction is that the argument has to be wrong somehow. Yet most of the knee-jerk counter-arguments they offer are easily rebutted. There have, however,
been some thoughtful publications on this topic, mostly critical [2,3,4,6,7,11], and mostly focusing on the same criticism:
All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live.
You should include everything you know when doing inference, and you usually know things that imply you are not random. If you edit a fashion magazine, you have reason to think you will hear of
fashions on their way in. If you never even read fashion magazines, you have reason to think you will hear of fashions on their way out. Similarly, standard calculations about doom suggest we are
earlier than random humans.
For example, assume that human population grows exponentially, that doom happens when the population reaches 10^d, and that our "prior" (before information) expectation about d is that it is
uniformly distributed between 0 and 20. In this case if we interpret our information that today's population is about 10^10 as just telling us that d is at least ten, our "posterior" (after
information) expectations should be that d is uniformly distributed between 10 and 20. This implies a median future growth factor of 10^5 before doom, which is far from "doom soon."
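This toy calculation is easy to replicate by simulation (a sketch of the model just described):

```python
import random

random.seed(0)

# Prior: d ~ Uniform(0, 20). Learning "the population is now 10^10" is
# interpreted as just the constraint d >= 10.
prior = [random.uniform(0, 20) for _ in range(200_000)]
posterior = sorted(d for d in prior if d >= 10)
median_d = posterior[len(posterior) // 2]
print(round(median_d, 1))   # ~15: a median growth factor of 10^5 before doom
```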
But for an exponentially growing population which ends suddenly, don't most members show up within a few doubling times of the end? Yes, but most members also show up in scenarios where the
population has nearly reached its maximum possible size. In the example above, most people show up when the population has reached over 10^19. If you show up before that point, you already know you
are special. Anyone who shows up when the population is well below the largest size that one's prior gives substantial weight to knows they must be unusually early. This is why people who have done
these sort of calculations haven't been terribly worried about doom soon.
In response to such standard analyses, DA advocates invite us to imagine that we had amnesia, and have forgotten our place in history. If we then learned that the population now were small, that
must increase our posterior beliefs in an early doom. This follows directly from the fact that had we learned instead that the current population was very large, we could have excluded early doom
scenarios. So if the above prior on d, uniform within 0 to 20, was reasonable for someone in the amnesia situation, then upon learning our place in history, that the population is now 10^10, we must
expect doom relatively soon.
A defender of the standard analysis can, however, reply that knowing that you are alive with amnesia tells you that you are in an unusual and informative situation; it is far from being "prior" to
information. You may be more likely to develop amnesia in response to a traumatic situation, for example. And even if everyone had the same random chance of developing amnesia, the mere fact that you
exist suggests a larger population. After all, if doom had happened before you were born, you wouldn't be around to consider these questions. So the mere fact that you exist would seem to tell you a
For example, imagine that you found yourself hung over in a hotel room, and couldn't remember who you were, other than that you are a musician on tour. You wonder: am I a one-hit-wonder, or do I have
a lasting music career? If on average only one quarter of musicians have a lasting career, but musicians with lasting careers spend three times as much time in hotel rooms on tour, then you should
estimate a fifty-fifty chance you have a lasting career. This is because only half of the total musician hotel room-days are filled with one-hit-wonders. Amensia implies optimism.
Similarly, if you can't recall how old you are, you should expect to be older than the average person. Why? Because older people have more hotel-days in their lifetime. Similarly, if you can't recall
who you are in humanity, you should become more optimistic about humanity's chances. In the standard approach, priors are defined without considering who, if anyone, will live. In this case, learning
that you are alive with amnesia must make you expect both that doom is near, and that doom will happen very late. In our example, beliefs change from being uniform on d to being exponential in d,
with most weight near the upper limit of 20. If you then learned, to your great surprise, that the population is now only 10^10, your beliefs regarding d would then give much more weight to lower
values of d; it is true that you would expect an earlier doom. But where you would end up is with d being uniformly distributed across 10 to 20, just as if you had never had amnesia.
DA advocates seem to be saying, in contrast, that the amnesia situation is the natural one to be defining "priors" for, with counterfactual situations where you might not have existed being
irrelevant distractions. So the question seems to come down to whether, when defining a prior, you should assume that you would have existed at some point in human history, with amnesia about your
particular situation. And if one chooses amnesia, there seems the further question of which type of amnesia to assume. Knowing that you lived in a city on Earth, for example, is very different from
not being able to exclude either living in space as a computer mind, or as a hunter-gatherer on the savanna.
DA advocates say many other things in defense of their position. But I find it hard to make sense of many of their qualitative arguments, and I find it frustrating that these arguments have
rarely been fully formalized using our standard formal approach to modeling inference (at least when everything is finite). This approach is:
1. Choose a space of possible states.
2. Assign a prior probability distribution over states.
3. For each agent in each state, say what other states they can exclude based on their observations. The set of states an agent cannot exclude is their information set (and the set of such sets
forms a state partition).
4. Posterior probabilities for each agent are just priors renormalized to their information sets.
Admittedly, DA advocates have written down various conditional probability expressions. But they do not seem to have described the state space they have in mind in enough detail for me to judge
whether I think their priors reasonable. Reading between the lines, however, DA advocates do seem to be claiming that our standard approach to defining state spaces and/or priors is biased against
doomsday, and so we should revise the practice of decision theory in an important way.
In the above example we chose the state description to be just d, the population exponent at doom. Instead, DA advocates seem to argue, you should also include a description of which human you turned
out to be. If you assume you were equally likely to be any one of the actual humans there will actually ever be, this increases the chances of doom soon (though not by a lot in many cases [6,7]).
To explore this, let us consider an even simpler example. Assume there are four spatial positions, a,b,c,d, five points in times, 1,2,3,4,5, and that each space/time combination can hold one of
these: D(ead rock), M(onkey), H(uman), P(osthuman). Each "universe" then describes which of D,M,H,P occupy each space/time slot. Here are four universes: *,&,#,@.
* | 1 2 3 4 5 & | 1 2 3 4 5 # | 1 2 3 4 5 @ | 1 2 3 4 5
------------- ------------- ------------- -------------
a | D D D D D a | D D D D D a | D D D D D a | D D D D P
b | D D D D D b | D D D D D b | D D D D D b | D D D D P
c | D M M M M c | D M M M M c | D M M H D c | D M M H P
d | D M M M M d | D M H M M d | D M H H D d | D M H H P
These are universes where humans never evolve (*), evolve but quickly die (&), displace the biosphere and then die (#), or evolve into spacefaring posthumans before dying (@).
In our example, the usual state description would just specify which was the true universe among *,&,#,@, and a uniform prior would give equal probability to these four possibilities. If an agent at
d3 could only observe that d3 is occupied by a H, and not a M, she could exclude only the universe *. That makes her information set {&,#,@}, and her full partition {{*},{&,#,@}}. Assuming equal prior
probabilities for all four universes, she'd then assign only a 1/3 probability to "doom soon," i.e., being in universe &.
Instead of this usual approach, DA advocates seem to suggest that you should extend your state description to include a description of which member of the "reference class" you turned out to be.
Assuming the reference class is humans and posthumans, and using space-time coordinates to denote members, then the universe & has one associated state &d3, the universe # has three associated states
{#d3,#d4,#c4}, and the universe @ has seven associated states {@d3,@d4,@c4,@d5,@c5,@b5,@a5}.
The prior that DA advocates seem to prefer is to divide up the prior one might assign to a universe among its associated states. So P(&d3) = 1/4, but P(#d3) = 1/(4*3) and P(@d3) = 1/(4*7). If my
current information were "I'm a H at time 3," that would mean that the true state is one of {&d3,#d3,@d3}, and my posterior probability of "doom soon" (&) is 1/(1 + 1/3 + 1/7) = 21/31, or about 68%. Thus it seems
that DA advocates can formalize their intuition that our standard analyses tend to misstate the state space and/or prior, and that analysis using an extended state space and matching prior can make
doom more likely.
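The posterior under this DA-style prior can be computed exactly (a sketch; the member counts are read off the grids above for the "H or P" reference class):

```python
from fractions import Fraction

# Members of the "H or P" reference class in each surviving universe:
members = {"&": 1, "#": 3, "@": 7}

# DA-style prior: 1/4 per universe, split evenly among its members; the
# evidence "I'm a H at time 3" leaves exactly one state per universe.
weight = {u: Fraction(1, 4) / n for u, n in members.items()}
p_doom_soon = weight["&"] / sum(weight.values())
print(p_doom_soon, float(p_doom_soon))   # 21/31, about 0.68
```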
Of course we always knew you could make doom more likely if we chose different priors. But how reasonable are these priors? I see these problems with this approach:
1. It is not clear how many states to associate with universes, such as *, which have no members. Yet we need to know this to do anthropic reasoning about what our existence tells us about our universe.
2. This DA prior doesn't suggest doom if there are many aliens who count as in the "reference class," and who are insensitive to our doom. If we could have been aliens instead of human, then the
fact that we are human suggests that humans are relatively numerous. (Bostrom discusses this at length [1].)
3. There seems to be no satisfactory principle for choosing the reference class of "creatures like us," even though this choice can make a big difference. Changing the reference class in our example
from "H or P" to "just H" lowers the chance of doom soon (i.e., universe &) given "I'm a H at 3" from about 68% to 60%, and changing the class to "M or H or P" lowers it to 32%.
4. People who want to think of themselves as "educated" tend to define this as everyone with their level of education or higher. Then they can proudly compare the favorable average properties of
their "educated" class relative to the "uneducated" class. A similar potential for bias arises when humans define "creatures like us" as creatures almost as intelligent as we are or better.
5. It seems hard to rationalize this state space and prior outside a religious image where souls wait for God to choose their bodies.
This last objection may sound trite, but I think it may be the key. The universe doesn't know or care whether we are intelligent or conscious, and I think we risk a hopeless conceptual muddle if we
try to describe the state of the universe directly in terms of abstract features humans now care about. If we are going to extend our state descriptions to say where we sit in the universe (and it's
not clear to me that we should) it seems best to construct a state space based on the relevant physical states involved, to use priors based on natural physical distributions over such states, and
only then to notice features of interest to humans.
Toward this end, let us think of a human as a certain set of atoms arranged at a certain time in a certain way. Those atoms could be arranged to make a human, or a monkey, or a rock. In this sense,
"I could have been a rock." And let us express the DA idea that "I" could have been you by saying that "I" could have been "at" your atoms instead of mine, arranged the way you are.
Formally, let us extend the state description in our simple example to include not only which of *,&,#,@ is the true universe, but also which space-time slot I turn out to occupy, allowing any valid
space-time slot as possible. So, for example, I hope the state is @c4, where I am human in a universe without doom soon, and I'm glad it is not #d5, where "I" would be a rock.
If we choose a uniform prior over all 80 states so defined, then if my current information were "I'm a H at time 3," that would mean that the state is one of {&d3,#d3,@d3}, and my posterior would
assign equal probability to universes &,#,@. Since this is only a 1/3 chance of "doom soon," we here get back the result we had at first, using universes as states. Thus choosing states and priors in
a more physics-oriented way seems to eliminate the doom-enhancing effects of extending the state space to allow us to imagine that I might have been you.
This alternative approach to extending states does have some problems. It seems to suggest that a nonzero prior probability of a universe with an infinity of humans implies probability one that we
find ourselves in an infinite universe. And it seems difficult to use it when universes have varying numbers of space-time slots. If these difficulties cannot be overcome, however, I would rather go
back to the standard approach to defining states.
It is interesting that doomsday argument proponents seem to challenge our usual way of doing inference, by preferring an extended state space where we explicitly model the idea that "I could have
been you." However, if we try to do this in a physics-oriented way, avoiding describing states directly in abstract features of interest to humans but not the universe, we seem to get the same
chance of doom as if we hadn't extended states at all. Humanity may in fact face doom soon, and we have many reasons to be concerned about this. But I do not think the doomsday argument is one of them.
[1] Nick Bostrom, "Investigations into the Doomsday argument" Technical Report, www.hedweb.com/nickb/doomsday/investigations.doc 1998.
[2] D. Dieks, "Doomsday - Or: the Dangers of Statistics" Philosophical Quarterly 42:78-84, 1992.
[3] William Eckhardt, "A Shooting Room-View of Doomsday" Journal of Philosophy 59:244-259, 1997.
[4] William Eckhardt, "Probability Theory and the Doomsday Argument" Mind 102(407):483-488, 1993.
[5] J. Richard Gott, "Implications of the Copernican principle for our future prospects", Nature, 363:315-319, 27 May, 1993.
[6] Tomas Kopf, Pavel Krtous, Don Page, "Too Soon For Doom Gloom?" Technical Report, xxx.lanl.gov/abs/gr-qc/9407002, 1994.
[7] Kevin Korb, Jonathan Oliver, "A refutation of the doomsday argument" Mind 107(426):403-410, April 1998.
[8] John Leslie, The End of the World: The Science and Ethics of Human Extinction, Routledge, London, 1996.
[9] John Leslie, "Doom and Probabilities" Mind 102(407):489-491, 1993.
[10] John Leslie, "Doomsday Revisited" Philosophical Quarterly 42:85-89, 1992.
[11] Jonathan Oliver, Kevin Korb, "A Bayesian Analysis of the Doomsday Argument", Technical Report, www.cs.monash.edu.au/~jono/TechReports/analysis2.ps Jan. 1998.
[12] H.B. Nielsen, "Random Dynamics and Relations between the Number of Fermion Generations and the Fine Structure Constants", Acta Physica Polonica, B 20(5):427-68, 1989. | {"url":"http://hanson.gmu.edu/nodoom.html","timestamp":"2014-04-17T03:48:17Z","content_type":null,"content_length":"19350","record_id":"<urn:uuid:4d703fa4-5e1b-4c09-94dd-2a96c2f06ad7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expresso is a Simple Text Editor with Calculation Capabilities
January 30, 2013 in Software
[prMac.com] Moscow, Russian Federation - Deep IT Pro company introduces a new product, Expresso 1.0 for Mac OS X. Expresso is a simple and useful text editor with calculation capabilities for Mac OS
X. Expresso allows you to calculate math expressions inside Rich Text documents, assign results to other variables and use those results anywhere in your document or within new expressions, annotate
calculations with text, place expressions anywhere in text, organize calculations inside tables like in spreadsheets.
Expresso allows you to:
* Calculate math expressions inside Rich Text documents
* Assign results to other variables and use those results anywhere in your document or within new expressions
* Annotate calculations with text
* Place expressions anywhere in text
* Organize calculations inside tables like in spreadsheets
Expresso has common math operators and functions available (+, -, *, /, %, !, **), in addition to the following functions: sum(), count(), min(), max(), median(), stddev(), average(), random(),
sqrt(), log(), exp(), ceil(), floor(), sin(), cos(), tan() and others! Please see the help page for more details.
Expresso's Simple Features & Rules:
* All dependent expressions are calculated automatically
* Press Enter in any place of an expression or just after the last square bracket to calculate a value
* Use square brackets [ ] for expressions. For example: [2+2]
* Use variables in your expressions. For example: [10*x]
* Input the name of a variable before the expression followed by ":". For example: [x:2+2], variable x will have the value 4
* Move the cursor or click on an expression to edit
* Calculations can be saved in RTF format
You can associate a value with a variable such as [x:10*5] and then use the variable in another expression such as [x/52]. When you calculate the variable "x", all dependent expressions are calculated automatically.
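Expresso's implementation isn't public, but the rules above are easy to mimic. Here is a toy sketch of the [name:expr] convention and the left-to-right dependent evaluation; everything in it, including the use of Python's eval, is my illustration rather than how Expresso actually works:

```python
import math
import re

# Matches [expr] or [name:expr] tokens in running text.
TOKEN = re.compile(r"\[(?:(\w+):)?([^\[\]]+)\]")

def calculate(text):
    """Evaluate bracketed expressions left to right; assignments feed later expressions."""
    env = {"__builtins__": {}, "sqrt": math.sqrt, "floor": math.floor, "ceil": math.ceil}
    results = []
    for name, expr in TOKEN.findall(text):
        value = eval(expr, env)   # toy evaluator; acceptable for a sketch, unsafe in general
        if name:
            env[name] = value     # later expressions can reference this variable
        results.append(value)
    return results

print(calculate("Budget: [x:10*5] per week, so [x/5] per workday."))
```

Running this on the sample text yields 50 and 10.0: the assignment in the first bracket feeds the expression in the second, mirroring the dependent-recalculation rule.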
System Requirements:
* Requires Mac OS X 10.7 or later
* 1.8 MB
Pricing and Availability:
Expresso 1.0 is $4.99 USD (or equivalent amount in other currencies) and available worldwide exclusively through the Mac App Store in the Productivity category.
Deep IT is an independent software company focusing on high quality applications for the World Wide Web, Apple's Mac OS X and iOS. We provide deep analysis and the best IT solutions. We change
reality!
Igor Belyaletdinov
Russian Federation | {"url":"http://prmac.com/release-id-53993.htm","timestamp":"2014-04-18T15:41:22Z","content_type":null,"content_length":"42431","record_id":"<urn:uuid:ec7d7271-fc91-4723-8014-4186ffe5e490>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Re: coefficient explanation
From Joseph Coveney <stajc2@gmail.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: Re: coefficient explanation
Date Sat, 15 Jun 2013 17:10:11 +0900
Kayla Bridge wrote:
The model I am working with now is:
y=beta1*x1+beta2*x2+u (here, beta1 is significant)
However, I realize the correlation between y and x1 is due to some other factor
which is not present in the model. Therefore, I add this critical variable that
can best proxy for this factor, x3, in the model. Now the model is
y=beta1*x1+beta2*x2+beta3*x3+u. In this case, beta1 should weaken when x3 is
present. But my question is: should beta1 have a smaller magnitude than before but
still be significant, or should beta1 be insignificant when x3 is added? If beta1 is
still significant but with a smaller value when x3 is added, can I say x3 is a
critical variable which was ignored before, or that the correlation between y and x1 is
spurious? Any suggestion is appreciated.
It sounds like you're analyzing data from an observational study. Maarten Buis
has posted on this list before on what can happen to the magnitudes and signs of
regression coefficients when additional variables are added to a regression
model of an observational study. You might want to search the archives for some
of his posts.
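The coefficient shift Buis describes is easy to reproduce in a toy simulation. The sketch below is pure Python rather than Stata, and its data-generating process (y driven by x3, with x1 a noisy proxy for x3) is invented for illustration:

```python
import random

def ols(X, y):
    """Least-squares coefficients via the normal equations (fine for tiny k)."""
    n, k = len(y), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for c in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv], b[c], b[piv] = A[piv], A[c], b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][q] - f * A[c][q] for q in range(k)]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][q] * beta[q] for q in range(r + 1, k))) / A[r][r]
    return beta

random.seed(42)
n = 4000
x3 = [random.gauss(0, 1) for _ in range(n)]                  # the real cause of y
x1 = [0.8 * v + random.gauss(0, 0.6) for v in x3]            # x1 merely proxies x3
x2 = [random.gauss(0, 1) for _ in range(n)]
y  = [2.0 * x3[i] + 0.5 * x2[i] + random.gauss(0, 1) for i in range(n)]

short = ols([[1.0, x1[i], x2[i]] for i in range(n)], y)          # x3 omitted
full  = ols([[1.0, x1[i], x2[i], x3[i]] for i in range(n)], y)   # x3 included

print(round(short[1], 2), round(full[1], 2))  # beta1 is large when x3 is omitted, near zero otherwise
```

The short regression attributes x3's effect to the proxy x1; once x3 enters the model, x1's coefficient collapses toward zero, which is the pattern the original poster anticipates.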
You seem to suggest that your subject matter knowledge tells you that the
apparent association between y and x1 is illusory, that in reality it only
reflects the action of some other factor on both. If so, then is there a good
reason to include x1 in the model at all, especially if you have in hand a
halfway-decent measure of this other factor, namely, x3? If your subject matter
knowledge allows, you might consider modeling the relationships between y, x1
and x3 (and x2) by means of path analysis or even a structural equation model if
your dataset has enough indicator variables to assure model identification.
(Type "help sem" in Stata's command window for more information.)
I assume that your model actually does have an intercept, that its omission in
your post is inadvertent.
Joseph Coveney
Time, speed and distance travelled
Ranch Hand
Joined: Aug 19, 2004
Posts: 376

I am helping my younger brother prep for aptitude tests. I have below a question for which I was able to get at the answer, but I am not sure if my approach is the fastest or the best. All he has is about 50 secs to work it out, but it took me more than that to get at the accurate answer.
A person who decided to go on a weekend trip should not exceed 8 hrs of travel time.
The average speed of the forward journey is 40mph.
Traffic causes the average speed of the return journey to drop to 30mph.
How far away can he select a picnic spot?
Please let me know what is the answer and what method you used to arrive at it. Thanks.
Ranch Hand
On my mark, get set, GO!
Joined: Feb 18, 2005
Posts: 989

Ok, it took me 48 seconds to come up with this (including the long division and transferring the final decimal answer to the bottom of the page):

X is the distance from home to the picnic area.
X/40 = travel time going out.
X/30 = travel time returning.
X/40 + X/30 <= 8
3X/120 + 4X/120 <= 8
7X/120 <= 8
7X <= 960
X <= 137 1/7 miles or 137.(142857) miles, however you want it expressed.
All units are in miles, hours, or miles per hour, whichever is appropriate.
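The arithmetic is quick to sanity-check in a few lines of Python:

```python
x = 960 / 7                       # the claimed maximum distance in miles
out_hours = x / 40                # forward leg at 40 mph
back_hours = x / 30               # return leg at 30 mph

print(x, out_hours + back_hours)  # ~137.14 miles uses the full 8-hour budget
```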
Ranch Hand
Joined: Aug 19, 2004
Posts: 376

Perfect! Thanks
Ranch Hand
Joined: Mar 15, 2005
Posts: 41

Hi Kayal Cox,
Are you from Kayal in south India?
Ranch Hand
Joined: Apr 14, 2005
Posts: 54

Sabeer, this is the Programming Diversions forum, not a place for asking about the geographical region of a poster. You can PM her about the latitude/longitude of her location.
Mount Prospect Algebra Tutor
...I currently tutor two students in Prealgebra and like to instill confidence in them so they know that math is something you can be good at if you do it enough. When I took the Basic Skills test
for my teaching certificate I received a perfect score on the math section. Currently I am taking a L...
17 Subjects: including algebra 1, Spanish, English, reading
...I use Microsoft Word and Publishing and have used PrintShop Deluxe in the past. I am a certified teacher with experience teaching Social Studies. Most of my experience is in 7th grade, but this
past September through December I was teaching Social Studies to grades K-8, quite a task.
40 Subjects: including algebra 2, algebra 1, reading, English
...If you are interested in taking home tutoring classes for your kids, and improving their grades, do not hesitate to contact me. Qualification: Masters in Computer Applications. My approach: I
assess the child's learning ability in the first class and then prepare an individual lesson plan. I break down math problems for the child, to make him/her understand in an easy way.
8 Subjects: including algebra 1, algebra 2, geometry, ACT Math
...Therefore, my degree naturally incorporates a wide range of U.S. History topics. I'm also something of a history buff in my personal life.
28 Subjects: including algebra 1, English, reading, physics
After graduating from DePauw University, I tutored high school students in writing. I personalized each lesson according to the needs of the student as well as their interests. For example, I had
a male student who was very interested in football.
15 Subjects: including algebra 1, reading, English, chemistry | {"url":"http://www.purplemath.com/mount_prospect_algebra_tutors.php","timestamp":"2014-04-18T08:55:12Z","content_type":null,"content_length":"24015","record_id":"<urn:uuid:dfb3dbd1-32a9-4d02-af31-ab08e4ae6973>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 125- STRUCTURE AND CONCEPTS OF ELEMENTARY MATHEMATICS I - Cuyamaca College
In blending the mathematical topics of sets, whole numbers, numeration, number theory, integers, rational and irrational numbers, measurement, relations, functions and logic, the course will
investigate the interrelationships of these topics using a problem-solving approach and appropriate use of technology.
AA/AS GE, CSU, CSU GE, IGETC, UC credit limit
Prerequisite: MATH 103 or 110, 097 or equivalent with a grade of “C” or better or “Pass”
Corequisite: None
Recommended Preparation: None
3 UNITS: 3 hours lecture, 1 hour laboratory | {"url":"http://www.cuyamaca.edu/courses/math/math-125.html","timestamp":"2014-04-19T02:09:52Z","content_type":null,"content_length":"1738","record_id":"<urn:uuid:425368a9-0f52-4757-bc1a-c493d990e105>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstracts of Papers
Abstracts of Papers
No. 1 (Special Issue on Diagrammatic Representation and Reasoning), No. 2, No. 3, No. 4.
Reprints of the papers may be obtained from their authors; contact Editorial Office in case you need the address of the respective author.
Editorial Office, MGV
Institute of Computer Science
ul. Ordona 21
01-237 Warszawa, Poland
Special Issue on Diagrammatic Representation and Reasoning.
Special Issue Editor: Zenon Kulpa.
Excerpt from the Introduction to the Special Issue:
The problem of finding appropriate representations of various types of knowledge is a subject of continued research in artificial intelligence and related fields since the beginning of these
disciplines. Much of the progress in science involved finding new representations of various phenomena or formal constructs: from diagrams of Euclid, through calculus notation of Newton and
Leibnitz, to Feynman quantum particle diagrams, and many others. An appropriate way of representing knowledge about some phenomenon, problem, or formal system allows for effective description of
the domain knowledge, and facilitates reasoning and problem solving in the domain. As was stated by Simon: "...solving a problem simply means representing it so as to make the solution
One of the types of knowledge representation, used by people since unknown times, but only recently becoming an object of focused research (and computer implementation of the results) is the
representation involving visual, graphical and pictorial means (diagrams).
In recognition of growing interest and volume of works in this emerging field of diagrammatic representation and reasoning as well as its close ties with the fields of computer processing,
understanding and generation of images and pictures, this Special Issue of the Machine GRAPHICS & VISION Journal has been compiled. It contains seven papers which cover a number of fundamental
concepts and directions of research in the field.
Kulpa Z.:
Diagrammatic representation for a space of intervals.
MGV vol. 6, no. 1, 1997, pp. 5-24.
The paper presents a two-dimensional graphical representation for the space of intervals and interval relations. It consists of the representation of the space of all intervals (the IS-diagram),
and two diagrammatic representations of the space of relations between intervals: the W-diagram (based on the IS-diagram), and lattice diagram (for the neighbourhood lattice of the set of basic
relations). Also, a set of new graphical symbols for the interval relations is proposed. Examples of application of the proposed representations to represent knowledge about interval algebra and
certain interval problems are given, to illustrate the possibilities and advantages of the proposed representation.
Key words: diagrams, diagrammatic representation, diagrammatic reasoning, intervals, interval relations, interval algebra, knowledge representation, qualitative physics.
Barker-Plummer D., Bailin S.C.:
The role of diagrams in mathematical proofs.
MGV vol. 6, no. 1, 1997, pp. 25-56.
This paper describes our research into the way in which diagrams convey mathematical meaning. Through the development of an automated reasoning system, called GROVER, we have tried to discover
how a diagram can convey the meaning of a proof. GROVER is a theorem proving system that interprets diagrams as proof strategies. The diagrams are similar to those that a mathematician would draw
informally when communicating the ideas of a proof. We have applied GROVER to obtain automatic proofs of three theorems that are beyond the reach of existing theorem proving systems operating
without such guidance. In the process, we have discovered some patterns in the way diagrams are used to convey mathematical reasoning strategies. Those patterns, and the ways in which GROVER
takes advantage of them to prove theorems, are the focus of this paper.
Key words: mathematical diagrams, reasoning strategies, visualization, proof, automated reasoning.
Anderson M., McCartney R.:
Learning from diagrams.
MGV vol. 6, no. 1, 1997, pp. 57-76.
The theory of inter-diagrammatic reasoning provides a syntax for a general diagram and set of operators that can be used to reason with collections of related diagrams. We show the theory's
utility as a means of learning from diagrammatic representations by presenting two different methods of machine learning that learn from diagrams in two distinct diagrammatic domains.
Olivier P.:
Hierarchy and attention in computational imagery.
MGV vol. 6, no. 1, 1997, pp. 77-88.
The most recent characterisation of visual mental imagery is in terms of a complex set of interdependent subsystems rooted in a theory of visual perception. We concentrate our attention on a
particular component of the architecture for visual mental imagery, the visual buffer, and consider its role in the performance of imagistic reasoning. After a brief description of the relevant
properties of the visual buffer we critically review previous computational models of imagery before going on to describe our own model of the visual buffer and its application to reasoning about
the kinematic behaviour of higher pairs.
Key words: visual mental imagery, computational imagery, diagrammatic reasoning, kinematics.
Lemon O., Pratt I.:
Spatial logic and the complexity of diagrammatic reasoning.
MGV vol. 6, no. 1, 1997, pp. 89-108.
Researchers have sought to explain the observed "efficacy" of diagrammatic reasoning (DR) via the notions of "limited abstraction" and inexpressivity. We argue that application of the concepts of
computational complexity to systems of diagrammatic representation is necessary for the evaluation of precise claims about their efficacy. We show here how to give such an analysis. Centrally,
we claim that recent formal analyses of diagrammatic representations fail to account for the ways in which they employ spatial relations in their representational work. This focus raises some
problems for the expressive power of graphical systems, related to the topological and geometrical constraints of the medium. A further idea is that some diagrammatic reasoning may be analysed as
a variety of topological inference. In particular, we show how reasoning in some diagrammatic systems is of polynomial complexity, while reasoning in others is NP hard. A simple case study is
carried out, which pinpoints the inexpressiveness and complexity of versions of Euler's Circles. We also consider the expressivity and complexity of cases where a limited number of regions is
used, as well as Venn diagrams, "GIS-like" representations, and some three dimensional cases.
Key words: spatial logic, diagrammatic reasoning, complexity, topological inference, Euler's Circles, Venn diagrams, GIS.
Oliver M., O'Shea T., Fung P.:
Three representations of modal logic: a comparative experiment.
MGV vol. 6, no. 1, 1997, pp. 109-128.
An empirical study was carried out to assess the comparative benefits for learners of three representations of modal logic. The three representational styles used were syntactic, diagrammatic,
and a learning environment representation, which combined diagrams with block-world examples. Results showed that the syntactic and learning environment conditions were significantly more
effective for learners than the diagrammatic condition. The learning environment condition also had significant motivational benefits, and provided a meaningful grounding for the concepts being
learnt. An explanation based on Green's cognitive dimensions is proposed for the poor performance of learners using the diagrammatic condition. This experiment demonstrates that not all graphical
representations are of benefit to learners, and that in addition to choosing a representation which maps efficiently onto the domain, motivational and cognitive factors must be considered. A
learning environment style of representation, combining law encoding diagrams with concrete examples, is proposed to be an effective way of teaching topics such as modal logic.
Due to editor's error, the first line of the Introduction to this paper has been omitted in the printed version.
The first paragraph of the paper should read:
"This paper reports on a study which investigates the effects of three representational styles for learners of modal logic. The conditions used teaching material that was informationally
equivalent, but which differed in representational style. The three styles used can be characterised as syntactic, diagrammatic, or "learning environment" (a combination of diagrams with
examples). This paper focuses on two of the concepts covered in the material: proofs and modal systems. A complete report can be found in [13]."
Missikoff M., Pizzicannella R.:
A Deductive Approach to Object-Oriented Diagramming for Information System Analysis.
MGV vol. 6, no. 1, 1997, pp. 129-151.
With the rapid expansion of Object-Oriented (O-O) programming, Object-Oriented Analysis and Design (OOAD) is also rapidly developing, with a significant number of methods being proposed in the
literature, and a few tried out in the field.
OOAD methods are based on a diagrammatic approach. Diagrams are intuitive, fast to learn and easy to use. However they lack formality and therefore their use is mainly descriptive, while
validation and verification are, to a large extent, performed manually. Lack of formality in diagrammatic modeling is not easily removed without loss of intuitiveness and transparency.
In this paper we present a method aimed at adding formality, without loss of clarity, in an OOAD diagram. This is achieved by using a simple, rule based, diagrammatic formalism: TQD++, and a
deductive approach. TQD++ has been conceived to supply a diagrammatic representation for the Object-Oriented specification language: TQL++, used to construct conceptual models of O-O database
applications, having a formal semantics. Our diagrammatic approach is based on an abstract syntax, where the graphical symbols are represented in their essence, not their appearance. The actual
drawing of symbols is left to a high level layout facility. In this approach, the abstract diagram models the part of the analysis best suited to be represented graphically, then the analysis
model is completed by conceptual annotations in TQL++. This hybrid analysis model can be verified and validated by Mosaico, an O-O conceptual modeling tool developed at IASI-CNR.
Kozera R., Klette R.:
Finite difference based algorithms for a linear shape from shading.
MGV vol. 6, no. 2, 1997, pp. 157-201.
We analyse different sequential algorithms for the recovery of object shape from a single shading pattern generated under the assumption of a linear reflectance map. The algorithms are based on
the finite difference approximations of the derivatives. They operate on a rectangular discrete image and use the height of the sought-after surface along a curve in the image (image boundary) as
initial data.
Tieng Q.M., Boles W.W., Deriche M.:
Recognition of polyhedral objects based on wavelet representation and attributed graphs.
MGV vol. 6, no. 2, 1997, pp. 203-224.
We propose a method for recognising polyhedral objects from either their weak perspective or orthogonal projection on an image plane. The method is applicable when there is a single polyhedral object per
image. Each solid object is represented by an attributed graph whose nodes represent the object faces and the arcs represent the edges formed by the intersection of adjoining faces. Each node of
the graph is associated with an attributed vector consisting of a set of wavelet transform based affine invariant features in addition to a scalar feature. This scalar feature is the type of
shape which the face belongs to. Each arc is attached with one scalar attribute value which is the angle between two adjoining faces corresponding to the ends of that arc. A hierarchical matching
algorithm is proposed for recognising unknown objects based on the attributed graphs. The algorithm consists of four stages arranged in the order of increasing computational complexity. The
proposed method has been tested with several polyhedral objects in synthetic as well as real images. Experimental results show that in most cases, a single projection image is sufficient to
recognise the unknown object in the scene.
Verestoy J., Chetverikov D.:
Shape defect detection in ferrite cores.
MGV vol. 6, no. 2, 1997, pp. 225-236.
In the framework of a European technological research project, a general method is presented for shape measurement and defect detection of industrially produced objects using the characteristic
2D projections of the objects. The method is applied to the visual inspection and dimensional measurement of ferrite cores. An optical shape gauge system is described, based on rotation-invariant
shape matching. A novel shape representation is introduced that facilitates the invariant matching. Special attention is paid to finding the optimal reference position and orientation (pose) of
the measured shape for invariant comparison to the reference shape. This problem arises because no reference pose (e.g., baseline) can be specified a priori as shape defects may deteriorate any
of the dimensions.
Key words: image analysis, industrial inspection, ferrite cores, shape gauge, dimensional measurement, invariant shape comparison, shape matching.
Pham B.:
A hybrid representation model for aesthetic factors in design.
MGV vol. 6, no. 2, 1997, pp. 237-245.
A general hybrid representation scheme is presented for aesthetic factors in design, which consists of three parts: visual, symbolic and quantitative. This scheme would allow designers' aesthetic
intents to be specified in ways that are more intuitive and familiar to designers. The representation is then inferred and mapped onto appropriate mathematical and geometric techniques in order
to achieve required effects on shape and other characteristics of designed objects. This approach would provide an environment which allow designers fluid exploration of ideas in the conceptual
stage of design, without being encumbered with precise mathematical concepts and details.
Key words: aesthetic factors, computer-aided design.
Palenichka R. M.:
Lossless image data compression by binary block segmentation.
MGV vol. 6, no. 2, 1997, pp. 247-264.
The problem of reversible data compression of radiographic images is considered with application to the diagnostics imaging. The hierarchical (multiresolution) prediction approach with a
subsequent entropy encoding is proposed for efficient compression by using a preliminary spatial decorrelation of image data. The process of decorrelation is envisaged as a feature extraction
process by a differentiation in the space domain based on a piecewise polynomial model of the image data. The binary segmentation of blocks allows the block entropy to be controlled before encoding,
during the multiresolution prediction process. The experimental results confirm a relatively high compression ratio for this lossless method.
Key words: image lossless compression, binary segmentation, entropy encoding, prediction, decorrelation.
Ignatiev V.M., Abuzova I.V., Larkin E.V.:
Image filtering in the MIMD concurrent computer.
MGV vol. 6, no. 2, 1997, pp. 265-274.
Organization principles and time characteristics of an image filtering in the MIMD concurrent computer with interprocessor communicator are considered. It has been shown, that the dependence of
computational time on the number of processors has a minimum point, which position on the time-axis is defined by hardware and software features of a system.
Key words: filtering, concurrency, communicator, image.
Stevens M.R., Beveridge J.R., Goss M.E.:
Visualizing multi-sensor model-based object recognition.
MGV vol. 6, no. 3, 1997, pp. 279-303.
A difficult problem when designing automatic object recognition algorithms is the visualization of relationships between sensor data and the internal models used by the recognition algorithms. In
our particular case, we need to coregister color, thermal (infrared), and range imagery, to 3D object models in an effort to determine object positions and orientations in three-space.
In this paper we describe a system for interactive visualization of the various spatial relationships between the heterogeneous data sources. This system is designed to be closely linked to the
object recognition software such that it allows detailed monitoring of each step in the recognition process. We employ several novel techniques for visualizing the output from an imaging range
device. Our system also incorporates sensor models which can simulate sensor data for visible features of stored object models, and display these features in the proper position relative to the
appropriate sensor.
Key words: CAD modeling, multi-sensor, visualization.
Ranefall P., Nordin B., Bengtsson E.:
A new method for creating a pixel-wise box classifier for colour images.
MGV vol. 6, no. 3, 1997, pp. 305-323.
When segmenting colour images, pixelwise classification is often a useful approach. In this paper, a method for creating a pixelwise box classifier to be applied to multiband images is presented. It is based on a hierarchical colour-space splitting algorithm originally presented as a method for selecting colours for a display that does not support full colour quality. Through the addition of interactively defined reference pixels, the original unsupervised clustering algorithm is transformed into a supervised classification algorithm. This classifier is compared with
the commonly used Maximum Likelihood (ML) classifier, with respect to speed and average colour distance. It is also shown that the algorithm applied to a reference image defines a metric in the
colour space. The proposed method is particularly useful when the same classifier should be applied to several similar images, since the resulting box classifier can be implemented efficiently
using table look-up techniques.
Key words: box classifier, colour images, supervised classification.
Wang Y., Bhattacharya P.:
An algorithm for finding parameter--dependent connected components of gray-scale images.
MGV vol. 6, no. 3, 1997, pp. 325-340.
In a previous work, we have introduced the concept of a parameter-dependent connected component of gray-scale images, which is a convenient tool to analyze or understand images at a higher level
than the pixel level. In this paper, we describe an algorithm for finding the parameter-dependent components for a given image. We discuss different strategies used in the algorithm. Since the
proposed algorithm is independent of the formation of the images, it can be used for the analysis of many types of images. The experimental results show that for some appropriate values of the
parameters, the objects of an image may be represented by its parameter-dependent components reasonably well. Thus, the proposed algorithm provides us with the possibility of analyzing images
further at the component level.
Key words: gray-scale image, connected components, algorithm.
Aboul Ella H., Nakajima M.:
Image morphing with scattered data points based on snake model and thin plate spline transformation.
MGV vol. 6, no. 3, 1997, pp. 341-351.
Image morphing is an active field of research, and recent efforts aim at improving both the user interface and the warping results. Generally, the morphing technique involves three problems. The first is to establish correspondence between the two given images, which is the most difficult part of the morphing process; the correspondence is usually established by hand. The second problem is to define or construct a mapping function which deforms the first image towards the second one based on these feature points, and vice versa. The third problem is to blend the pixel values of the two deformed images to create in-between images, which completes the morphing process. This study proposes a strategy for solving these problems.
First, we adopt a semi-automatic algorithm based on the snake model to specify the feature correspondence between the two given images. It allows a user to extract a contour that defines facial features such as lips, mouth, profile, etc., by specifying only the endpoints of the contour around the feature, which serve as the extremities of the contour. We then use these two points as anchor points and automatically compute the image information around them to provide boundary conditions.
Next, we optimize the contour by taking this information into account, at first only close to its extremities. During the iterative optimization process, the image forces move progressively from the contour extremities towards its center to define the feature. This helps the user easily define the exact position of a feature, and it may also reduce the time taken to establish feature correspondence between two images. For the second image morphing problem, this paper presents a warping algorithm using the thin plate spline, a well-known scattered-data method which has several advantages. It is efficient in time complexity and yields smoothly interpolated morph images with only a remarkably small number of specified feature points. It allows each feature point to be mapped to the corresponding feature point in the warped image.
Finally, the in-between images are defined by creating a new set of feature points through the cross-dissolving process.
Key words: feature specification, image warping, image morphing, snake, thin plate spline.
Hajdasz T., Maciejewski H.:
Image filtering for noise reduction based on fractal dimension.
MGV vol. 6, no. 3, 1997, pp. 353-361.
In this paper we present an algorithm for noise reduction based on estimating the fractal dimension of the neighborhood of points in the image. Based on the estimated fractal dimension, pixels interpreted as noise are distinguished from objects of the original clean image. This technique is effective for filtering, for example, scanned technical drawings. A few examples presented in the paper show that the algorithm introduces hardly any blurring into the edges of the processed image. The algorithm can be applied in software for filtering scanned images.
Key words: filtering for noise reduction, fractal dimension.
Abdel-Qader I.M.:
Computation of displacement fields in noisy images.
MGV vol. 6, no. 3, 1997, pp. 363-380.
Motion estimation is an ill-posed problem since the motion parameters that depend on optical variations are not unique. Researchers need to add constraints to obtain unique displacement
estimates. In this paper, a new motion estimation algorithm based on Markov Random Fields (MRF) modeling is developed. The algorithm adds a new constraint to the motion problem, named the
restoration term. As with MRF model-based algorithms, the motion estimation problem is formalized as an optimization problem. Mean Field Annealing Theory is used to compute accurate displacement
fields in the noisy image with explicit consideration of the noise presence. Furthermore, the algorithm computes the mean field values of the estimated image. The algorithm results in accurate
displacement vector fields, even for scenes with noise or intensity discontinuities as demonstrated by the implementation results on synthetic and real world image sequences.
Gorelik A.G.:
Three-valued calculi in the problem of conversion from CSG to boundary models.
MGV vol. 6, no. 3, 1997, pp. 381-387.
This paper describes a computer method for conversion from Constructive Solid Geometry (CSG) models to Boundary Representation (B-Rep). The CSG model of an object is represented in algebraic form as a function of arbitrary length. The conversion from the CSG model to B-Rep is performed in a single step. For solving this problem, the Indexing Three-valued Calculi (ITC) are used.
Key words: constructive solid geometry, boundary representation, conversion, three-valued calculus.
Tarel J.-P.:
Global 3D planar reconstruction with uncalibrated cameras and rectified stereo geometry.
MGV vol. 6, no. 4, 1997, pp. 393-418.
This article presents a seldom-studied 3D reconstruction approach and focuses on its numerous advantages. It is a global approach, in which the algorithm attempts to reconstruct geometric high-level features by using only global attributes of the image projections of each feature. Our elementary feature is a 3D planar patch, and geometric moments are the global attributes. The proposed method of reconstruction has the following advantages:
- the use of geometric moments of image regions does not require a pixel-to-pixel matching and yields robustness to small segmentation errors on region edges,
- the rectified stereo geometry constraint allows reconstruction with uncalibrated or calibrated cameras when the epipolar geometry is known,
- valid matched regions are selected, and thus probably occluded planar patches are not reconstructed,
- interpolation of views between the left and right cameras can be performed to produce synthetic intermediate views of the observed scene.
Experiments on real and synthetic stereograms are presented to illustrate the advantages of the approach.
Key words: 3D, stereo, reconstruction, planar assumption, uncalibrated camera, rectified geometry, moments of inertia, geometric attributes, affine invariants, view morphing.
Dabkowska M., Mokrzycki W.S.:
A multiview model of convex polyhedra.
MGV vol. 6, no. 4, 1997, pp. 419-450.
This paper deals with multiview models of convex polyhedra for visual identification systems. In particular, it presents a new method and algorithm for generating views of convex polyhedra using the view sphere concept. It also describes a new method and algorithm for completing the set of views of a model. The method consists in defining one-view areas on the view sphere and investigating the adjacency of these areas. Information about the adjacency of one-view areas is needed for checking the covering of the sphere with these areas, for finding possible gaps in the covering, and for generating missing views. The completeness of the covering of the sphere with one-view areas is necessary for the correctness of the representation.
Key words: model based identification of objects, 3D view representations of models, view sphere, one-view area, covering of a view sphere with one-view areas, completeness of representation.
Wünsche B.:
A survey and analysis of common polygonization methods & optimization techniques.
MGV vol. 6, no. 4, 1997, pp. 451-486.
Implicitly defined surfaces (or isosurfaces) are a common entity in science and engineering. Polygonizing an isosurface allows it to be stored conventionally and permits hardware-assisted rendering, an essential condition for achieving real-time display. In addition, the polygonal approximation of an isosurface is used for simplified geometric operations such as collision detection and surface analysis. Optimization techniques are frequently employed to speed up the polygonization algorithm or to reduce the size of the resulting polygon mesh.
Bartkowiak A., Szustalewicz A.:
Detecting multivariate outliers by a grand tour.
MGV vol. 6, no. 4, 1997, pp. 487-505.
A method of viewing multivariate data vectors and identifying outliers among them is described. The method is applied to two sets of benchmark data: Brownlee's stack-loss data and the Hawkins-Bradu-Kass data. All the outliers contained in these data sets are easily identified.
Generally, it is expected that the method will yield good results (i.e. will find the outliers) for data having elliptical or nearly elliptical distributions.
Key words: exploratory data analysis, atypical data vectors, graphical representation of points from multivariate space, sequential rotations and projections.
Grabska E.:
Theoretical concepts of graphical modelling. [Dissertation Abstract]
MGV vol. 6, no. 4, 1997, pp. 507.
The purpose of the dissertation is to develop a theoretical basis of graphical modelling which is closely connected with creative and commercial design. Graphical modelling is discussed within
the framework of graph transformations.
Reviewers' index
Authors' index
Contents of volume 6, 1997
Maintained by Zenon Kulpa
Last updated Oct 4, 1999 | {"url":"http://bluebox.ippt.pan.pl/~zkulpa/MGV/MGV6.html","timestamp":"2014-04-18T08:02:04Z","content_type":null,"content_length":"33303","record_id":"<urn:uuid:b4aa4ed7-c55a-4395-9c91-d37a1f22bd0a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ewing Township, NJ SAT Math Tutor
Find an Ewing Township, NJ SAT Math Tutor
...As a teacher of chemistry and science at the college level, I've taught and used math in my classes for over 20 years. Also, I have 5 children of my own whom I've tutored through their own math classes over about 20 years. My 3 oldest children are currently all studying science and math in college.
10 Subjects: including SAT math, chemistry, algebra 2, GED
I taught high school mathematics in New Jersey for over 20 years. I have experience tutoring students from my former high school and I have worked as a tutor for Sylvan Learning Center and other
organizations. My favorite math subjects are pre-calculus and statistics.
14 Subjects: including SAT math, calculus, statistics, geometry
...Please note that I only tutor college students, advanced high school students, returning adult students, and those studying for standardized tests such as SAT, GRE, and professional licensure exams. I have nearly completed a PhD in math with a heavy emphasis in computer algebra. I am a world-reno...
11 Subjects: including SAT math, calculus, ACT Math, precalculus
...So, before every issue, I would work one-on-one with each of my writers to introduce them to new writing techniques and work to rewrite their articles to prepare for print. I'm generally a math and writing nerd. When I teach, I find it most important to give perspective about the field of study we're covering.
25 Subjects: including SAT math, chemistry, calculus, writing
...I also have a USSF State D license administered by New York State West Youth Soccer Association. Microsoft Access is a great information management tool that allows you to store, report, and analyze information within a relational database. I can assist you with understanding the program; creating tables, forms, reports, and queries; and forming expressions and creating functions.
27 Subjects: including SAT math, calculus, statistics, geometry
Wire Routing Problem
April 29th, 2009, 12:56 PM #1
Junior Member
Join Date
Apr 2009
Hi guys,
So I've been working on writing an algorithm, and I'm somewhat stuck. Basically, the problem is that I have an m x m 2D array that looks something like this:
1 S 1 0
0 0 1 T
Basically, I have to go from S to T only moving through the ones, and only moving either up, down, left, or right (no diagonal).
The first pass through the grid I have to label every coordinate with its distance from S. IE starting at S, the new grid above would look like this:
1 S 1 0
0 0 2 T
Then I have to find the shortest path to T from S and return the coordinates of every step of that path. I have implemented a queue class to do this process (I have to write my own ADT), but I'm not sure exactly how to finish the algorithm for the coordinate labeling. So far it looks something like this; I'm just really struggling with moving from one position to the next, then checking every neighbor, etc. Basically, what I want to do is this: use the queue to push all the current neighbors of the current coordinate into the queue. If T is not one of them, then pop the front and repeat for the new front. I also need to label the distances at some point in the loop as well.
Here's some code:
startX and startY is the position of S in the grid, and targetX and targetY is the position of T in the grid. Also, theGrid[startX][startY] = 50 and theGrid[targetX][targetY] = 51 (ie 50
represents S and 51 represents T in the int array)
void updateGrid (int ** theGrid) {
Queue<int> theQueue;
int top = startY - 1;
int left = startX - 1;
int right = startX + 1;
int bottom = startY + 1;
int lastX, lastY;
int distance = 1;
while (theQueue.dequeue() != 51) {
//Right here I'm just not sure how to manipulate the coordinates so the queue is looking at the proper position
Any help would be greatly appreciated.
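Edit: to make the question clearer, here is the rough shape I think the labeling pass should take. This is a completely untested sketch, and it uses std::queue and std::vector only so that it stands alone; once my own Queue<int> works, it should drop in the same way. The grid codes are the same as above (0 blocked, 1 open, 50 for S, 51 for T):

```cpp
#include <queue>
#include <vector>

// Untested sketch of the first-pass labeling (basically Lee's wire-routing
// algorithm). Coordinates are packed into a single int (row * m + col) so
// they fit in a queue of ints. Returns the length of the shortest path from
// S to T, or -1 if T cannot be reached.
int labelDistances(std::vector<std::vector<int> > &g, int startX, int startY) {
    int m = g.size();
    // Open cells are marked 1, which collides with a distance label of 1,
    // so a separate array tracks which cells have already been labeled.
    std::vector<std::vector<int> > dist(m, std::vector<int>(m, -1));
    std::queue<int> q;
    dist[startX][startY] = 0;
    q.push(startX * m + startY);
    int dx[4] = {0, 0, -1, 1};
    int dy[4] = {-1, 1, 0, 0};
    while (!q.empty()) {
        int cur = q.front();
        q.pop();
        int x = cur / m, y = cur % m;
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= m || ny < 0 || ny >= m) continue;
            if (g[nx][ny] == 51) return dist[x][y] + 1; // reached T
            if (g[nx][ny] == 1 && dist[nx][ny] == -1) { // open and unvisited
                dist[nx][ny] = dist[x][y] + 1;
                g[nx][ny] = dist[nx][ny];               // write label into grid
                q.push(nx * m + ny);
            }
        }
    }
    return -1; // T is unreachable
}
```

For the second pass, I'm guessing I can walk backwards from T, at each step moving to any neighbor whose label is exactly one less, until I reach S. Does that sound right?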
Re: Wire Routing Problem
Also, forgot to include this, here is my queue class:
#ifndef QUEUE_H
#define QUEUE_H
#include <iostream>
using namespace std;
#define maxSIZE 100
template <class qType> class Queue {
	qType queue[maxSIZE];
	int front, back, currentSize;
public:
	Queue() {
		front = back = currentSize = 0;
	}
	void enqueue(qType num);
	qType dequeue();
	bool isEmpty();
	int theSize();
};

template <class qType> int Queue<qType>::theSize() {
	return currentSize;
}

template <class qType> bool Queue<qType>::isEmpty() {
	return currentSize == 0;
}

template <class qType> void Queue<qType>::enqueue(qType num) {
	if (currentSize == maxSIZE) {
		cout << "Queue is full.\n";
		return;
	}
	queue[back] = num;
	back = (back + 1) % maxSIZE; // cycle around
	currentSize++;
}

template <class qType> qType Queue<qType>::dequeue() {
	if (isEmpty()) {
		cout << "Queue is empty.\n";
		return 0;
	}
	qType value = queue[front]; // remember the front element before advancing
	front = (front + 1) % maxSIZE; // cycle around
	currentSize--;
	return value;
}

#endif
April 29th, 2009, 02:04 PM #2
Junior Member
Join Date
Apr 2009 | {"url":"http://forums.codeguru.com/showthread.php?476123-Wire-Routing-Problem&mode=hybrid","timestamp":"2014-04-19T00:13:07Z","content_type":null,"content_length":"70190","record_id":"<urn:uuid:8f65c548-c5d9-46bd-a96e-0114030a110a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Ellsberg Paradox
I'm reading Iconoclast, by Gregory Berns, the distinguished chair of neuroeconomics at Emory University. He's a professor of psychiatry and economics, which makes for an interesting combination. The book is about the way
successful and creative people think and act, with a special focus on fear and how it affects behavior. Early in the book, Berns describes something called The Ellsberg Paradox. Berns uses it as an
example of people's fear of the unknown.
There are two large urns placed in front of you. The urns are completely opaque, so you cannot see their contents. The urn on the left contains ten black marbles and ten white ones. The urn on
the right contains twenty marbles, but you do not know the proportion of black to white. Now, the game is to draw a black marble from one of the urns. If you are successful, you win $100. You
only have one chance, so which urn will you draw from? Keep the answer in mind.
Let's play again. Now, the game is to draw a white marble. Again, you only have one chance, so which urn will it be?
Most people when confronted with these choices choose the urn on the left -- the one with the known proportions of black and white marbles. And therein lies the paradox. If you choose the
left-hand urn when trying to pull a black marble, that means you think your chances are better for that urn. But because there are only two colors in both urns, the odds of pulling a white must
be complementary to the odds of pulling a black. Logically, if you thought the left-hand urn was the better choice for a black marble, the right-hand urn should be the better choice for a white
marble. The fact that most people avoid the right-hand urn altogether suggests that people have an inherent fear of the unknown (also called the ambiguity aversion).
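A quick way to check the arithmetic behind the paradox: if you assume every possible composition of the unknown urn (0 through 20 black marbles) is equally likely, an assumption the puzzle itself never promises, the expected chance of drawing either color from it works out to exactly one half:

```python
from fractions import Fraction

# Assumption: a uniform prior over the unknown urn's composition, i.e. each
# count of black marbles from 0 to 20 is equally likely. The puzzle doesn't
# promise this; it is just the "no information" guess.
p_black = sum(Fraction(k, 20) for k in range(21)) / 21
p_white = 1 - p_black

print(p_black, p_white)  # 1/2 1/2: the same odds as the known urn
```

So whatever draws people to the left-hand urn, it isn't the expected value.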
Buy Iconoclast on Amazon
63 Responses to “The Ellsberg Paradox”
1. monstrinho_do_biscoito says:
hang on, if i need both a black and a white to maximise my money, then it’s in my interest to go for the left urn with the equal ratio.
2. retchdog says:
The statement is reasonable. The paradox as stated is resolved (in a very weak way) if we suppose that the player assumes the urn on the right has 10 of each marble as well.
It’s a weak resolution because if you repeat this experiment, players use the “left, left” strategy much more than half the time.
This statement is an improvement over its incarnation as the “boxer vs wrestler” problem: two equally-skilled boxers will get 2:1 odds on either, but if the best boxer in the world and the best
wrestler in the world are in a match, would you take 2:1 payoff on either?
(The idea being that the uncertainty about two totally different styles is a different kind of uncertainty. Of course, nowadays everyone knows that the wrestler would obliterate the boxer.)
3. nprnncbl says:
I agree with those saying this is not a paradox, and further I don’t think choosing the first urn in both cases is fallacious reasoning. This can be viewed as a game theoretic max-min problem:
what’s the best I can do in the worst case? That is, I want to maximize my expected payoff based on my choice of urn, where the distribution of balls in the unknown urn is as unfavorable as
In the first case, the worst case is that all the balls in the second urn are white, yielding a payoff of zero for that urn; but the first urn has an expected payoff of $50, so it is preferred.
In the second draw, we have gained no additional information about the second urn, and so we again make a worst-case assumption, only this time, the worst case is that all of the balls are black,
the first urn is again preferable.
The fallacy lies in assuming that we think about the second urn probabilistically: not only is this not what we do, it is not the only way to analyze the problem logically.
Thinking probabilistically, #18 Airshowfan averaged over possible distributions of the balls in the hidden urn, but made an assumption about this distribution, that it was uniform. This is often
a reasonable assumption, in that it is the distribution over the number of balls in the urn which has maximum entropy, although as it turns out, whatever the distribution of balls in the urn, the
probabilities of drawing white vs. black are, of course, complementary. And in the case of a uniform distribution, we end up again with an expected gain of $50 for either color. So there is no
prior reason to prefer the second urn.
So (Bayesian) probability (with the maximum entropy principle) yields no preference; but game theory (max-min principle) yields a preference for the first urn.
If we change the number of balls in the first urn so the odds are not even for that urn, then thinking probabilistically will lead us to choose urn 1 for the favored color, and urn 2 (which still
has even odds) for the disfavored color. If n>10 for one of the balls, then this strategy has an expected yield of (n/20)*100 + 50 dollars. But the game theoretic prediction remains unchanged,
and we will choose the first urn both times.
Things can change if we know we are going to be making multiple draws, and there is substantial research into the exploration-exploitation* tradeoff, from both Bayesian and min-max perspectives.
(*Question for Portuguese speakers: how would you translate that, since both words are exploração?)
In this case, if we know in advance we are making the two draws, then the worst case distribution for the second urn is evenly split, and we have no preference. If we choose the second urn, the
ball we draw does not change that assessment. So max-min no longer predicts a preference for the first urn.
Probabilistically, if the first urn is balanced, then we have seen that the second urn has the same expected payoff, so we have no reason to prefer it based on what we know. But if we draw from the second urn, we gain information about it, even from a single draw. Drawing a white ball yields more support to there being a preponderance of white balls, and less to there being a preponderance of black. So, probabilistically, we would choose from the second urn first, and then if we chose a white ball, choose from the second urn again; otherwise switch to the first urn
for the second ball. (Note that I’m assuming that all choices are made with replacement.)
So if we know in advance we are making the two draws, the predictive capability of the two theories is reversed: max-min gives no preference, while Bayesian probability yields a non-trivial
4. airshowfan says:
KyleTexas is right. Not only about how you have more info if you didn’t replace the first marble you drew (but the puzzle seems to imply that you do replace it, since it does everything it can to
make things 50-50 and does not seem to have an “A-ha, but I didn’t say you replaced the marble!” trick answer), but also about “It’s not really interesting to say people are adverse to risk ‘all
things being equal’. What’s interesting is how *unequal* odds will a person accept in exchange for more certainty?”. That’s kinda the point I was trying to make: If the odds are equal, OF COURSE
anyone will pick more knowledge. A much more interesting question is, indeed, whether you’re willing to pass up better odds in cases when you have less-complete knowledge.
And in response to #25′s "I KNOW I have a 50% chance on the left. I have no way of knowing my chances on the right. My chances swing as low as 5% all the way up to 95%, but I don't KNOW": you can
“know” your odds in that weird Schroedinger’s Cat kind of way, by adding up all possible odds and assigning them equal probability. You don’t know your odds, but if you average all your possible
odds (which really is not that different from averaging all the possible marble picks from the urn where you do know the odds), you get your odds given the information you have (and it turns out
that they’re 50-50). Since you have no information about how the person who set up the game set it up, just assume any outcome is equally likely.
All you have to do is divine from what you know of me whether I’m more likely to put the poison into my own goblet, or my enemy’s. ;)
5. qlfwyyd says:
Thanks to everyone who posted comments. I was incredibly confused – and frustrated – by the initial post. But I think some of these further explanations have cleared it up. It would have been
much clearer to me if the initial post had explained that, logically, the urn on the right has the same chance of drawing a black or white ball as the urn on the left, using AIRSHOWFAN's explanation.
6. Falcon_Seven says:
This whole ‘exercise’ focuses on the ‘probability’ that you will chose one scenario over another based on the information supplied. There is no ‘paradox’, probabilistic or otherwise, that
represents a negative consequence, so you will choose whatever ‘gamble’ you deem is most likely to ‘payoff’, given the information you are to base your decision on. You are either rewarded or you
are not, so this is essentially a ‘guessing game’.
However, if the ‘paradox’ were framed thusly, “‘Box A’ and ‘Box B’, both of which are capable of delivering either unbelievably excruciating pain or unbelievable pleasure -however, it is
impossible to tell which box will deliver what at any given time until you stick your hand into it- yet there is a small but detectable difference in ‘Box B’ that it could deliver unbelievable
pleasure, at twice the intensity of ‘Box A’ and at a slightly higher rate than ‘Box A’.”, what would you choose if this were the only information on which you were to base your decision?
This exercise is similar to an Einsteinian ‘thought experiment’ in an effort to explain choice. The only problem with it is that it does not represent the ‘real world’ when it comes to risk
because there is no downside to your choice.
7. gregberns says:
Let’s be clear, I never said that it is irrational to stick with known probabilities, but it is a paradox from a mathematical point of view (as I show in 36). But keep in mind that we are talking
about subjective probabilities, which are different than objective probabilities. Subjective probabilities are revealed by a person’s choices, and in this example, many people flip them back and
forth. Why?
Lots of discussion here that is essentially rationalization of this behavior, and I think many have touched on the reason I believe: the human brain evolved in a world that imposed higher costs
on ambiguity. When researchers at Caltech did fMRI on the brains of people faced with similar dilemmas, they observed strong responses in parts of the brain associated with fear and arousal —
what most of us feel as gut-level responses.
The point of the book (and this example)is that if you want to do truly innovative things, it takes a lot of effort by the prefrontal cortex to override these deep seated responses. Most people
8. mwitthoft says:
Anyway, thanks for making it clear that I don’t want to buy the book.
9. lvvvop says:
I don’t know what is in the urn to the right, but it’s a game, so I risk nothing either way: I can’t risk what I don’t have.
But what if it’s not a game, what if my survival depends on which urn I choose and which colour I specify? The point I’m trying to make is that theoretically we have time to assign probabilities
and then make a decision that gives us the best outcome, how often though, do we, when confronted with a situation that requires us to act quickly do we immediately choose what we know as opposed
to what we don’t? Maybe always, and those who know that make a lot of money at our expense, and deep down we know that too: But how much are you willing to risk before you are no longer willing
to risk anything?
Think of Monty Hall: two thirds of the time I’ll win if I switch, but I’ll have only one chance to play, what if chose right on my first turn? And there lies the dark heart of probability: I only
have one chance, therefore I need to know more than what happens in the long run to make an informed decision.
10. Dewi Morgan says:
Rechecking my reasons for picking the right urn, I think it was healthy skepticism. The “unknown” urn is the mystery. “What’s in the box?” – it’s well worth losing the potential for the win, to
find what the “gimmick” is: you can’t do that without checking the mystery urn. You’d walk away and *never know* if the urn held a puma. That’s not worth a 50% chance of a win.
11. Ugly Canuck says:
Yes. Instead of marbles black & white use boxes of snapping turtles and fluffy puppies.
12. mystical limpet says:
“You have no way of knowing what’s in the right jar, so drawing the black ball might be impossible (0/20). However, it’s just as likely that drawing a black ball is garaunteed (20/20).”
A lot of this assumes that the 2nd urn contains a truly random sample of marbles. But the original question doesn’t say that, it just says you don’t know the proportion of black to white marbles.
For all I know the person giving me the test deliberately filled the 2nd urn with white marbles, hoping I would be drawn to the unknown.
It’s like in Family Guy, when Peter is offered the prize of either a boat (known) or the mystery box (unknown). He chooses the mystery box, since it could be anything, including a boat.
Anyhow, if the point of the exercise is that successful people (whatever that means) are more likely to take risks, I guess that’s true. But they’re only successful because their risk paid off.
It would be just as true to say that unsuccessful people take more risks.
13. KyleTexas says:
The Wikipedia article posted by SCHORSH @ #15 is really good:
Seems to me, especially from reading this discussion, that the answer is almost an Occam’s Razor-type thing. Basically, when people are presented with two statistically equal “gambles”, but one
is dead simple to figure out, and the other requires a bit of mathematical figuring to understand, people overwhelmingly pick the simpler one.
Wow, people don’t like to think. Unless they think there’s something in it for them, or some pain that can be avoided. If not, they’d just as soon not be bothered.
I guess according to the “Ellsberg Paradox”, if you were a dictator you could pretty much invade any other country for any reason, and the populace wouldn’t really question you as long as you
kept them insulated from the cost of the invasion. Of course, in order to do that, you’d need to make sure the people doing the actual fighting were only a small percentage of the population,
which would mean you’d need some mechanism to keep them conscripted into the military indefinitely so that nobody among the broader population would be compelled to serve (i.e., consider whether
the balls in the jar add up). It might also be a good idea to enact a monetary policy of incredibly low interest rates, facilitating unprecedented access to credit. That way people could enrich
themselves by selling each other houses without actually having to produce anything — sort of like asking people to choose a white ball from two jars, one of which has 19 white balls, and the
other has an unknown number of white balls. On the liberal blogs on a place called the internet, some geeks might debate the merits of which jar you should choose, but that’s cool. Just so long
as nobody asks, “Why the fuck are you offering me $100 to pick marbles out of jars?”
14. certron says:
There is another trick at work here… being paralyzed into non-action by the uncertainty between the available choices.
(It doesn’t hurt that I’m currently listening to the Adam Freeland song with a similar title.)
15. Nylund says:
Think of this: You have two coins, one you know is fair, and the other you have no idea if it’s fair, two heads, or two tails. If I want to get heads, which coin would you choose to flip? Well,
you might think, “I know one has a 50-50 chance of getting heads. The other may have a zero percent chance, a 100% chance, or a 50% chance.” If you assume all are just as likely, your choice of coins
does not matter. The weighted average of the 2nd coin is 1/3(50%) + 1/3(100%) + 1/3(0%) = 50%.
Now it may not be right to assume that each is just as likely, but without any more information, it’s about as logical an assumption as you can make.
So, if each coin is just as likely, it doesn’t matter which you choose, so regardless of whether or not I wanted heads or tails, I’d be indifferent to either coin.
But, if I knew beforehand that I was going to do this twice, I’d of course pick the “unknown” coin in an attempt to gain more information about it. I could then eliminate one of my 3 terms from
my weighted average stated above.
So not knowing that there will be a second choice affected my first choice. The “unknown” that screws me up in this problem is not the “unknown” contents of the urn, but the “unknown” fact that I
will be asked the question twice in a row.
16. arbitraryaardvark says:
Instead of marbles, package contained Bobcat.
Would not play again.
17. mdh says:
#53 – the one thing I do know is they’ll give me 20 bucks for being right, and I don’t know what happens to the 20 bucks if I’m wrong.
That’s at least evidence of the potential for their own self-interest right there, so I assume there is at least some chance they’re gaming me.
And I loved the Princess Bride ref.
18. CharlesSpongeworth says:
Maybe I’m missing the point, but isn’t it just an Optimist/Pessimist game? I thought that the 50:50 odds of getting a black marble weren’t that good, so I thought I’d take a gamble with the urn
on the right, which might be better or might be worse. I would choose it again for the white marble as I’m still feeling lucky.
Does this mean there’s something wrong with my head?
19. ryuubu says:
I chose the right one both times…
20. bardfinn says:
Even the Wikipedia article is badly worded:
“The balls are well mixed so that each individual ball is as likely to be drawn as any other.”
Which makes a claim of probability of draw, which goes directly to proportion – because of that sentence, the yellow and black balls are in equal proportion to the red balls. Either the sentence
is false or you now know the exact proportion.
What that sentence /ought/ to read as, is “The balls are well mixed so that there are no abnormal groupings of one colour.”
21. Anonymous says:
Brilliant! Funny and to the point about the silliness of this experiment.
As to the “point” of the book, about innovation and so-called “gut-level” versus intellectual or innovative decision-making — this has a lot more to do with learning styles etc. For the majority,
“gut-level” is social behaviour — for some, math and logic are the fundamental elements, and social is hard.
Innovation happens only when someone is out-of-touch with the world around them. This is practically the definition of the word.
22. gregberns says:
Full disclosure: I am the author of the original passage which is quoted from my new book. I’m glad that it has provoked such a debate.
As many here have noted, all things being equal, they prefer the urn where probabilities are known. If you think your chance of picking a black marble is greater in the left urn then your
subjective probabilities are:
P(B)L > P(B)R
but the probability of picking a white marble is complementary to picking a black marble:
P(B)L = 1 – P(W)L
P(B)R = 1 – P(W)R
if you plug these back into the original ordinal relationship, you get:
P(W)R> P(W)L
which is why it’s “paradoxical” to stick with the left urn.
But my book is about the neural circuitry that underlies both innovative thinking and why the brain is not really evolved to be truly innovative (because it operates under a strict energy
budget). Uncertainty is only part of it. Perception is another big part. Check out the excerpts on Amazon!
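The complement algebra above can also be sanity-checked numerically. A small sketch (the helper name is mine, not from the book):

```python
# If your subjective P(black|left) exceeds P(black|right), the complement
# rule P(white) = 1 - P(black) forces P(white|right) > P(white|left).
import random

def prefers_right_for_white(p_black_left: float, p_black_right: float) -> bool:
    """True when taking complements reverses the ordering of the two urns."""
    return (1 - p_black_right) > (1 - p_black_left)

random.seed(1)
checked = 0
for _ in range(10_000):
    a, b = random.random(), random.random()
    if a > b:  # you believe the left urn is strictly better for black
        assert prefers_right_for_white(a, b)
        checked += 1
print(f"ordering reversed in all {checked} sampled belief pairs")
```

Whatever subjective probabilities you hold, preferring the left urn for black while also preferring it for white is inconsistent with the complement rule, which is the paradox.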
23. Divers Hands says:
One of the larger problems I find with some of the logic here is that it misses the real point of how possible probabilities function when looking at the thought experiment proposed by
Schroedinger’s famed Cat. Shroedinger was not trying to point out that all possibilities exist prior to a choice, but how ridiculous that prospect is when applied to the real world. For example,
prior to choosing which urn one chooses from, the mathematical reasoning is that the probabilites are equal. However, the minute one makes the choice to choose from the unknown urn, the
probabilities cease being equal, as whatever the actual quantities of marbles in the urn are is now the actual odds one is dealing with. Hence Schroedinger’s use of a cat: no one really beleives
that the cat is actually both alive and dead in the box. It is one or the other. Only in the realm of theory is the cat both alive and dead.
Regardless, I still fail to see how any part of this paradox (either the poorly worded one from the post or the more logically consistent one at the Wikipedia link) demonstrates anything pertaining
to humanity’s fear of the unknown. As the postings here seem to demonstrate, it really seems to separate us by how inclined we are to place our faith in a theoretical universe or an actual one.
24. robwiss says:
I’m not sure I agree with the author’s premise. Making a less than optimal or irrational choice due to not fully understanding a problem doesn’t seem like an example of fear of the unknown to me.
Rather, it seems like straightforward evidence that average people are awful at probability theory. You don’t need to do an experiment to understand that. All you need to do is look at the class
average for intro statistics and probability classes.
A more interesting experiment would be to see what odds the urn with the known odds would need to have in order to get people to prefer the unknown urn.
I’d guess not much below 50%, maybe 40%.
26. Anonymous says:
No matter which way you bet, the expected value of the outcome is $100 if you don’t know how many black and yellow balls there are.
More importantly, if an adversary was loading the urn, he or she could make the expected value no less than $100 if A&D or B&C are chosen, but could affect the expected value if A&C or B&D are
chosen. I think people are wary of situations where an adversary may know in advance which decision they are likely to make. This seems well founded, because if there were even a slight
statistical likelihood of humans choosing A&C more than B&D, it would benefit the urn owner to put more black balls in the urn. People might not know *why* or even *if* they were more
statistically inclined to choose A&C over the other choices, but the smart people would pick a safe choice over one that might allow an adversary to get ahead.
There have been a few studies showing that when playing these kinds of games, people play to minimize other people’s winnings as well as to maximize their own.
27. neurolux says:
How would I choose which urn to draw from? I guess I’d flip a coin.
28. airshowfan says:
@#49: There are other words for “exploitation” in Portuguese, but they are not English cognates for “exploration”; they are cognates for “abuse” and “advantage” and “use”.
@#50, paragraph 2: That’s what prompted me to make the Princess Bride reference: Either you can deduce something about the person who set up the game, or you cannot. If you can, then use that
information, and offset the odds from 50-50 by some amount. If you cannot, then I don’t see any course of action more reasonable than assuming that every outcome is equally likely.
As far as I can tell, there is no real difference between saying “this urn contains marbles, half are white and half are black, so I have a 50% chance of picking out a black one” and saying “this
urn contains some black marbles and some white marbles, I have no idea what the proportion is, so given my knowledge it is fair to expect a 50% chance of picking out a black one”.
If you disagree with my statement (that there is no real difference), you might claim “But there could be NO black marbles in there” or “the person setting up this game could want you to lose and
have done everything in their power to make that happen” or some other possibility. Yes, that is true, but the opposite of each of those possibilities is about as probable, if you have no
knowledge about it. I mean, as an illustrative example, take the urn you DO know the ratio for (it’s half and half). Maybe a minor hand injury some time ago leads you to be more likely to draw
from the left side of the box, where the concentration of white marbles is a tad higher. Or maybe you like to draw from near the bottom, where the concentration of white marbles is a tad higher.
Or maybe your skin cells have very minor photosensitivity which will draw your fingertips towards white marbles. All kinds of things COULD make it more likely for you to draw a white marble from
the half-and-half urn. But since you have zero knowledge of these factors, you assume that each of those factors is about as likely to cause a white-marble preference as a black-marble
preference, and you say “Since I cannot know, I’ll assume every outcome (picking any one marble) is equally likely”.
Why is it ok to make that assumption with respect to taking any marble from the half-and-half urn, but not with respect to the distribution of marble colors on the other urn?
29. dainel says:
Keep the answer in mind.
Did everyone pick the left urn? Am I the only one who picked the right urn? Both times?
30. Ugly Canuck says:
A bird in hand is worth two in the bush.
Does “fear” = “wariness”? Or is “fear” perhaps too strong a word?
31. JArmstrong says:
“The urn on the right contains twenty marbles, but you do not know the proportion of black to white.”
Nor do I know whether all twenty marbles are blue or green or candy-striped. It wasn’t until the third sentence of the third paragraph that I knew there were only black and white marbles in the
second urn; I don’t know if this would have affected my decision, but the set-up could have been crafted differently.
32. dragonfrog says:
If you’re doing the second test after the first, then you may well have info to point you in the right direction.
If you picked from the left urn and didn’t replace the marble, then the ratio will be 10-9 one way or the other. If you won the first time, stick with the left urn; if you lost the first time,
switch to the right urn.
Just sayin, is all…
33. bpratt says:
“Logically, if you thought the left-hand urn was the better choice for a black marble, the right-hand urn should be the better choice for a white marble.”
Well, no. The left hand urn is a better choice in both cases since you have more information about it. I think reading that book would be pretty annoying if it’s based on that kind of spurious reasoning.
34. Wordguy says:
@ #3: Right. You know you have a 50/50 chance with the urn on the left. For the urn on the right you know that you either have the same 50/50 chance, or a worse chance for one of the colors.
Hopefully the whole book isn’t like that.
35. chip says:
This is an unfair test of logic. The “You only have one chance” in the first paragraph pushes you toward (what ends up being) the illogical choice. By the time you find out you get another shot
at it, it’s too late.
If I had known from the beginning that I would be playing twice, I’d have chosen from the right urn both times. If the proportion of black to white in that urn is not 50/50, then I would have a
better chance on one of the two tries. If the proportion was 50/50, then it would have been a wash.
36. Tommy says:
That example is just horrible. You’re not making any assumption about the ratio of marbles in the urn on the right. How would you?
37. TJIC says:
@2 posted by JArmstron:
> Nor do I know whether all twenty marbles are blue or green or candy-striped.
Well said.
Based on the environments we evolved in, the left urn makes a hell of a lot more sense.
Would you rather forage in a field that has both berry bushes AND apple trees, or in another field that you’ve never been to?
There are some interesting quirks in human nature, but there are huge numbers of “illogical” behaviors that, when analyzed, either (a) were stated so badly by the researchers that there’s no
obvious illogic in people’s reactions, or (b) actually make sense – e.g. loss aversion, because declining marginal utility arises from the real world we live in.
38. PaulR says:
If the option was to lose $100, the choice would be to go for the right urn.
It’s a fairly well-known phenomenon called prospect theory. We’re willing to take a chance on a loss, but not so much when there’s a gain. Bruce Schneier has discussed this a few times on his blog.
39. jmzero says:
The second choice is no different than the first, since you have no more information about the unknown jar than you did when you started.
I think it’s interesting that most people would choose the “known” jar – and I certainly would – but the second choice is really no different.
40. Dewi Morgan says:
My personal picks were: first round, pick the urn with the unknown proportions, in order to gain information, even if I gained no money. It wouldn’t be much information, but it would be “there is
at least one ball of the colour I extracted”. I did not (because I hadn’t read onwards) expect to be able to use the information for anything, but I did it anyway, because information is always valuable.
Second round, as others have pointed out, my answer was “same urn if I lost last time, otherwise, change urns”.
I disagree that it’s aversion to uncertainty: If that were the case, then everyone would select as I did. I think it’s a far less esoteric aversion: the aversion to mathematical complexity.
The odds of drawing from the 50% urn are *easy*. The odds of drawing from the other urn would take some time to think about. Nobody wants to take that time, when there is a reward waiting to be
had. Sure, intuitively the odds appear to be 50-50 too, but just a “more complicated” 50-50, with potential for pitfalls. Avoid the complexity, go for the simple.
The only reason I chose different was that I decided NOT to bother figuring out the odds. Instead, I decided that since, knowing my luck, I’d lose a 50-50 split nine times out of ten, I’d go with
the other one, because then I’d at least gain something from the experience even if the pot were rigged.
41. HarveyBoing says:
If you choose the left-hand urn when trying to pull a black marble, that means you think your chances are better for that urn.
The “paradox” is an interesting observation. But it’s not a true paradox. In particular, the above quote is logically unjustified. It makes an unsubstantiated claim with an obvious
counter-example: if you choose the left-hand urn when trying to pull a black marble, that means that you acknowledge your chances might be better for the other urn, but you prefer to stick with a
known “decent chance”.
Change the puzzle so that the known ratio is 19 white marbles and 1 black marble, and now I am more likely to choose the other urn because it would be difficult for that urn to have a worse
chance of choosing black than the left-hand urn (assuming we’re assured it has only black and white marbles). Even if the right-hand urn does have fewer black marbles than the left-hand one (i.e.
none), I’ve lost very little by taking the gamble, because I had very little chance of drawing a black marble from the left-hand urn anyway.
It’s the fact that the left-hand urn is “good enough” that leads to the identical choice in both cases, not that it is known to be a superior choice as compared to the right-hand urn.
There’s no paradox, though I agree that it’s an interesting way to present a concept of human behavior with respect to the unknown. The reluctance to choose the right-hand urn certainly is
related to the tendency to avoid the unknown, but there’s no paradox in always preferring the known to the unknown in an example like this.
The example doesn’t even demonstrate that the unknown is always less-preferred. It only demonstrates that it is when all else is equal. In other situations, the same person might well prefer the unknown.
42. maoinhibitor says:
You don’t know the proportion of black to white marbles in the urn on the right when you are asked to draw a black marble.
You don’t know the proportion of black to white marbles in the urn on the right when you are asked to draw a white marble.
Making your first choice does not change the state of the marbles in the urn on the right. You still have absolutely no idea about the proportion over there.
Is this a poor summary of the Ellsberg Paradox? Or is this just a good illustration of the weakness of binary thinking?
43. Daemon says:
That example definitely doesn’t support the concept that we have a fear of the unknown… at most it shows a preference for the known, provided the known is sufficiently non-lame.
44. Anonymous says:
I always tell people as an illustration of my understanding:
When it comes to the lottery, one ticket is infinitely better than none; but two or two hundred is really no better. An over-simplification, to be sure, but a useful one at that.
45. randomcat says:
#3 is spot-on. Not having any information about the right urn, the left urn is the only sensible choice. Especially considering the possibility that someone with $100 at stake may have had access
to the right urn. Rational pursuit of self-interest is not the same thing as fear.
46. doug117 says:
Agree with #3 et al. Spurious thinking.
“Logically, if you thought the left-hand urn was the better choice for a black marble, the right-hand urn should be the better choice for a white marble.”
That is the statement that is goofy.
Ambiguity aversion is perhaps fear, but nevertheless quite a useful thing.
47. Falcon_Seven says:
Maybe we should ask Schrodinger’s cat and see how it feels about its chances.
48. Schorsch says:
As usual, why buy this book with the tortured, illogical description of the paradox, when Wikipedia does it better: http://en.wikipedia.org/wiki/Ellsberg_paradox
The Wikipedia description makes sense. Check it out.
49. asuffield says:
Well yes, if there is no difference at all in the comparative value of the two choices (which there isn’t, the probabilities are identical) then I’ll pick the option about which I have more
information. But only if there is no difference.
That’s not a paradox. It’s as good a way to choose between two equivalent options as any other.
50. airshowfan says:
Schroedinger was not trying to point out that all possibilities exist prior to a choice, but how ridiculous that prospect is when applied to the real world.
Okay then. Every marble is actually half-black and half-white, until you pull it out at which point you force it to choose… ;)
51. Spikeles says:
Or… you could just… “accidentally” knock them both over and see which one the marble comes out of. Oops.. sorry.. so clumsy.. can I have my $100 now?
52. airshowfan says:
See, but the thing is, no urn is the OBVIOUS choice. With both urns, you have a 50-50 chance of drawing a white marble or a black marble (assuming all marbles in both urns are white or black).
There are twenty-one possible configurations for the right urn. 1: All twenty are black. 2: Nineteen black and one white. 3: Eighteen black and two white. And so on. You have no idea which of
those 21 configurations you’re dipping your hand into. So if you assume each of the 21 configurations is equally likely (and it is the discomfort with making this assumption that will keep people
from going with the right urn if they have a choice), the odds of drawing out a black marble are (1/21)*(20/20) plus (1/21)*(19/20) plus (1/21)*(18/20)… which adds up to half. Just like the left
urn. So the odds of the right urn are statistically the same as the odds of the left urn, in a weird Schrodinger’s Cat kind of way.
The point of the “paradox” is that, all other things being equal (i.e. two urns both with 50-50 odds), you’d rather take a gamble in a system with fewer unknowns – or, more precisely, in a system
where you have to make fewer assumptions in order to estimate your odds.
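The 21-configuration average described in that comment can be computed exactly in a few lines of Python, assuming (as the comment does) that every composition of the right urn is equally likely:

```python
# Average P(black) over the 21 equally likely compositions of the
# right-hand urn (k black marbles out of 20, for k = 0..20).
from fractions import Fraction

p_black_right = sum(Fraction(1, 21) * Fraction(k, 20) for k in range(21))
print(p_black_right)  # 1/2 -- identical to the known 10/20 urn
```

Using exact fractions rather than floats makes the equality with 1/2 unambiguous, with no rounding to argue about.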
53. highlyverbal says:
“Logically, if you thought the left-hand urn was the better choice for a black marble…”
But I didn’t think the left-hand one was BETTER! The fact that I picked it only shows I thought it was equivalent -or- better. Sheesh.
54. SeamusAndrewMurphy says:
So, what I’m getting is that you only have one chance to make a decision, but you’ll have two.
That’s perhaps not the most empirical methodology.
It seems this tells nothing about fear of the unknown, but something of using your thinking cap to determine odds on known info versus unknown. That’s fear?
Being risk averse is not the same as fearing the future. It’s logical thinking.
Using this example, Las Vegas odds makers must exist in a near panic state.
As an aside, everyone knows the axiom that 50% of all doctors graduate in the bottom half of their class. Well so do economist/psychiatry profs.
55. juanpa says:
“Logically, if you thought the left-hand urn was the better choice for a black marble, the right-hand urn should be the better choice for a white marble.”
If you thought the left-hand urn was better for a black marble then you also thought the probability on the right-hand urn for the black marble was less than 0.5.
Given the information available here, I would say this is not warranted (it should also be 0.5 for the right-hand urn). But if you thought that for whatever reason, then yes, you should
also conclude that the chances for the white marble on the right-hand urn are greater than 0.5.
56. KyleTexas says:
GREGBERNS @ #36 thanks for joining the discussion.
I think the point you’re missing is what I’d like to term the “Cider In Your Ear” angle, from Sky Masterson’s advice in the musical Guys and Dolls:
“One of these days in your travels, a guy is going to show you a brand-new deck of cards on which the seal is not yet broken. Then this guy is going to offer to bet you that he can make the jack
of spades jump out of this brand-new deck of cards and squirt cider in your ear. But, son, do not accept this bet, because as sure as you stand there, you’re going to wind up with an ear full of cider.”
What you’re saying is that if a test subject believes urn #1 is the best one to go for when choosing white marbles, it would therefore be *irrational* for the same subject to choose urn #2 for black marbles.
But this isn’t really true, because the math shows that statistically, it doesn’t matter which urn you pick, given a truly random distribution of the marbles in the unknown urn and an
*otherwise*fair*experiment*. You’re saying if you pick one urn for one color marble, the rational thing is to pick the other urn for the other color. But it doesn’t *really* matter which urn you
pick first. But that’s not really the position the test subject is operating from, either:
We’re conditioned as human beings to expect a catch. We’re conditioned to think there’s something you know that we don’t — that you, the gamble offerer, know how to make that jack of spades jump
out of the deck and squirt cider in our ears. Given this fact, the natural inclination is to go with the option with more certainty, because that option appears to contain fewer unknown
variables, thereby reducing your “cider-to-ear-squirting” advantage.
The problem with your experiment is you’re setting it up in sociological terms: somebody is offering you something of real value in your life — in this case $100 — under circumstances which are
somewhat fishy. So the natural inclination is to respond in a psychologically appropriate way. But then you’re putting on your mathematical hat, crunching the numbers and saying “Look at the
irrational humans!”
Of course they’re behaving (mathematically) irrationally, because you’re giving them a (psychologically) irrational test: Who in the world is ever going to come up to you and offer you $100
*real* US dollars to pick marbles?
This isn’t a math problem. And it’s not a human nature problem, either. Strictly speaking, it’s a *humans* (plural) nature problem. Because the decision-making of the test subject isn’t the
result of their own internal silliness; it’s the result of you, the test-giver, presenting a weirdly contrived situation. What, you’re going to give me $100 for doing nothing but pick a marble?
That’s ridiculous!
The rational thing to do is to respond from a position of distrust, which means to always gravitate toward the option which provides more certainty. No P(B)L = 1 – P(W)L proof can account for that.
57. forgeweld says:
Thanks for the direction to the coherent explanation at Wikipedia, Schorsch.
58. highlyverbal says:
The tie-breaker for me on the urns was on the value of information! If I had a chance to draw another without replacement, as sometimes happens in my life, then knowing the rest of the marbles
has value.
I prefer information, I am not averse to ambiguity.
59. juanpa says:
Actually I think the difference is that in the case of the left-hand urn, in order to assign probabilities you can use the frequencies.
In the right-hand urn you have to resort to some other heuristic in order to assign probabilities; in this case the appropriate one is the indifference principle (or symmetry principle).
But I’d say this principle is not obvious to people not exposed to probability thinking, whereas frequency ==> probability is perhaps a basic brain function.
60. nigel1965 says:
“Logically, if you thought the left-hand urn was the better choice for a black marble, the right-hand urn should be the better choice for a white marble.”
I’m sorry, but how is that logical? I KNOW I have a 50% chance on the left. I have no way of knowing my chances on the right. My chances swing as low as 5% all the way up to 95%, but I don’t KNOW.
61. KyleTexas says:
Actually, the correct answer for which urn you should use for the second pick depends on the *color* of the marble you withdrew with your first pick.
If you pull out a black marble from the 1st (known proportion urn), you should pick from that urn again, because you now know it has 10 white marbles to 9 black, giving you a .5263 probability of
pulling out a white marble with your second pick. If, on the other hand, you pull out a white marble on the first try, you should switch urns (where you can assume random probability distribution
of marbles) b/c you only have a .4737 probability of picking white again from the known urn.
The same rule holds true when drawing marbles from the second urn, although the value of the data point from your first pick decreases. Basically, if you pull out a black marble on your first
pick, you should still stay with that urn, but you only have a .5013 probability of picking out a white one (again, assuming random distribution of marbles)
If you put the marbles *back* after the first draw, and you pull from the “known-urn” first, it really doesn’t matter which urn you pick from second.
However, if you put the marbles *back* and you pull from the unknown urn first, then you should *switch* urns if you pull a black marble first, and *stay* with the unknown urn if you pull a white
marble first. Because you’re more likely than not to pick the dominant color on your first draw.
But agreed, overall this is a pretty dumb thought experiment, at least as far as proving the point that people are *afraid* of the unknown. *Maybe* you could stretch it to say people tend to
favor options about which they have the most information, but I think that just implies they’re rational. I mean, the goal is to win $100 bucks, right?
People naturally pick the known urn because of issues stated by #2 and others — e.g., it’s unclear whether some of the marbles in the unknown urn might be “candy-striped.” Now, once the Puzzle
Daemon explains “No, there are only black and white marbles,” you might think about it more and realize it doesn’t really matter which urn you pick on your first go. But again, that’s just
another way of making your decision based on more information–in this case the information that there are absolutely *no* other variables at play than the unknown proportion of the marbles in urn
#2. In that case, what the heck, I’ll pick either urn.
But I only feel that way because now I have more information about *both* urns. Before, I only had reliable information about one urn. If I knew there were 9 black marbles and 11 white marbles, I
wouldn’t pick from the known-proportion urn when drawing for a black marble. I’d take my chance with the unknown urn.
Now, here’s the interesting question: If I knew there were 499 black marbles and 501 white marbles in the known urn, and an unknown distribution in the second urn, which would I pick? I might
actually pick the first urn, because of nagging questions like “are any of the marbles candy-striped?” Getting dinged by the probability of .001 might be an acceptable “insurance premium” against
the fact that the Puzzle Daemon is pulling a fast one on me that I haven’t been able to figure out.
It’s not really interesting to say people are averse to risk “all things being equal.” What’s interesting is how *unequal* odds will a person accept in exchange for more certainty?
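The known-urn figures quoted in this comment check out. Here is a quick verification of the without-replacement numbers for the 10-black/10-white urn only (the .5013 figure for the unknown urn rests on extra prior assumptions not spelled out above, so it is not reproduced here):

```python
# Second-draw probabilities for the known urn (10 black, 10 white),
# drawing without replacement and aiming for white on the second pick.
from fractions import Fraction

black, white = 10, 10
remaining = black + white - 1                 # 19 marbles left after one draw

p_white_after_black = Fraction(white, remaining)      # 10/19
p_white_after_white = Fraction(white - 1, remaining)  # 9/19

print(float(p_white_after_black))  # ~0.5263
print(float(p_white_after_white))  # ~0.4737
```

So drawing a black marble first genuinely tips the known urn in your favour for the white-marble round, exactly as the comment says.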
62. mdh says:
@#2 – my sentiments exactly. Just because I don’t know the mix of black and white marbles on the right-side does not preclude there being zero of either color in that one, and just 20 blue
marbles instead.
Plus, if I draw correctly from the left my first time, that increases my chances slightly for the second draw from the left.
At least from the way they phrased it in the set-up, that’s the smart bet.
#16 – I’m ‘afraid’ you’re right.
63. Bugs says:
#3 is incomplete, the article is correct. Arguably badly worded, but correct.
You have no way of knowing what’s in the right jar, so drawing a black ball might be impossible (0/20). However, it’s just as likely that drawing a black ball is guaranteed (20/20). Therefore to calculate the “expected” return, you take the average of all the probabilities: impossible, certain and every value in between. This gives you exactly equal odds, the same as the left jar.
Most people (including me for a few minutes) spotted the chance of success being impossible but not the equal chance of success being certain. This proves the article’s point that we’re
pessimistic about unknowns. It’s also further proof that we are, as a species, rubbish at dealing with probability.
The article is badly worded because it doesn’t tell you what happens to the jars between your choices.
If you’re given a new, randomly filled right jar that’s fine.
However, if they’re reset to their starting state, the colour of the first marble drawn from the right jar gives you information you can use to adjust your odds. For example, if you drew a black
marble on the first try you know there can’t be more than 19 white balls in the jar. Now you know there are somewhere between 19 and 0 white balls, but still somewhere between 20 and 0 black
balls. The calculation now tells us to expect slightly more black balls than white (10.5:9.5) so now we’re better off going to the left basket. The inverse is true too, obviously.
Another way of looking at it:
Forget the left urn (10/20) now, just consider the right one. As an example, let’s say the black:white ratio is 17:3. (so 17 black balls out of 20 i.e. 17/20)
First you need to draw a black ball. Obviously, your chance of success is 17/20.
The ball you pick is put back. Now you need to pick a white ball. Obviously your chance of success is 3/20.
Those two probabilities, whatever the actual numbers are, will always be in proportion such that they average out to be 10/20. If you’re trying to draw alternate coloured balls you can expect to
be lucky half the time. If the jar is randomly refilled between each draw, this still works because the black:white ratio will average out to be even. With no information, the most rational
course is to act as if it’s 10/20. | {"url":"http://boingboing.net/2008/09/18/the-ellsberg-paradox.html","timestamp":"2014-04-18T18:16:33Z","content_type":null,"content_length":"150498","record_id":"<urn:uuid:5d0a35bd-13a9-4947-a46e-f96f8fcd15bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
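Bugs’ averaging argument is easy to check exactly. A minimal Python sketch, assuming (as Bugs does) that every black-count from 0 to 20 is equally likely in the unknown urn — under that uniform prior, the expected chance of drawing black from either urn comes out the same:

```python
from fractions import Fraction

TOTAL = 20  # marbles in each urn

# Unknown urn: assume every black-count k = 0..20 is equally likely.
# Expected probability of drawing black = average of k/20 over all k.
p_black_unknown = sum(Fraction(k, TOTAL) for k in range(TOTAL + 1)) / (TOTAL + 1)

# Known urn: 10 black out of 20.
p_black_known = Fraction(10, TOTAL)

print(p_black_unknown, p_black_known)  # both 1/2
```

Of course, the whole Ellsberg point is that people still prefer the known urn even though the expected odds match.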
Events and Set Operations
Shodor > Interactivate > Discussions > Events and Set Operations
Mentor: We can look at a dice game to understand an event in probability. If a player wins when a six-sided die rolls a 1 or 2, we can say: "The event of the player winning happens in two outcomes
out of six." Sometimes people draw pictures for events, circling or highlighting all the outcomes for each event.
Event A: the player wins.
Event B: the player loses.
Mentor: Let us consider one more game as an example. In this game, four players share a twelve-sided die. Player A (Anton) wins if the die shows 1, 4, 6, or 12. Player B (Boris) wins if the die shows
2, 3, 4, 7 or 12. Player C (Chris) wins if the die shows 1, 4, 8, 9, or 12. Player D (Dorothy) wins if none of the other players win. I will describe several events to demonstrate how convenient the
diagrams can be. First try to find the probabilities without looking at the diagrams: this way you will see which of them are hard to find without the diagrams. Find the probabilities of the events
listed in the table. The corresponding diagrams and the answers are also in the table. Instead of writing: "The probability of Event A is .33" we can write simply P(A)=.33 The answers follow from
counting the outcomes, out of twelve total.
Event A: Player A wins
Event B: Player B wins
Event C: Player C wins
Event D: Player D wins
Event E: Player A or Player B wins (there is a special short notation for this: E = A U B which reads: "Event E is equal to the union of Events A and B)
Event F: Player B wins but Player C does not win (the special notation for this is F = B\C which reads: "Event F is equal to Event B minus Event C")
Event G: Both Player A and Player C win (the notation for that is G = A ∩ C which reads: "Event G is equal to the intersection of Events A and C")
Event H: Player D does not win (the notation here is H = D^C which reads: "Event H is equal to the complement of Event D").
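As a check, the events above can be encoded directly as sets of die outcomes; union, difference, intersection and complement then give the probabilities by counting outcomes out of twelve. A small Python sketch of the same calculation:

```python
from fractions import Fraction

omega = set(range(1, 13))            # the twelve-sided die's outcomes
A = {1, 4, 6, 12}                    # Anton wins
B = {2, 3, 4, 7, 12}                 # Boris wins
C = {1, 4, 8, 9, 12}                 # Chris wins
D = omega - (A | B | C)              # Dorothy wins if nobody else does

def P(event):
    # probability = favorable outcomes / total outcomes
    return Fraction(len(event), len(omega))

E = A | B        # union: Player A or Player B wins
F = B - C        # difference: B wins but C does not
G = A & C        # intersection: both A and C win
H = omega - D    # complement: Player D does not win

print(P(A), P(E), P(F), P(G), P(H))
```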
©1994-2014 Shodor Website Feedback
A Giveaway That you Do NOT Want To Miss
Hi friends! I haven't done many giveaways. I had plans on doing one when I hit 1,000 followers....didn't quite get done. :-) You ever have great plans and then just can't get to all of those great plans?! Please tell me that I am not the only one!
I started blogging in the Summer of 2011 and I have met some really fantastic teachers from all over. I love the fact that we are collaborating with each other from afar. I have made some great
friends and appreciate all that they have done for me.
I also appreciate you! It still blows my mind that I have that many people that are interested in what I have to say, or find the things that I make useful. I love what I do. It is really my second job, but so much for enjoyment, too.
Ok, blah..blah...blah..I know what you are saying...get on with the give away woman!! LOL! I get ya! It's coming! Here goes...
I am giving away a TpT gift certificate. You can use it to buy any resources that you want from your favorite sellers on TpT!
You can earn up to 5 entries:
1. Follow my blog and leave ONE comment saying that you follow
2. Follow me on pinterest and leave ONE comment saying that you follow me there
3. Become a fan on Facebook and leave ONE comment saying that you are a fan
4. Follow me on Teachers Pay Teachers and leave ONE comment saying that you follow me there.
5. If you blog about this give away, then leave ONE comment with your blog link.
I will draw the winner at 12:00 pm CST on Sunday, November 11th.
Thank you so much for following, commenting, downloading and supporting me! I appreciate it! GOOD LUCK!!
368 comments:
1. I follow you on Facebook.
2. I follow you on Pinterest.
3. I follow your TpT Store.
4. I am following your blog. I have enjoyed some of your products. Thanks.
5. I follow your TPT stores. gmartindale@ejesd.net
6. I follow you on TPT, facebook, your blog, and pinterest. Appreciate your work. Gretchen Martindale, gmartindale@ejesd.net
7. i subscribe via email.
8. i "liked" you on FB
Lorina DeMello
9. I have followed your blogged and loved it!
10. I follow your pinning!
11. I follow and love your facebook page!
12. I follow your TPT page, thank you for all the ideas! :)
13. What a great prize! I am your newest blog follower!
14. And also follow you on tpt!
15. I follow your blog.
16. I follow you on pinterest!
17. I liked you on Facebook!
18. I'm a blog follower.
19. I follow you on TPT!
20. I follow you on pinterest.
21. I follow you on Teachers Pay Teachers.
22. I already follow your AWESOME blog! :o)
23. You're already on my favorite sellers list at Tpt! :o)
24. I already "like" you on Facebook! :o)
25. I follow you on Pinterest! :o)
26. I follow you on your blog and now on FB and TPT. I couldn't figure out how to comment on TPT though! Good luck to all.
27. Wow! I follow your blog. dputnam@menands.org
28. I follow you on Facebook.
29. I follow your blog.
30. I follow you on Pinterest.
31. Facebook fan :)
32. Pinterest follower !
33. I follow you on pinterest
34. I follow your blog by email
35. I follow you on Pinterest
36. I follow you on Facebook
37. I follow you on TPT
38. I follow your blog!
The Teacher Diaries
39. I follow you on Pinterest!
The Teacher Diaries
40. I am a fan on Facebook!
The Teacher Diaries
41. I follow you on Teachers Pay Teachers!
The Teacher Diaries
42. I follow your fabulous blog! :)
43. I follow your blog!!
44. I follow you on pinterest!!!
45. I follow you on TpT!!!
46. I follow you on pinterest. Great site and idea!
47. I follow your TPT store.
48. I am following you on Pinterest.
49. I am an email subscriber.
50. I follow your blog! Love love!
51. I follow you on Pinterest!
52. I follow you on TPT!
53. I follow you on Facebook!
54. I follow your blog! I love it!
Kari :)
55. I follow you on Pinterest!
Kari :)
56. I follow you on Facebook!
Kari :)
57. I follow you on TpT!
Kari :)
58. I follow your blog!
59. I follow you on facebook!
60. I follow you on pinterest.
61. I follow you on TPT.
62. I follow your blog!
63. Simons Says follows you on Facebook!
64. MrsSimonsSays follows boards on Pinterest!
65. MrsSimonsSays follows you on TpT!
66. Thanks for the opportunity! Happy teaching to YOU!
MrsSimonsSays (at) gmail (dot) com
67. I follow your blog and I even have you on my bloglist. If you have time, I'd love it if you would check my blog out too!
68. I follow your blog.
69. I follow you on Pinterest.
70. I follow you on Facebook.
Cindy Ackerman Luoma
71. I follow your TPT store too
72. I follow your blog!!
73. I follow you on Teachers Pay Teachers! Asklar@mail.usf.edu
74. I follow you on Pinterest!
75. I follow your blog by email.
76. I follow your TpT store too.
77. I follow your blog! Love your stuff.
78. I follow you on Facebook.
79. I follow your TPT store.
80. I follow you on Pinterest!!
81. i follow you.
82. I follow your blog.
83. I follow you on tpt and Pinterest
84. I am so grateful for our bloggy friendship!! I follow your blog, of course :)
85. I'm following you on Pinterest too!
86. And FB!
87. I follow your blog :)
88. I now follow you on Pinterest :)
89. I already like & follow your fb page
90. I am now a follower on tpt too! Thanks for all the awesome items! Hope I win so I can purchase a few of your items! :))
91. I follow you on pinterest.
92. I follow your TPT store
cbartram @kitcarsonschool.com
93. I follow your blog.
94. I follow you on facebook.
95. I follow your blog!
(and am having a giveaway right now, too)
Kindergarten Kel
96. I follow you on TpT.
Kindergarten Kel
97. I am a facebook follower.
Kindergarten Kel
98. I follow your blog! :)
Crayons and Curls
99. I follow you on Pinterest!
Crayons and Curls
100. I follow you on FB! :)
Crayons and Curls
101. I follow your TpT store! :)
Crayons and Curls
102. I am a blog follower!
Sweet Times in First
103. I follow you on Pinterest!
Sweet Times in First
104. I follow you on TpT!
Sweet Times in First
105. I like ya on FB!
Sweet Times in First
106. I follow you on facebook :-)
107. I follow you on pinterest :-)
108. I follow you on TPT!
109. I follow your blog on googleblogs!
110. I follow on pinterest
111. I follow on facebook.
112. I follow on teachers pay teachers.
113. I follow you by email...
114. I follow your TPT store
115. I follow you on pinterest.
116. I'm a fan on facebook...
117. I follow your blog :)
118. I follow your Faceboook.
119. I follow you on TPT.
120. I am following you on Pinterest!
121. I posted about the giveaway on my Facebook :)
122. I follow you on FB
123. I follow you on TPT
124. I follow you on Pinterest
125. I follow your blog!
126. I follow your wonderful blog!
127. I follow you on TPT!
128. I follow you on Pinterest!
129. I follow you on Facebook!
130. Hi. I follow you on Pinterest.
BJ Thorn
131. I follow your TPT store.
BJ Thorn
132. I follow your blog.
BJ Thorn
133. I am now a follower on Pintrest
Gina Knight Harville
134. I follow you on Facebook!
135. I now follow you on Teachers Pay Teachers!
136. I'm a follower. You are most definitely not the only one with plans that don't get done. Thanks for this chance to utilize those great plans you have finished!
137. I also follow your store on TpT.
138. I am now a follower of your blog, pintrest and facebook! :)
139. I follow you on TPT
140. I am now a follower on Pinterest too :)
141. I follow your blog! Go Huskers!
142. I follow you on Pinterest!
143. I am a Facebook fan!
144. I follow your TPT store!
145. I follow your blog...mattandba@gmail.com
146. I follow you on pinterest!
147. I like you on FB...mattandba@gmail.com
148. I follow your TpT store! mattandba@gmail.com
149. I follow your blog! Of course! :)
Kindergarten Smiles
150. I follow you on Pinterest too!! :)
Kindergarten Smiles
151. I follow you on Facebook!!
Kindergarten Smiles
152. and I follow you on TPT! Love everything you do :)
Kindergarten Smiles
153. I follow your blog and thanks for this
154. I follow you on pinterest
155. I follow you on facebook
156. I follow your TpT store
157. I follow your blog!
Heather (heathernnance@yahoo.com)
158. I follow you on Pinterest.
Heather (heathernnance@yahoo.com)
159. I like your FB page.
Heather (heathernnance@yahoo.com)
160. I follow your TPT store.
Heather (heathernnance@yahoo.com)
161. I follow your blog
162. I follow your TPT store
163. And now I follow you on Pinterest!
164. I follow your blog.
Susan in NC
165. I follow your facebook page.
Susan in NC
166. I follow your TpT.
167. Phantom Weblink Cloaker is the latest product from Soren Jordansen, Cindy Battye and Bob Merrick, the same team that were responsible for the best advertising ClickBank Buccaneer promotion
system a few several weeks ago
phantom link cloaker review
168. was the winner announced? | {"url":"http://livelovelaughkindergarten.blogspot.com/2012/11/a-giveaway-that-you-do-not-want-to-miss.html?commentPage=2","timestamp":"2014-04-19T10:02:33Z","content_type":null,"content_length":"455030","record_id":"<urn:uuid:803017b6-d577-438a-bb1a-5f3837b6788a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability and Moment Generating Functions
May 22nd 2008, 02:16 AM #1
May 2008
I am terrible when it comes to probability and moment generating functions. I missed 2 lectures this week and I am now completely lost on the topic.I have two questions that I need to do by
tomorrow and I was hoping if shown how to do one of the questions, I could work out the other one myself as it is similar.
Here is the first question:
Let Xn be a discrete random variable that takes the values 1,2,...,n with equal probability 1/n. Find the probability generating function of Xn and then determine its moment generating function.
Determine the moment generating function of Yn = Xn/n. Show that the moment generating function of Yn = Xn/n converges pointwise to the moment generating function of a random variable that is
uniformly distributed on (0,1).
I know this is a lot to do, so even if someone could tell me what it is I have to do here would be great. Like I said, I missed 2 lectures so I've tried to learn this concept on my own, and have had nobody to correct any assumptions I make that are incorrect. Thank you.
This will get you started:
Probability generating function: Read Generating Functions.
So $G_{X_n} (t) = E(t^{X_n}) = \sum_{j = 1}^{n} \frac{t^j}{n} = \frac{1}{n} \sum_{j = 1}^{n} t^j$.
Moment generating function: Read Moment-generating function - Wikipedia, the free encyclopedia.
So $m_{X_{n}} (t) = E \left( e^{tX_n} \right) = \sum_{j = 1}^{n} \frac{e^{jt}}{n} = \frac{1}{n} \sum_{j = 1}^{n} e^{jt}$.
The moment generating function of Y = aX + b is $m_Y (t) = e^{bt} \, m_X(at)$: See Moment Generating Function.
In your question $a = \frac{1}{n}$ and b = 0.
The moment generating function of a random variable distributed uniformly on (a, b) is $\frac{e^{bt} - e^{at}}{t(b - a)}$: See Uniform distribution (continuous) - Wikipedia, the free encyclopedia
Substitute a = 0 and b = 1 to get the answer you're shooting for when finding $\lim_{n \rightarrow \infty} m_{Y_n} (t)$ ......
Last edited by mr fantastic; May 22nd 2008 at 03:05 AM. Reason: Fixed a dodgy hyperlink and some dodgy latex
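As a quick numerical sanity check of that last limit: $m_{Y_n}(t) = \frac{1}{n}\sum_{j=1}^{n} e^{jt/n}$ is a Riemann sum for $\int_0^1 e^{xt}\,dx = \frac{e^t - 1}{t}$, the MGF of a uniform (0,1) variable, so for large $n$ the two should nearly agree. A small Python sketch:

```python
import math

def mgf_Yn(t, n):
    # m_{Y_n}(t) = (1/n) * sum_{j=1}^{n} e^{jt/n}
    return sum(math.exp(j * t / n) for j in range(1, n + 1)) / n

def mgf_uniform(t):
    # MGF of a Uniform(0,1) random variable, valid for t != 0
    return (math.exp(t) - 1) / t

t = 1.0
approx = mgf_Yn(t, 100_000)
exact = mgf_uniform(t)          # e - 1, roughly 1.71828
print(approx, exact)
```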
Thank-you so much. I'll see what I can come up with.
May 22nd 2008, 02:43 AM #2
May 22nd 2008, 03:11 AM #3
May 2008 | {"url":"http://mathhelpforum.com/advanced-statistics/39247-probability-moment-generating-functions.html","timestamp":"2014-04-17T18:32:32Z","content_type":null,"content_length":"40838","record_id":"<urn:uuid:618f0493-6930-4d9b-8831-547b08293e88>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plotting a constant y-axis plane
March 1st 2011, 09:52 PM #1
Oct 2010
I am trying to plot a plane such that the y-coordinate is constant at 3 on a 3-D graph. So the plane is going to stick upwards, parallel to the x-axis, while staying at y=3 throughout. The idea is to actually have this plane cut another surface. I try to get the function in the form of z=f(x,y) and here's what I did:
Since the plane is a constant y=3 plane, I assume (0, 3, 0) is a point on the plane. Then I move 5 in the z-axis and then minus them to get a vector direction.
$\begin{pmatrix} 0\\ 3\\ 0 \end{pmatrix} - \begin{pmatrix} 5\\ 3\\ 0 \end{pmatrix} = \begin{pmatrix} -5\\ 0\\ 0 \end{pmatrix}$
I then get another vector direction on this plane by moving 2 steps up the z-axis and 5 steps along the x-axis; (2, 3, 5).
$\begin{pmatrix} 0\\ 3\\ 0 \end{pmatrix} - \begin{pmatrix} 2\\ 3\\ 5 \end{pmatrix} = \begin{pmatrix} -2\\ 0\\ -5 \end{pmatrix}$
Then I cross these 2 direction vectors to get the normal line.
$\begin{pmatrix} -5\\ 0\\ 0 \end{pmatrix} \times \begin{pmatrix} -2\\ 0\\ -5 \end{pmatrix} = \begin{pmatrix} 0\\ -25\\ 0 \end{pmatrix}$
I then did a dot product of this normal vector with the point (0, 3, 0) to get the equation of the plane: $0x -25y +0z = -75$
But, from $0x -25y +0z = -75$, how do I put it in the form z=f(x, y)? The coefficient of z is zero in this case and I can't make z the subject in terms of x and y.
How should I carry on from here? Thanks.
I think you're making this a lot more work than is necessary. The equation for the y = 3 plane is (drum roll) y = 3! (That's an exclamation point, not a factorial.) If you look carefully, that's
the equation you got from your (erroneous) calculations. I say erroneous because you have, throughout, switched your x and z components; for some reason, the fact that you're technically using a
left-handed coordinate system did not, in the end, mess up your cross product.
You cannot find the equation of this particular plane in the form z = f(x,y), because the plane is massively multi-valued in z. The analogy is that the equation x = 3, a vertical line in the xy
plane, cannot be solved for y.
What you're really asking is how to plot that plane on whatever software you're using. What software are you using?
oh...so y=3 cannot be plotted on a 3D graph...
I am using Mathcad and thought I could create a y=3 plane to visualise a level curve or something of that sort related to intersection. I saw some graphs having a surface and a constant y or x plane on the same graph showing the intersections and thought I could do the same.
No, I didn't say you couldn't plot y = 3 on a 3D graph. Of course you can. You just can't write it as z = f(x,y), like you can with many surfaces.
Unfortunately, I don't know the first thing about Mathcad. I would look into implicit plots or maybe parametric plots. In Mathematica, for example, the command would be
ParametricPlot3D[{x, 3, z}, {x, -5, 5, 0.1}, {z, -5, 5, 0.1}]
So the {x,3,z} is the vector, then you have the arguments {x,xmin,xmax,xgridsize}, and then {y,ymin,ymax,ygridsize}.
Try looking up the help in Mathcad and see if there isn't a command like either one of these.
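In the same parametric spirit, here is a rough sketch (Python/NumPy rather than Mathcad, since I can't test Mathcad syntax) of how the y = 3 plane can be built as a grid: sweep x and z as parameters, and hold the y-coordinate fixed. The resulting X, Y, Z arrays are exactly what a 3-D surface-plot command (e.g. matplotlib's `plot_surface`) expects:

```python
import numpy as np

# Parameter grid over x and z; y is held constant at 3.
x = np.linspace(-5, 5, 21)
z = np.linspace(-5, 5, 21)
X, Z = np.meshgrid(x, z)
Y = np.full_like(X, 3.0)   # every point of the plane has y = 3

# To actually draw it (illustrative, not run here):
#   import matplotlib.pyplot as plt
#   ax = plt.figure().add_subplot(projection="3d")
#   ax.plot_surface(X, Y, Z)
#   plt.show()
print(X.shape, Y.shape, Z.shape)
```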
March 2nd 2011, 02:23 AM #2
March 2nd 2011, 03:04 AM #3
Oct 2010
March 2nd 2011, 04:54 AM #4 | {"url":"http://mathhelpforum.com/calculus/173143-plotting-constant-y-axis-plane.html","timestamp":"2014-04-21T03:19:35Z","content_type":null,"content_length":"43689","record_id":"<urn:uuid:2d841683-aaf6-4aa0-9a34-f1f2ef3f2dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angle between vector and its transpose
Assuming that question is meaningless, what is the intuition behind taking the transpose?
Are you familiar with the dot product? Well, this can be written like [itex]v^\top w[/itex]. If you have some vector [itex]v[/itex] then the matrix that projects onto that vector is given by [itex]vv
^\top[/itex] (if you don't understand what projection means, that's OK, but this example probably doesn't make sense.)
Additionally, there are important classes of matrices that are defined using the transpose (or the complex conjugate transpose, which is where you do the transpose and take the complex conjugates of the entries of your matrix; if your matrix has real entries then obviously the complex conjugate transpose is just the transpose.)
For example, if [itex]Q^\top=Q^{-1}[/itex] the matrix is called orthogonal. These matrices preserve angles and lengths of vectors. These are good for numerical applications for that reason, but also
it is MUCH easier to compute the transpose than it is to compute the inverse (in a sense, you don't need to "compute" the transpose, your code just needs to "iterate backward" - if you don't
understand that part, its OK.)
Another special class is Symmetric Matrices where [itex]M[/itex] is symmetric if [itex]M=M^\top[/itex]. These are really nice for several reasons. First, they are diagonalisable by an orthogonal
matrix. Since they are diagonalisable, there is an eigenbasis and so you can do a "spectral resolution." Also, the eigenvalues are all real, even if the entries are complex.
This is just *very light* scratching the surface, but there are MANY important topics that involve transposes of matrices and vectors. | {"url":"http://www.physicsforums.com/showthread.php?p=4186059","timestamp":"2014-04-18T13:54:30Z","content_type":null,"content_length":"72260","record_id":"<urn:uuid:48839652-411b-4762-942d-a8e2c5a945ab>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
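A small NumPy sketch of two of the facts above — [itex]vv^\top[/itex] projects onto [itex]v[/itex], and an orthogonal [itex]Q[/itex] preserves lengths. The particular vectors and the rotation matrix are just illustrative choices:

```python
import numpy as np

v = np.array([1.0, 0.0, 0.0])            # a unit vector
w = np.array([3.0, 4.0, 5.0])

P = np.outer(v, v)                       # v v^T, the projector onto v
proj = P @ w                             # keeps only w's component along v

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: Q^T = Q^{-1}
u = np.array([3.0, 4.0])                 # |u| = 5; rotation must preserve this
print(proj, np.linalg.norm(Q @ u))
```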
[Numpy-discussion] structured array comparisons?
Matthew Brett matthew.brett@gmail....
Sat Mar 7 13:09:09 CST 2009
I'm having some difficulty understanding how these work and would be
grateful for any help.
In the simple case, I get what I expect:
In [42]: a = np.zeros((), dtype=[('f1', 'f8'),('f2', 'f8')])
In [43]: a == a
Out[43]: True
If one of the fields is itself an array, and the other is a scalar,
the shape of the truth value appears to be based on the comparison of
that array, ignoring the scalar:
In [44]: a = np.zeros((), dtype=[('f1', 'f8', 8),('f2', 'f8')])
In [45]: a == a
Out[45]: array([ True, True, True, True, True, True, True,
True], dtype=bool)
If the scalar is different, then the shape is from the array, but the
truth value is from the scalar:
In [46]: b = a.copy()
In [47]: b['f2'] = 3
In [48]: a == b
Out[48]: array([False, False, False, False, False, False, False,
False], dtype=bool)
If there are two arrays, it blows up, even comparing to itself:
In [49]: a = np.zeros((), dtype=[('f1', 'f8', 8),('f2', 'f8', 2)])
In [50]: a == a
ValueError Traceback (most recent call last)
/home/mb312/<ipython console> in <module>()
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Is this all expected by someone?
Thanks a lot,
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-March/040926.html","timestamp":"2014-04-18T08:19:47Z","content_type":null,"content_length":"3930","record_id":"<urn:uuid:3e349a4a-67ff-42d2-b55d-04d786cc733f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
math blogs
This page lists links to blogs and online mathematical communities like forums and wikis. For links to main repositories of free books, lectures, videos, reviews and math papers see math archives and for a list of societies and institutions see math institutions. A higher level branch page is math resources. Math is here just a mnemonic; this page pertains also to the related areas of interest for $n$Lab like LaTeX, physics, philosophy, theoretical computer science etc.
For a quick introduction to math blogging, see
See also the MO question "most helpful math resources on the web" and www.mathblogging.org, a directory of many math blogs with alerts on recent updates.
Please help with improving this page:
• add links that are missing;
• add information to links on what kind of activity tends to go on at the sites being linked to.
Note: To maintain the quality and relevance of this list, any new links not containing a brief description may be removed.
Math blogs and wikis
Wikis and online encyclopaedias
Current Blogs
Older Blogs (last entry is over a year old)
Probability and Statistics
Theoretical Computer Science
Please see this page.
Technical help with math blogs
Revised on December 17, 2013 19:15:43 by | {"url":"http://nlab.mathforge.org/nlab/show/math+blogs","timestamp":"2014-04-16T21:58:57Z","content_type":null,"content_length":"48304","record_id":"<urn:uuid:c8cee570-6afd-43f1-bc9e-c4d3b02d3780>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is Fisher's Exact Test?
Animandel - Fisher's exact test is fun to examine and carry out, but with such a small sample as mentioned in the article, the accuracy of your results is anyone's guess. You should try using the test in areas where the sample size you calculate is a good percentage of the total group.
For example, instead of trying to conclude what percentage of men and women wear blue pants, narrow the field to men and women at your workplace. This way, you have a better chance of getting usable results.
The Scalar Kalman Filter
This document gives a brief introduction to the derivation of a Kalman filter when the input is a scalar quantity. It is split into several sections:
Discrete time linear systems are often represented in a state variable format given by the equation:

x[j+1] = a x[j] + b u[j]                (1)

where the state, x[j], is a scalar, a and b are constants and the input u[j] is a scalar; j represents the time variable. Note that many texts don't include the input term (it may be set to zero), and most texts use the variable k to represent time. I have chosen to use j to represent the time variable because we use the variable k for the Kalman filter gain later in the text.
Equation 1 can be represented pictorially as shown below, where the block with T in it represents a time delay.
Figure 1
Now imagine some noise is added to the process such that:

    x[j+1] = a x[j] + b u[j] + w[j]   (Equation 2)
The noise, w, is a white noise source with zero mean and covariance Q and is uncorrelated with the input. The process can now be represented as shown:
Figure 2
Given a situation like the one shown above, a typical question might be: Can we filter the signal x so that the effects of the noise w are minimized? The answer, it turns out, is yes. However,
with Kalman filters we can go one step further.
Let us assume that the signal x is not directly measured, but instead we measure z:

    z[j] = h x[j] + v[j]   (Equation 3)
The measured value z depends on the current value of x, as determined by the gain h. Additionally, the measurement has its own noise, v, associated with it. The noise, v, is a white noise source
with zero mean and covariance R that is uncorrelated with the input or with the noise w. The two noise sources are independent of each other and independent of the input.
Figure 3
The task of the Kalman filter can now be stated as: Given a system such as the one shown above, how can we filter z so as to estimate the variable x while minimizing the effects of w and v?
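Before building the filter, it helps to see the two noise sources at work. The sketch below (parameter values and names are my own illustrative choices, not from the text) simulates the state equation x[j+1] = a x[j] + b u[j] + w[j] and the measurement z[j] = h x[j] + v[j] using only the standard library:

```python
import random

random.seed(0)
a, b, h = 0.9, 1.0, 1.0     # system gains (illustrative values)
Q, R = 0.01, 1.0            # process- and measurement-noise covariances

x = 0.0
states, measurements = [], []
for j in range(500):
    u = 0.0                                           # no drive input here
    x = a * x + b * u + random.gauss(0.0, Q ** 0.5)   # state update, adds w[j]
    z = h * x + random.gauss(0.0, R ** 0.5)           # noisy observation, adds v[j]
    states.append(x)
    measurements.append(z)

# With R much larger than Q, the raw measurement z is a poor stand-in
# for the state x -- exactly the situation a Kalman filter addresses.
mse_raw = sum((z - s) ** 2 for z, s in zip(measurements, states)) / len(states)
```

Here the measurement error averages near R = 1, while the state itself wanders much more gently, so there is real room for a filter to improve on z.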
It seems reasonable to achieve an estimate of the state (and the output) by simply reproducing the system architecture. This simple (and ultimately useless) way to get an estimate of x[j] (which we will call x̂[j]) is diagrammed below.
Figure 4
This approach has two glaring weaknesses. The first is that there is no correction. If we don't know the quantities a, b or h exactly (or the initial value x[0]), the estimate will not track the
exact value of x. Secondly, we don't compensate for the addition of the noise sources (w and v). An improved setup which takes care of both of these problems is shown below.
Figure 5
This figure is much like the previous one. The first difference noted is that the original estimate of x[j] is now called x̂[j]^-; we will refer to this as the a priori estimate:

    x̂[j]^- = a x̂[j-1] + b u[j-1]   (Equation 4)
We use this a priori estimate to predict an estimate for the output, ẑ[j] = h x̂[j]^-. The difference between this estimated output and the actual output, z[j] - ẑ[j], is called the residual, or innovation.
If the residual is small, it generally means we have a good estimate; if it is large, the estimate is not so good. We can use this information to refine our estimate of x[j]; we call this new
estimate the a posteriori estimate, x̂[j]. If the residual is small, so is the correction to the estimate. As the residual grows, so does the correction. The pertinent equation is (from the
block diagram):

    x̂[j] = x̂[j]^- + k (z[j] - h x̂[j]^-)   (Equation 6)
The only task now is to find the quantity k that is used to refine our estimate, and it is this process that is at the heart of Kalman filtering.
Note: We are trying to find an optimal estimator, and thus far we are only optimizing the value for the gain, k. We have assumed that a copy of the original system (i.e., the gains a, b, and
h arranged as shown) should be used to form the estimator. This begs the question: "Is the estimator as developed above optimal?" In other words, should we simply copy the original system in
order to estimate the state, or is there perhaps a better way? The answer, it turns out, is that the estimator, as shown above, is the optimal linear estimator that can be developed. The
details are here.
To begin, let us define the errors of our estimate. There will be two errors, an a priori error, e[j]^-, and an a posteriori error, e[j]. Each one is defined as the difference between the actual
value of x[j] and the estimate (either a priori or a posteriori):

    e[j]^- = x[j] - x̂[j]^-,   e[j] = x[j] - x̂[j]   (Equation 7)

Associated with each of these errors is a mean squared error, or variance:

    p[j]^- = E{(e[j]^-)^2},   p[j] = E{(e[j])^2}   (Equation 8)

where the operator E{ } represents the expected, or average, value. These definitions will be used in the calculation of the quantity k.
A Kalman filter minimizes the a posteriori variance, p[j], by suitably choosing the value of k. We start by substituting equation 7 into equation 8, and then substituting in equation 6.
To find the value of k that minimizes the variance we differentiate this expression with respect to k and set the derivative to zero. Be patient here, the expression gets much messier before it
becomes simple.
We take this last expression and use it to solve for k.
This expression is still quite complicated. To simplify it we will consider the numerator and the denominator separately.
We start with the numerator, and substitute in equation 3 for z[j].
The measurement noise, v, is uncorrelated with either the input or the a priori estimate of x, so:

    E{v[j] e[j]^-} = E{v[j] (x[j] - x̂[j]^-)} = 0   (Equation 12)
This simplifies the expression for the numerator.
Now, in the same way, consider the denominator.
Again, we can use the orthogonality condition from equation 12 to set the last term to zero, so:
where we used the simplification from equation 13 for the first term in the expression, and the definition of the measurement noise for the second term.
Using the expressions for the numerator and denominator, we finally get a simple expression for k:

    k[j] = h p[j]^- / (h^2 p[j]^- + R)   (Equation 16)
However, there is still a problem because this expression needs a value for the a priori covariance, which in turn requires knowledge of the system variable x[j]. Therefore our next task will be
to come up with an estimate for the a priori covariance.
Before we move on, let's look at this equation in detail. First note that the "constant", k, changes at every iteration. For this reason it should really be written with a subscript (i.e., k[j]).
We'll be more careful about this later.
Next, and more significantly, we can examine what happens as each of the three terms in equation 16 are varied.
□ If the a priori error is very small, k is correspondingly very small, so our correction is also very small. In other words we will ignore the current measurement and simply use past estimates
to form the new estimate. This is as expected -- if our first estimate (the a priori estimate) is good (i.e., with small error) there is very little need to correct it.
□ If the a priori error is very large (so that the measurement noise term, R, in the denominator is unimportant) then k=1/h. This, in effect, tells us to throw away the a priori estimate and
use the current (measured) value of the output to estimate the state. This is made clear by substitution into equation 6. Again, this is as expected -- if the a priori error is large then we
should disregard the a priori estimate, and instead use the current measurement of the output to form our estimate of the state.
□ If the measurement noise, R, is very large, k is again very small, so we disregard the current measurement in forming the new estimate. This is as expected -- if the measurement noise is
large, then we have low confidence in the measurement and our estimate will depend more upon the previous estimates.
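The three limiting cases above can be checked directly against the scalar gain formula k = h p^- / (h^2 p^- + R) (my transcription of the standard scalar Kalman gain, which matches the behavior described in the bullets):

```python
def kalman_gain(p_prior, h, R):
    """Scalar Kalman gain: k = h * p_prior / (h**2 * p_prior + R)."""
    return h * p_prior / (h * h * p_prior + R)

h, R = 2.0, 1.0
# Tiny a priori error: k ~ 0, the new measurement is essentially ignored.
assert kalman_gain(1e-9, h, R) < 1e-8
# Huge a priori error: k -> 1/h, the estimate is rebuilt from z alone.
assert abs(kalman_gain(1e9, h, R) - 1.0 / h) < 1e-6
# Huge measurement noise: k ~ 0 again, the measurement is distrusted.
assert kalman_gain(1.0, h, 1e9) < 1e-8
```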
Finding the a priori covariance is straightforward, starting with its definition:

    p[j]^- = E{(e[j]^-)^2} = E{(a e[j-1] + w[j-1])^2}   (Equation 17)

The middle term of the expansion drops out as before because the process noise is uncorrelated with previous values of either the state or its a priori estimate, leaving:

    p[j]^- = a^2 p[j-1] + Q   (Equation 18)
We are still not finished, however, because we need an expression for p[j], the a posteriori estimate.
As with the a priori covariance, we find the a posteriori covariance by starting with its definition.
The middle term drops out as before because the measurement noise is uncorrelated with the current values of either the state or its a priori estimate.
We can simplify this by using our previous definition for k (Equation 16 rearranged)
Substituting Equation 22 into Equation 21 yields:

    p[j] = (1 - k[j] h) p[j]^-   (Equation 23)
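One way to sanity-check the simplification is numeric: for any gain k the a posteriori variance is (1 - k h)^2 p^- + k^2 R (my scalar restatement of the unsimplified Equation 21), and for the optimal gain this collapses to (1 - k h) p^-, while any other gain gives a strictly larger variance:

```python
def posterior_var(k, p_prior, h, R):
    """A posteriori error variance for an arbitrary gain k (unsimplified form)."""
    return (1.0 - k * h) ** 2 * p_prior + k ** 2 * R

p_prior, h, R = 0.5, 2.0, 1.0
k_opt = h * p_prior / (h * h * p_prior + R)   # the optimal scalar gain

# With the optimal gain the full expression collapses to (1 - k h) * p_prior ...
assert abs(posterior_var(k_opt, p_prior, h, R)
           - (1.0 - k_opt * h) * p_prior) < 1e-12

# ... and nudging k in either direction only increases the variance,
# confirming that the gain derived above is a minimizer.
for dk in (-0.1, 0.1):
    assert posterior_var(k_opt + dk, p_prior, h, R) > posterior_var(k_opt, p_prior, h, R)
```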
Any Kalman filter operation begins with a system description consisting of gains a, b and h. The state is x, the input to the system is u, and the output is z. The time index is given by j.
The process has two steps, a predictor step (which calculates the next estimate of the state based only on past measurements of the output), and a corrector step (which uses the current measurement of
the output to refine the result given by the predictor step).
Predictor Step
We form the a priori state estimate based on the previous estimate of the state and the value of the input:

    x̂[j]^- = a x̂[j-1] + b u[j-1]
We can now calculate the a priori covariance:

    p[j]^- = a^2 p[j-1] + Q
Note that these two equations use previous values of the a posteriori state estimate and covariance. Therefore the first iteration of a Kalman filter requires estimates (which are often
just guesses) of these two variables. The exact estimate is often not important as the values converge towards the correct value over time; a bad initial estimate just takes more
iterations to converge.
Corrector Step
To correct the a priori estimate, we need the Kalman filter gain, k:

    k[j] = h p[j]^- / (h^2 p[j]^- + R)
This gain is used to refine (correct) the a priori estimate to give us the a posteriori estimate:

    x̂[j] = x̂[j]^- + k[j] (z[j] - h x̂[j]^-)
We can now calculate the a posteriori covariance:

    p[j] = (1 - k[j] h) p[j]^-
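Putting the predictor and corrector together gives a complete scalar filter. The sketch below is self-contained; the function name, seed, and parameter values are my own choices, but the update steps follow the predictor/corrector recursion summarized above.

```python
import random

def scalar_kalman(zs, us, a, b, h, Q, R, x0=0.0, p0=1.0):
    """Filter the measurement sequence zs; return the a posteriori estimates.

    us[j] is the input that drove the state transition into step j (pass
    zeros if there is no input).  x0 and p0 are initial guesses; per the
    note above, poor guesses only slow convergence.
    """
    x_hat, p = x0, p0
    estimates = []
    for z, u in zip(zs, us):
        # Predictor step
        x_prior = a * x_hat + b * u              # a priori state estimate
        p_prior = a * a * p + Q                  # a priori covariance
        # Corrector step
        k = h * p_prior / (h * h * p_prior + R)  # Kalman gain
        x_hat = x_prior + k * (z - h * x_prior)  # a posteriori estimate
        p = (1.0 - k * h) * p_prior              # a posteriori covariance
        estimates.append(x_hat)
    return estimates

# Demo on simulated data: the filter tracks x far better than z alone.
random.seed(42)
a, b, h, Q, R = 0.9, 1.0, 1.0, 0.01, 1.0
xs, zs, us, x = [], [], [], 0.0
for _ in range(2000):
    x = a * x + random.gauss(0.0, Q ** 0.5)      # state update (u = 0 here)
    xs.append(x)
    zs.append(h * x + random.gauss(0.0, R ** 0.5))
    us.append(0.0)

estimates = scalar_kalman(zs, us, a, b, h, Q, R)
mse_filtered = sum((e - t) ** 2 for e, t in zip(estimates, xs)) / len(xs)
mse_raw = sum((z / h - t) ** 2 for z, t in zip(zs, xs)) / len(xs)
```

With these noise levels the filtered mean squared error comes out more than an order of magnitude smaller than simply using z/h as the estimate.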
Notes about the Kalman filter gain, k[j].
○ If the a priori error is very small, k is correspondingly very small, so our correction is also very small. In other words we will ignore the current measurement and simply use past
estimates to form the new estimate. This is as expected -- if our first estimate (the a priori estimate) is good (i.e., with small error) there is very little need to correct it.
○ If the a priori error is very large (so that the measurement noise term, R, in the denominator is unimportant) then k=1/h. This, in effect, tells us to throw away the a priori
estimate and use the current (measured) value of the output to estimate the state. This is made clear by substitution into equation 6. Again, this is as expected -- if the a priori
error is large then we should disregard the a priori estimate, and instead use the current measurement of the output to form our estimate of the state.
○ If the measurement noise, R, is very large, k is again very small, so we disregard the current measurement in forming the new estimate. This is as expected -- if the measurement noise
is large, then we have low confidence in the measurement and our estimate will depend more upon the previous estimates.
The notation used in this document was taken from [1]. More common notation is given below.
│ Variable │Notation in this Document│ More Common Notation │
│ time variable │ j │ k │
│ state │ x[j] │ x(k) │
│ system gains │ a, b, h │ a, b, h (note: b is often 0) │
│ input │ u[j] │u(k) (note: often there is no input) │
│ output │ z[j] │ z(k) │
│ gain │ k[j] │ K[k] │
│ a priori estimate │ x̂[j]^- │ x̂(k|k-1) or x̂(k+1|k) │
│ a posteriori estimate │ x̂[j] │ x̂(k|k) or x̂(k+1|k+1) │
│ a priori covariance │ p[j]^- │ p(k|k-1) or p(k+1|k) │
│a posteriori covariance│ p[j] │ p(k|k) or p(k+1|k+1) │
The notation x̂(k|k-1) can be read as "the estimate of x at time k, based on the information from time k-1"; in other words, the estimate based only upon the past values of the output, or the a priori estimate. The notation x̂(k|k) can be read as "the estimate of x at time k, based on the information from time k"; in other words, the estimate based on past and current values of the output, or the a posteriori estimate.
1. Example of estimating a constant (along with Matlab code).
2. Example of estimating a first order process (along with Matlab code).
A matrix based (higher order system) Kalman filter is a simple extension of the scalar case presented here. The results are given here; a full description of the mathematics can be found in
reference [3].
[1] An Introduction to Kalman Filters, G Welch and G Bishop, http://www.cs.unc.edu/~welch/kalman/kalman_filter/kalman.html. See also their other introductory information on Kalman Filters.
[2] Handbook of Digital Signal Processing, D Elliot ed, Academic Press, 1986.
[3] Digital and Kalman filtering : an introduction to discrete-time filtering and optimum linear estimation, SM Bozic, Halsted Press, 1994.
[4] An Engineering Approach to Optimal Control and Estimation Theory, GM Siouris, John Wiley & Sons, 1996.
[5] Statistical and Adaptive Signal Processing, DG Manolakis, VK Ingle, SM Kogon, McGraw Hill, 2000.
[6] Smoothing, Filtering and Prediction - Estimating The Past, GA Einicke, a free on-line text: http://www.intechopen.com/books/
Milford, MA Math Tutor
Find a Milford, MA Math Tutor
...I have been doing it ever since! I graduated from the Rochester Institute of Technology, in February 2013, with a Bachelor of Science in American Sign Language - English Interpretation. I am
currently attending Cambridge College for a Master of Education degree in Mathematics.
8 Subjects: including algebra 1, algebra 2, prealgebra, precalculus
...I read avidly in my spare time and have a BA in philosophy. My vocabulary skills enable me to score in the 99th percentile on standardized tests. I enjoy helping others to master vocabulary,
whether through games or discussing the different roots, suffixes, and prefixes in a word.
29 Subjects: including geometry, trigonometry, statistics, literature
...I have a Bachelors degree in Electrical Engineering and a Masters in Electrical and Computer Engineering, both from Worcester Polytechnic Institute. Most of my students thus far have been
college students who need to quickly come up to speed with course material.I am a professional firmware engi...
7 Subjects: including prealgebra, algebra 1, C, computer science
...UCLA Ph.D. Former Boston University faculty member. Good rapport with individual students or groups.
30 Subjects: including calculus, prealgebra, logic, reading
...I am comfortable with Standard, Honors, and AP curricula. In addition to private tutoring, I have taught summer courses, provided tutoring in Pilot schools, assisted in classrooms, and run
test preparation classes (MCAS and SAT). Students tell me I'm awesome; parents tell me that I am easy to work with. My style is easy-going; my expectations are realistic; my results are always
8 Subjects: including geometry, algebra 1, algebra 2, precalculus
Considering a Perturbed Function F_eps(x)
Date: 11/20/2009 at 09:59:34
From: Rhoan
Subject: f(x) = x^5 - 300x^2 - 126x + 5005 which has a root a=5
Let eps denote a small number. Consider the perturbed function
F_eps(x) = f(x) + eps*x^5
= (1 + eps)x^5 - 300x^2 - 126x + 5005
Let a(eps) denote the perturbed root of F_eps(x) = 0 corresponding to
a(0) = 5. Estimate a(eps) - 5.
I do not have anything to show for this problem. I am not sure how
to start working it as I do not understand perturbed functions. I
have tried to research it, but I am still not clear on its meaning.
Date: 11/27/2009 at 19:55:45
From: Doctor Vogler
Subject: Re: f(x) = x^5 - 300x^2 - 126x + 5005 which has a root a=5
Hi Rhoan,
Thanks for writing to Dr Math.
That's a good question. Basically, they are asking you to define a
function implicitly using F_eps(x).
Specifically, consider
F_eps(x) = f(x) + eps*x^5
= (1 + eps)(x - a(eps))(x - b(eps)) ... (x - e(eps)),
where a(0) = 5.
If you estimate (as in Taylor's Theorem)
a(eps) = a(0) + eps*a'(0) + (smaller stuff on the order of eps^2),
then you only have to compute a'(0).
My first thought is to differentiate (with respect to epsilon, not x)
both sides of the equation, which results in
f(x) + eps*x^5 = (1 + eps)(x - a(eps))(x - b(eps))...(x - e(eps))
Substitute eps = 0, and then try to simplify the right side of the
I find that this doesn't work out so well, since you can't quite find
values for b'(0) and so on. But it does work out more nicely if you
only pull out one factor, instead of separating into five. That
leaves an equation like
f(x) + eps*x^5 = (x - a(eps))((1 + eps)x^4 + b(eps)x^3 + c(eps)x^2
+ d(eps)x + e(eps))
Then do as I suggested: Differentiate both sides of the equation
with respect to eps, not x; substitute eps = 0; and finally try to
simplify the right side of the equation. You might find that it
simplifies very nicely when you substitute x = 5.
If you have any questions about this or need more help, please write
back and show me what you have been able to do, and I will try to
offer further suggestions.
- Doctor Vogler, The Math Forum
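Carrying Doctor Vogler's computation through gives a'(0) = -a^5/f'(a) = -5^5/f'(5) = 3125, since f'(x) = 5x^4 - 600x - 126 and f'(5) = -1: the root is extremely sensitive to the perturbation. The sketch below is a numerical check of that estimate (the Newton iteration, eps value, and names are my own choices, not part of the original exchange):

```python
def f(x):
    return x**5 - 300*x**2 - 126*x + 5005

def fprime(x):
    return 5*x**4 - 600*x - 126

# Implicit differentiation of f(a) + eps*a**5 = 0 at eps = 0, a = 5:
#   f'(5)*a'(0) + 5**5 = 0  =>  a'(0) = -3125 / f'(5) = -3125 / (-1) = 3125
sens = -5**5 / fprime(5)

# Numerical check: find the perturbed root by Newton's method starting at 5.
eps = 1e-9
F  = lambda x: f(x) + eps * x**5
Fp = lambda x: fprime(x) + 5 * eps * x**4

root = 5.0
for _ in range(50):
    root -= F(root) / Fp(root)

predicted = 5 + sens * eps        # first-order estimate a(eps) ~ 5 + 3125*eps
```

Because |f'(5)| = 1 while the individual terms of f near 5 are in the thousands, a relative perturbation of only 1e-9 already moves the root by about 3e-6; for much larger eps this root and a nearby real root collide and go complex, so the first-order estimate is only trustworthy for very small eps.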
Results 1 - 10 of 21
, 2006
Cited by 64 (20 self)
The model-theoretic investigation of modules has led to ideas, techniques and results which are of algebraic interest, irrespective of their model-theoretic significance. It is these aspects that I
will discuss in this article, although I will make some comments on the model theory of modules per se. Our default is that the term “module ” will mean (unital) right module over a ring (associative
with 1) R. The category of such modules is denoted Mod-R, the full subcategory of finitely presented modules will be denoted mod-R, the
, 1998
Cited by 18 (0 self)
this article. Throughout, we restrict to studying finite-dimensional associative algebras (with 1) over an algebraically closed field K, and write D for duality with the field. Except where stated,
all modules are left modules. We write A-mod for the category of finite-dimensional A-modules, and A-Mod for the category of all
, 1998
"... this paper. Consider the exact functor ..."
- Proc. London Math. Soc., 1995
Cited by 12 (2 self)
this paper is to develop further the analysis of existence and properties of generic modules. Our approach depends to a large extent on the embedding of a module category into a bigger functor
category. These general concepts are explained in the first two sections. We continue in Section 3 with a new characterization of the pure-injective modules which occur as the source of a minimal
left almost split morphism. This is of interest in our context because generic modules are pure-injective. Next we consider indecomposable endofinite modules. Recall that a module is endofinite if it
is of finite length when regarded in the natural way as a module over its endomorphism ring. Changing slightly the original definition, we say that a module is generic if it is indecomposable
endofinite but not finitely presented. Section 4 is devoted to several characterizations of generic modules in order to justify the choice of the non-finitely presented modules as the generic
objects. We prove them for dualizing rings, i.e. a class of rings which includes noetherian algebras and artinian PIrings. Existence results for generic modules over dualizing rings follow in Section
5. Several results in this paper depend on the fact that a functor f : Mod(\Gamma) ! Mod() which commutes with direct limits and products, preserves certain finiteness conditions. For example, if a \
Gamma-module M is endofinite then f(M) is endofinite. If in addition End \Gamma (M) is a PI-ring, then End (N) is a PI-ring for every indecomposable direct summand N of f(M ). This material is
collected in Section 6 and 7. In Section 8 we introduce an effective method to construct generic modules over artin algebras from so-called generalized tubes. The special case of a tube in the
Auslander-Reiten quiver is discussed in t...
- J. reine angew. Math , 1999
Cited by 11 (3 self)
This paper grew out of an attempt to understand this phenomenon. The purpose of this paper is to investigate a certain functor T from injective modules I over the cohomology ring H
- COMMENTARII MATHEMATICI HELVETICI , 1997
- Bielefeld University , 1999
Cited by 11 (6 self)
We consider a large class of matrix problems, which includes the problem of classifying arbitrary systems of linear mappings. For every matrix problem from this class, we construct Belitskiĭ’s
algorithm for reducing a matrix to a canonical form, which is the generalization of the Jordan normal form, and study the set Cmn of indecomposable canonical m × n matrices. Considering Cmn as a
subset in the affine space of m-by-n matrices, we prove that either Cmn consists of a finite number of points and straight lines for every m × n, or Cmn contains a 2-dimensional plane for a certain m
× n. AMS classification: 15A21; 16G60. Keywords: Canonical forms; Canonical matrices; Reduction; Classification; Tame and wild matrix problems. All matrices are considered over an algebraically
closed field k; k m×n denotes the set of m-by-n matrices over k. The article consists of three sections. In Section 1 we present Belitskiĭ’s algorithm [2] (see also [3]) in a form, which is
convenient for linear algebra. In particular, the algorithm permits to reduce pairs of n-by-n matrices to a canonical form by transformations of simultaneous similarity: (A, B) ↦ → (S −1 AS, S −1
BS); another solution of this classical problem was given by Friedland [15]. This section uses rudimentary linear algebra (except for the proof of Theorem 1.1) and may be interested for the general
reader. This is the author’s version of a work that was published in Linear Algebra Appl. 317 (2000) 53–102. 1 In Section 2 we determine a broad class of matrix problems, which includes the problems
of classifying representations of quivers, partially ordered sets and finite dimensional algebras. In Section 3 we get the following geometric characterization of the set of canonical matrices in the
spirit of [17]: if a matrix problem does not ‘contain ’ the canonical form problem for pairs of matrices under simultaneous similarity, then its set of indecomposable canonical m × n matrices in the
affine space k m×n consists of a finite number of points and straight lines (contrary to [17], these lines are unpunched). A detailed introduction is given at the beginning of every section. Each
introduction may be read independently. 1 Belitskiĭ’s algorithm
, 1998
Cited by 6 (5 self)
The aim here is to emphasise the topological and geometric structure that the Ziegler spectrum carries and to illustrate how this structure may be used in the analysis of particular examples. There
is not space here for me to give a survey of what is known about the Ziegler spectrum so there are a number of topics that I will just mention in order to give some indication of what lies beyond
what is discussed here. 1. The Ziegler spectrum 2. Various dimensions 3. These dimensions for artin algebras 4. These dimensions in general 5. Duality 6. The complexity of morphisms in mod-R 7. The
Gabriel-Zariski topology 8. The sheaf of locally definable scalars 1 The Ziegler spectrum 1.1 A reminder on purity and pure-injectives Suppose that M is a submodule of N . Consider a finite system \
Sigma n i=1 x i r ij = a j (j = 1; :::m) of R-linear equations over M : that is, the r ij are in R, the 1 a j are in M and the x i are indeterminates. Suppose that there is a solution b 1 ; ...
, 2003
Cited by 4 (0 self)
Abstract. We prove that every finite dimensional algebra over an algebraically closed field is either derived tame or derived wild. We also prove that any deformation of a derived wild algebra is
derived wild.
, 1996
Cited by 3 (3 self)
this paper we show that mod determines the representation type of . Recall that the algebra is either tame, i.e. all finite dimensional indecomposable-modules belong to oneparameter families, or is
wild, i.e. there are two-parameter families of finite dimensional indecomposable-modules [8]. Of course, one feels that this dichotomy should not depend on the deletion of finitely many objects in
the category mod , and this is precisely one of the main results of this paper. More precisely, given another algebra \Gamma and an equivalence mod ! mod \Gamma, then \Gamma is tame if is tame.
Moreover, we show that the equivalence sends the one-parameter families in mod to one-parameter families in mod \Gamma. The fact that mod determines the representation type of also follows, for some
classes of symmetric algebras, from recent work of Assem, de la Pe~na and Erdmann [2, 9]; however their methods are completely different. Equivalences between stable module categories have been
studied by many authors. They naturally occur for instance in representation theory of finite groups. Another source of examples, which includes every algebra of Loewy length 2, is the class of
algebras stably equivalent to a hereditary algebra. Usually the analysis concentrates on homological properties of the category mod which are preserved by an equivalence mod ! mod \Gamma. In this
paper we follow a different approach. We investigate pure-injective modules which are not necessarily finitely presented. Among them the endofinite modules are of particular interest. Recall that a
module is endofinite if it is of finite length when regarded in the natural way as a module over its endomorphism ring. In order to study the non-finitely presented -modules we introduce a new
Math Help
February 17th 2008, 04:05 PM #1
I am stuck on this problem.
-[-|-3|(-2-m)-4] = -2[(-3-m)-2m]
I am an old duffer trying to learn some math.
Any help would be appreciated.
The answer is supposed to be -8/9
You have to work from the inside out: parentheses first, then brackets.
I started off with the absolute value sign and the steps are as follows:
-[-(3)(-2-m)-4]=-2[-3-3m] Take the absolute value of -3 and combine -m with -2m
-[6+3m-4]=6+6m Distribute -3 through and distribute -2 through
-[2+3m]=6+6m Add like terms together
-2-3m=6+6m Distribute the negative through the equation
-8=9m Put like terms on each side of the equation
-8/9=m Divide -8 by 9 to get your result
Hope this helps you!!
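The chain of steps can be double-checked by plugging m = -8/9 back into both sides with exact rational arithmetic (this verification snippet is mine, not part of the original thread):

```python
from fractions import Fraction

m = Fraction(-8, 9)
lhs = -(-abs(-3) * (-2 - m) - 4)   # -[ -|-3|(-2 - m) - 4 ]
rhs = -2 * ((-3 - m) - 2 * m)      # -2[ (-3 - m) - 2m ]
```

Both sides come out to exactly 2/3, confirming m = -8/9.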
algebra problem
Thank you both so much for your help!
Jerry Cunningham
Cottage City, MD SAT Math Tutor
Find a Cottage City, MD SAT Math Tutor
I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a
single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi...
15 Subjects: including SAT math, calculus, physics, GRE
...Therefore, I am very comfortable with the basics of algebra. I took three semesters of calculus at The University of Maryland, and did well in all of them. I went on to use what I'd learned to
obtain my bachelor's and master's degrees in physics.
27 Subjects: including SAT math, calculus, physics, geometry
I have a masters in economics and a strong math background. I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and
algebra problems. I enjoy teaching and working through problems with students since that is the best way ...
14 Subjects: including SAT math, calculus, geometry, statistics
...These days, I do one-on-one and group tutoring in the subjects I know best. In-person tutoring session: $35/hr (depending on travel distance). Sessions at my home are $30/hr. If you want to
boost your grades or get ready for college, give me a call.
31 Subjects: including SAT math, chemistry, calculus, geometry
...Professionally I work as a Systems Engineer for a defense contractor in Washington, DC, and am thus available to meet most evenings and weekends. In addition to my day job, I have five years
of experience coaching tackle football in Northern Virginia, and the players I have coached have ranged a...
30 Subjects: including SAT math, reading, chemistry, physics
Syllabus - Biochemistry I, Chem 471 - Spring 2012
Instructor: Michael Mossing, 452 Coulter, 915-5339, mmossing at olemiss dot edu
Schedule: Tuesday and Thursday, 8 - 9:15, 201 Coulter
Text: Principles of Biochemistry (Lehninger), 5th Edition, Nelson & Cox, W. H. Freeman Publishing
ISBN-10: 071677108X ISBN-13: 978-0716771081
Hard Copies: PDF | RTF
Course Schedule (with links and updates)
Last Modified: Tuesday, 07-Feb-2012 06:34:41 CST
The course is the first in a two semester Biochemistry series. Students should leave the course with a working understanding of:
• The noncovalent forces that govern biomolecular structure and interactions.
• The structure and function of biological macromolecules, especially proteins.
• Mathematical descriptions of biochemical equilibria and kinetics.
• The practical and evolutionary implications of nucleic acid and protein sequence analysis.
• The basics of biochemical signal transduction
Students are expected to read the assigned chapter from Lehninger in advance of each lecture. We will cover approximately one chapter per week. Grades will be based on 10 problem sets, two
midterm examinations and a cumulative final. Problem sets will be posted on the web and graded automatically. We will discuss your answers in class. Final grades will be calculated on the basis of
ten 10-point problem sets, two 100-point midterm exams and a 200-point final.
Scores above 90% are guaranteed A, 80 - 90% B, 70- 80% C, 60 - 70% D. If the class average is less than 80%, the average will determine the B/C boundary, with the A/B cut-off 1 standard deviation
higher, the C/D cut-off 1 standard deviation lower, and the D/F boundary 2 standard deviations below the mean. | {"url":"http://www.olemiss.edu/depts/chemistry/courses/chem471/","timestamp":"2014-04-20T01:04:48Z","content_type":null,"content_length":"2984","record_id":"<urn:uuid:634589f3-3b88-49c8-90ad-9527e960b133>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: wrong result when computing a definite integral
Replies: 4 Last Post: Jan 14, 2013 12:01 AM
Re: wrong result when computing a definite integral
Posted: Jan 11, 2013 10:22 PM
Integrate takes the integration variables in prefix order, so perhaps you
meant the following:
In: Integrate[Exp[I*Sqrt[3]*y], {y, -Pi, Pi}, {x, -2*Pi, 2*Pi}]
Out: (8*Pi*Sin[Sqrt[3]*Pi])/Sqrt[3]
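As a numerical sanity check (my sketch, not part of the original thread), the corrected value can be verified in Python. The integrand does not depend on x, so the double integral is just 4*Pi times the one-dimensional integral of Exp[I Sqrt[3] y] over {-Pi, Pi}:

```python
import math

# Midpoint-rule approximation; the x-integration contributes a factor of 4*pi
# because the integrand is independent of x.
n = 200_000
h = 2 * math.pi / n
real = imag = 0.0
for k in range(n):
    y = -math.pi + (k + 0.5) * h
    real += math.cos(math.sqrt(3) * y) * h
    imag += math.sin(math.sqrt(3) * y) * h
numeric = 4 * math.pi * complex(real, imag)

closed_form = 8 * math.pi * math.sin(math.sqrt(3) * math.pi) / math.sqrt(3)
print(numeric, closed_form)  # both ≈ -10.82 (imaginary part ≈ 0)
```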
On Thu, 10 Jan 2013, Dexter Filmore wrote:
> hi group,
> i run into this problem today when giving a bunch of easy integrals to mathematica.
> here's a wolfram alpha link to the problem:
> http://www.wolframalpha.com/input/?i=Integrate%5BExp%5BI+Sqrt%5B3%5Dy%5D%2C%7Bx%2C-2Pi%2C2Pi%7D%2C%7By%2C-Pi%2CPi%7D%5D#
> the integrand does not depend on the 'x' variable, the inner integration should only result in a factor of 4Pi, and the correct result is a real number, yet the below integral gives a complex number which is far off from the correct value:
> Integrate[Exp[I Sqrt[3] y], {x, -2 Pi, 2 Pi}, {y, -Pi, Pi}] -> -((4 I (-1 + E^(2 I Sqrt[3] Pi)) Pi)/Sqrt[3])
> from some trial and error it seems the result is also incorrect for non-integer factors in the exponential.
truth function
A truth function is a function that returns one of two values, one of which is interpreted as “true,” and the other which is interpreted as “false”. Typically either “T” and “F” are used, or “1” and
"0", respectively. Using the latter, we can write $f\colon\{0,1\}^n\to\{0,1\}$, which defines a truth function $f$. That is, $f$ is a mapping from any number ($n$) of true/false (0 or 1) values to a single value, which is 0 or 1.
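For example (an illustrative sketch, not part of the original entry), the majority function on $n$ inputs is a truth function from $\{0,1\}^n$ to $\{0,1\}$:

```python
def majority(*args):
    """A truth function f: {0,1}^n -> {0,1}: returns 1 exactly
    when more than half of the inputs are 1."""
    return 1 if 2 * sum(args) > len(args) else 0

print(majority(1, 1, 0), majority(1, 0, 0))  # 1 0
```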
When you write a class, you mostly strive to ensure that the features of that class make sense for that class. But there are occasions when it makes sense to add a feature to allow a class to conform to a richer interface than it naturally would.
The most common and obvious example of this is one that comes up when you use the composite pattern. Let's consider a simple example of containers. You have boxes which can contain other boxes and
elephants (that's an advantage of virtual elephants.) You want to know how many elephants are in a box, considering that you need to count the elephants inside boxes inside boxes inside the box. The
solution, of course, is a simple recursion.
# Ruby
class Node
end

class Box < Node
  def initialize
    @children = []
  end

  def <<(aNode)
    @children << aNode
  end

  def num_elephants
    result = 0
    @children.each do |c|
      if c.kind_of? Elephant
        result += 1
      else
        result += c.num_elephants
      end
    end
    return result
  end
end

class Elephant < Node
end
Now the kind_of? test in num_elephants is a smell, since we should be wary of any conditional that tests the type of an object. On the other hand is there an alternative? After all we are making the
test because elephants can't contain boxes or elephants, so it doesn't make sense to ask them how many elephants are inside them. It doesn't fit our model of the world to ask elephants how many
elephants they contain because they can't contain any. We might say it doesn't model the real world, but my example feels a touch too whimsical for that argument.
However when people use the composite pattern they often do provide a method to avoid the conditional - in other words they do this.
class Node
  # if this is a strongly typed language I define an abstract
  # num_elephants here
end

class Box < Node
  def initialize
    @children = []
  end

  def <<(aNode)
    @children << aNode
  end

  def num_elephants
    result = 0
    @children.each do |c|
      result += c.num_elephants
    end
    return result
  end
end

class Elephant < Node
  def num_elephants
    return 1
  end
end
Many people get very disturbed by this kind of thing, but it does a great deal to simplify the logic of code that sweeps through the composite structure. I think of it as getting the leaf class
(elephant) to provide a simple implementation as a courtesy to its role as a node in the hierarchy.
The analogy I like to draw is the definition of raising a number to the power of 0 in mathematics. The definition is that any number raised to the power of 0 is 1. But intuitively I don't think it
makes sense to say that any number multiplied by itself 0 times is 1 - why not zero? But the definition makes all the mathematics work out nicely - so we suspend our disbelief and follow the definition.
Whenever we build a model we are designing a model to suit how we want to perceive the world. Courtesy Implementations are worthwhile if they simplify our model. | {"url":"http://www.martinfowler.com/bliki/CourtesyImplementation.html","timestamp":"2014-04-17T21:23:45Z","content_type":null,"content_length":"10502","record_id":"<urn:uuid:876b0ba8-4f08-4e57-aa06-ac92b5299990>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Constrained Random Walk
This Demonstration simulates a random walk of a particle constrained within a square. An impermeable wall divides the square into two equal parts and has a central opening that lets the particle
move between the two halves.
The particle starts at the red locator and ends at the green circle; you can drag the red locator.
Each successive step of the particle is a vector with standard normal components. This results in the angle following a uniform distribution and the norm following a Rayleigh distribution. | {"url":"http://demonstrations.wolfram.com/ConstrainedRandomWalk/","timestamp":"2014-04-19T17:07:45Z","content_type":null,"content_length":"42074","record_id":"<urn:uuid:5fe941ae-7f6d-46d6-9bd4-e0d80080f54b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
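The claimed step distribution is easy to check with a small simulation (my sketch, not the Demonstration's code): for independent standard normal components, the step length is Rayleigh-distributed with mean sqrt(pi/2) ≈ 1.2533.

```python
import math
import random

random.seed(0)
n = 100_000
# each step has independent standard normal x and y components
norms = [math.hypot(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

mean_norm = sum(norms) / n
print(mean_norm, math.sqrt(math.pi / 2))  # both close to 1.2533
```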
st: RE: Modified Question: Manipulating an Unbalanced Panel
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: Modified Question: Manipulating an Unbalanced Panel
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Modified Question: Manipulating an Unbalanced Panel
Date Mon, 24 Nov 2008 12:37:50 -0000
==== solution 1
A direct answer to your question is that you could exploit the results of your -count- command by using its result saved in r(N). For example,
gen byte nOK = 0
egen group = group(PanelMember) if consumption > 0 & consumption < .
su group, meanonly
qui forval i = 1/`r(N)' {
count if group == `i'
replace nOK = r(N) if group == `i'
followed by
... if nOK >= 2
But there are other and better ways to do that.
==== solution 2
One is
egen nOK = total(consumption > 0 & consumption < .) , by(PanelNumber)
followed by
... if nOK >= 2 & consumption > 0 & consumption < .
As -consumption > 0- is true, and evaluates to 1, whenever -consumption-
is positive, the -egen- statement counts suitable observations within
each panel.
The extra condition -consumption < .- has been added to exclude any
missings, which also count as positive. That does no harm and may catch
some problems. (The -count- statement above does the same.)
==== solution 3
Another is closer to your original code:
gen byte OK = consumption > 0 & consumption < .
bysort OK PanelMember : gen nOK = cond(OK == 0, 0, _N)
... if nOK >= 2
At this moment, the last is in my view the best way to do what you want.
P.S. In your example, the variable counts observations in each panel,
not the number of panels.
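For readers outside Stata, the counting logic of solutions 2 and 3 can be sketched in plain Python (an illustration, not part of the exchange): count the positive, non-missing observations per panel, then keep only observations in panels with at least two of them.

```python
from collections import Counter

# (panel_id, consumption); None stands in for Stata's missing value
data = [(1, 5.0), (1, 0.0), (1, 2.0), (2, 3.0), (2, None), (3, 0.0)]

# per-panel count of positive, non-missing consumption (the nOK variable)
n_ok = Counter(p for p, c in data if c is not None and c > 0)
kept = [(p, c) for p, c in data
        if c is not None and c > 0 and n_ok[p] >= 2]
print(kept)  # [(1, 5.0), (1, 2.0)]
```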
ippab ippab
I just realized that I can generate a variable with the number of
panels with the following command: by PanelNumber: gen CountSessions
= _N .
But, in my data, there is another variable which indicates if that
observation has any positive consumption. I actually need to count
the number of positive consumptions for each panel. There are
sessions without any consumption. For example, I used "by
PanelNumber: gen CountSessions = _N if consumption>0". This is
wrong because this gives CountSessions =2 even if a panel has one
session (observation) with positive consumption and one session
(observation) without any consumption.
I found that the following command gives me the right output on the
screen: "by PanelNumber: count if consumption>0". However, I
don't know how to generate a variable to capture the output for the
command: "by PanelNumber: count if consumption>0". I would really
appreciate some help with this.
who can work out this?
The important variable is the length of time before you posted your next post.
Exactly, correct and very well-said.
er, right, but that's the opposite of what you seemed to be saying in the OP. You seemed to be saying that it was unlikely to spot a post number like 1111; now you're saying that it's normal.
I am sorry, I didn't mean it to appear like this; I was just waiting for someone to make the calculation and see how many variables he would take into consideration to show that the probability of the "1111" is not as tiny as it seems.
How old are you, M? Just curious.
I would like to keep it private, so I will tell you in a PM. | {"url":"http://whywontgodhealamputees.com/forums/index.php/topic,9566.58.html","timestamp":"2014-04-23T13:38:54Z","content_type":null,"content_length":"151575","record_id":"<urn:uuid:11cec92b-b067-4c2d-ab0c-54b177f3a118>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bounding Sphere? [Archive] - OpenGL Discussion and Help Forums
11-01-2002, 01:53 PM
Hello, I have implemented a frustum culling algorithm in my engine, using CheckSphere. My problem is that I can't compute the sphere from an ms3d model (or any other model, but ms3d is my engine's format for now). How do I compute the smallest sphere possible? I have searched on Google and flipcode and I didn't get good results.
Can anyone point me to a tutorial or article?
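One standard answer (my sketch of Ritter's approximate algorithm, not a reply from the original thread): pick any point, find the farthest point q from it, find the farthest point r from q, start with the sphere whose diameter is qr, then grow the sphere to swallow any point still outside. The result is not the true minimum sphere, but it is usually within a few percent of it and is cheap to compute.

```python
import math

def ritter_bounding_sphere(points):
    """Approximate smallest enclosing sphere of a list of 3D points."""
    p = points[0]
    q = max(points, key=lambda s: math.dist(p, s))   # farthest from p
    r = max(points, key=lambda s: math.dist(q, s))   # farthest from q
    center = [(qi + ri) / 2 for qi, ri in zip(q, r)]
    radius = math.dist(q, r) / 2
    for s in points:                                 # grow for outliers
        d = math.dist(center, s)
        if d > radius:
            radius = (radius + d) / 2
            t = (d - radius) / d                     # move center toward s
            center = [c + t * (si - c) for c, si in zip(center, s)]
    return center, radius
```

For an ms3d model you would feed in the mesh's vertex positions; the frustum CheckSphere test then uses the returned center and radius.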
Chapter Listing | Return to 1997 Topical Index
TABLE V
Five cluster result
Of the seventeen firms in SIC 36, only one firm had a common strategy for a four year period and only two additional firms had a common strategy for as long as three years. Similar results were found
for SIC 38 and SIC 73. Out of the sixty-two firms only an additional thirteen firms had consistent two year strategies. Thus, the preponderance of firms appeared to change strategy on an annual
basis. Therefore it appears that most firms do not have stable strategies.
A somewhat different result was found in a recent study in the banking industry, which concluded that six years after the IPOs, the initial strategies were still in evidence (Bamford et. al, 1996).
These findings were the result of a longitudinal study, in an industry which likely has had less revolutionary changes during that time period than healthcare (SIC 38) and microelectronics (SIC 36
and 73) firms have undergone due to healthcare reform and the introduction of Windows 95, respectively.
Hypothesis Three: Faster growth firms have distinctly different strategies than firms with slower growth
The third hypothesis was tested using two statistical techniques: multiple discriminant analysis and multiple regression analysis. For the discriminant analysis the data were again separated by the
three SIC codes and each group was tested separately. Within each SIC category, the rate or percentage of gain (or loss) in sales for each two year time period was calculated for each firm.
The reported corporate strategies were associated not with the current year's sales gain, but with the percent gain (or loss) in the following year. This time lag was used to measure the long term
affect of strategy selection. The choice of the percentage sales gain (or loss) variable is not inconsistent with the literature which suggests that the use of gain is appropriate as new ventures
focus more on sales gain than profitability (Timmons, 1994).
The gain in year (t+1) could not be computed for any firm's last observation. The gains were normalized and any gain in excess of 2.5 standard deviations above the average gain was removed from all
subsequent analysis. Five observations fell outside the 2.5 standard deviation limit.
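The 2.5-standard-deviation screen can be sketched as follows (illustrative Python with made-up gains, not the study's code):

```python
import math

# hypothetical annual sales gains; 5.0 (a 500% gain) is the outlier
gains = [0.0, 0.1, 0.2, 0.3, 0.4] * 4 + [5.0]
mean = sum(gains) / len(gains)
sd = math.sqrt(sum((g - mean) ** 2 for g in gains) / (len(gains) - 1))
# drop any gain more than 2.5 standard deviations above the average
kept = [g for g in gains if g <= mean + 2.5 * sd]
print(len(gains) - len(kept))  # 1 observation removed
```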
The sets of firms were then separated into three sets of approximately equal numbers, representing high, medium and low gains for all firms for all years - not by the average annual gain for each
firm. Since the analyses were used to discriminate between the high and low performers, the middle groups were omitted from the discriminant analysis. The resulting data set had a total of 47
observations for SIC 36, 55 observations for SIC 38 and 69 observations for SIC 73. Table VI contains the break points in gains for high and low performers.
TABLE VI
Break points for high and low performers
It was possible for a firm to have been classified as a high performer in one year and to have been classified as a low performer in another year. For the three SIC codes together, the discriminant
analyses of high and low gain data, resulted in sixty-nine percent of the data being correctly classified as a high gain or a low gain firm. The results with the highest correct classifications
occurred when the data was split into the three SIC groupings and the high and low performers were analyzed. The results of the discriminant analyses are shown in Table VII, which show that firms in
higher gain years have identifiably different strategies than firms in lower gain years. This result confirms the hypothesis that high growth firms have different strategies than low growth firms.
TABLE VII
Discriminant analysis for high and low gain firms
However, this is an interesting, but somewhat weak test, because the dependent variable-high gain vs. low gain is dichotomous and the analysis was performed only on the top third and bottom third of
the observations. Multiple regression is a stronger test of the relationships, because the dependent variable is continuous. Further, the results of a multiple regression analysis by SIC code would
allow for the definition of the relationship between gain and the discrete strategy variables through the beta coefficient, thus linking a change in performance to discrete strategies.
The dichotomous strategy variables for the 62 firms for the three to seven years of firm data (62 firms with 317 observations less the five outliers), were used as the independent variables, and the
percentage gain from the base year to the next year was used as the dependent variable. A stepwise multiple regression analysis using backward elimination was used with a one-tailed significance test
of the results which are shown in Table VIII.
TABLE VIII
Regression results
To determine if the year of measurement had any effect on the results, an additional regression was run. The years the data were generated were coded as dummy variables and regressed against the
percentage gain (or loss). The year of data collection proved to be inconsequential except for SIC 7372, prepackaged software firms. In this case, 1992 proved to be significant in a bivariate
regression, but in a stepwise procedure the variable did not enter the equation.
As expected, the regression analyses (Table VIII) resulted in a number of both positive and negative betas (Stearns et. al, 1995, Carter et. al., 1994). What was not expected were the negative betas
on the formal planning variable. Formal planning appears to be a reaction to competitive challenges. A regression analysis of gain in period (t=0) on formal planning resulted in a negative
relationship significant at the 0.004 level for SIC 73, thus supporting the contention that strategic planning may have been a reactive response to poor performance in a prior year.
Other strategy variables also had negative betas. Those strategies produced less than average performance. It could be argued that those strategies required more time to produce a positive result. It
could also be argued that those strategies could have been poorly implemented or were the wrong strategies for the firms in their then current environment.
The regression of the strategy variables on gain revealed that significant relationships existed between certain strategy variables and performance. For SIC 36: Semiconductors, Related Devices,
Components, cost leadership, innovation and product development strategies resulted in above average gain, while high price, joint venturing/licensing and market development strategies produced below
average gains. For SIC 38: Medical Equipment, Surgical Equipment, Medical Supplies, innovation and market focus resulted in above average gain, and product differentiation led to below average gain.
For SIC Code 73: Prepackaged Software, differentiation, joint venture/licensing, and quality strategies led to above average gain, while uniqueness and cost leadership strategies resulted in below
average gains. Therefore, the third hypothesis: "Higher growth firms have distinctly different strategies than firms with slower growth" was strongly supported. Six of the strategy variables had no
significant relationship to gain.
© 1997 Babson College All Rights Reserved
Last Updated 03/03/98 | {"url":"http://fusionmx.babson.edu/entrep/fer/papers97/schwartz/sch5.htm","timestamp":"2014-04-16T16:21:47Z","content_type":null,"content_length":"9929","record_id":"<urn:uuid:d08bf42f-0a46-40d5-985a-d0c62d87721f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equal Area and Perimeter: Rectangles
Date: 09/09/2001 at 13:56:35
From: Jessica
Subject: Area and perimeter
There are only two rectangles whose area is exactly the same as their
perimeter if the dimensions of each are whole numbers. What are the
Date: 09/10/2001 at 12:46:10
From: Doctor Ian
Subject: Re: Area and perimeter
Hi Jessica,
Let's look at a rectangle for a moment:
   +---------+
   |         |
   |         | b
   +---------+
        a
The area will be a*b; the perimeter will be 2a + 2b. So we want to
find pairs of numbers (a,b) such that
a*b = 2a + 2b
More specifically, we want to find pairs of whole numbers for which
this equation is true. One simple way to attack a problem like this
is to start making a table:
 a   b   a*b   2a+2b
--- --- ----- -------
 0   0    0      0
 0   1    0      2
 1   0    0      2
 1   1    1      4
So we've found one solution already. A 0 by 0 rectangle has an area
equal to its perimeter.
(This seems kind of strange, I know, but it fits the definition. And
note that the problem explicitly says that the numbers must be 'whole
numbers', not 'counting numbers'. The only difference between
counting numbers and whole numbers is that the whole numbers include
zero. So whenever you see a problem that talks about 'whole numbers',
that's a tip that zero might turn out to be important.)
Now, the tough part about constructing a table like this is to make
sure that you cover all the possibilities. How can we guarantee that
we get all the possible (a,b) pairs? What if we miss one, and that
turns out to be the answer? We could keep looking forever.
Well, here is a trick for ordering these pairs:
The coordinates of the points, in order, are:
1: (0,0)
2; (0,1)
3: (1,1)
4: (1,0)
5: (0,2)
6: (1,2)
7: (2,2)
8: (2,1)
9: (2,0)
10: (0,3)
11: (1,3)
and so on. Do you see why this will get _every_ pair of whole numbers?
Of course, it will get some of them twice (for example, (2,1) and
(1,2) actually describe the same rectangle), which creates some extra
work. (I included both (0,1) and (1,0) in the table above before I
realized this. I didn't need to work out both of them.)
If we want to be lazy (which in math is often a good thing), we can
just look at the points above the diagonal:
(Do you see why this works?)
If we want to be even lazier, we can note that since
a*b = 2a + 2b
= 2(a + b)
the product a*b must be an even number. This is a break, because it
means that we can ignore any (a,b) pair in which both a and b are odd.
So we can forget about possibilities like (1,3), or (3,5), since there
is no way to multiply two odd numbers to get an even number.
Anyway, now you can make a table, which will let you find the other
answer to the problem.
Now, when making a table like this, there are two possibilities:
1. You'll run across the answer very quickly. If that happens,
then using a table will turn out to have been a good idea.
2. You won't run across the answer very quickly. If that happens,
there are two more possibilities.
a. You'll notice some pattern in the table that will
tell you where the answer has to be, without your
having to construct all the table entries in between.
If that happens, then using a table will turn out to
have been a good idea.
b. You won't notice any pattern like that. In that
case, you'll want to start looking around for another
way to solve the problem.
Fortunately, in this case, you should come across the answer pretty
quickly, if you use both of the shortcuts that I mentioned.
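The table search can also be done by brute force (a sketch, not part of the original answer): enumerate pairs with b >= a and keep those where the area equals the perimeter.

```python
# For a >= 5 the required b = 2a/(a - 2) drops below a, so a small bound suffices.
solutions = [(a, b)
             for a in range(50)
             for b in range(a, 50)
             if a * b == 2 * a + 2 * b]
print(solutions)  # [(0, 0), (3, 6), (4, 4)]
```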
Does this help? Write back if you'd like to talk about this some
more, or if you have any other questions.
- Doctor Ian, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55353.html","timestamp":"2014-04-17T13:21:36Z","content_type":null,"content_length":"9382","record_id":"<urn:uuid:b89798e1-e2c3-44b6-8b37-02086e81c2cc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the space of directions an inner metric space for inner metric space of curvature $\ge k$?
Let $X$ be an inner metric space with curvature bounded from below by $k$ in the sense of Toponogov. $\Sigma_p$ be the space of directions at point $p$. In the note by Plaut "Metric spaces of
curvature bounded from below", the author mentioned thesis of Stephanie Gloor (1998, Zurich), which contains an example of an inner metric space with curvature $\ge k$ such that the space of
directions at some point is not an inner metric space.
Does any one know this example?
1 Answer
I'm not familiar with that reference but a standard example for this is by Stephanie Halbeisen "On tangent cones of Alexandrov spaces with curvature bounded below".
The example is necessarily infinite dimensional as it's well known that this cannot happen in finite dimensions. Also, it's worth noting that there is no canonical definition of a space of directions for infinite dimensional Alexandrov spaces. The definition that Halbeisen uses is a metric completion of equivalence classes of geodesic segments starting at $p$. But there are other natural definitions possible (say, by looking at the ultralimit of pointed blow-ups of $X$ at $p$). All these definitions agree for finite dimensional Alexandrov spaces but not for infinite dimensional ones.
User Quanting Zhao
location Wuhan, Hubei Province, China
age 26
visits member for 3 years, 3 months
seen Feb 12 at 1:54
stats profile views 290
My interest lies in differential geometry,analytic and algebraic geometry, mathematical physics, especially moduli of curves and calabi-yau manifold, kahler and non-kahler geometry.
Votes cast (all time, by type): 2 up, 0 down. Posts: 1 question, 1 answer.
Differential Equations vs Linear Algebra
What applications are you interested in? Some fields use linear algebra heavily (computer vision) and some field use differential equations heavily (control theory).
I am still not sure yet. But as a computer engineering major (who also intends to do physics), I believe I'd like to work in fields like electronic devices, such as microprocessors.
I guess I should say physics in general, but career-wise, solid-state and quantum physics.
How to solve equation with many exp
January 29th 2011, 11:17 AM #1
I'm trying to understand how to solve this type of equation and don't know how:
$Ae^{x/B} + Ce^{x/D} + Fe^{x/G} + Ke^{x/L} + Me^{x/N} = 0$
I need to find x.
Thanks for your help,
Welcome to the forum.
If one of B, D, G, L, N is a multiple of the rest, then this problem reduces to finding roots of a polynomial. For example, suppose that B = d * D = g * G = l * L = n * N for some positive
integer d, g, l, n. Then $e^{x/D}=e^{(x/B)\cdot d}=(e^{x/B})^d$. So, the equation becomes $Ay+Cy^d+Fy^g+Ky^l+My^n=0$ for $y=e^{x/B}$.
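To illustrate the reduction with hypothetical numbers (my sketch, not part of the thread), take just two terms, A*e^(x/B) + C*e^(x/D) = 0 with B = 2 and D = 1, so d = 2 and the substitution y = e^(x/B) gives A*y + C*y^2 = 0:

```python
import math

A, C = 2.0, -1.0          # hypothetical coefficients
y = -A / C                # the nonzero root of A*y + C*y**2 = 0
x = 2 * math.log(y)       # invert y = e^(x/2); only roots y > 0 give a real x
residual = A * math.exp(x / 2) + C * math.exp(x)
print(x, residual)        # x = 2*ln(2) ≈ 1.386; residual ≈ 0
```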
Do you need to solve this equation symbolically or numerically? Is there some background knowledge that you need to use? Also, depending on these answers, you need to post this question to the
right forum (this one is about discrete mathematics).
Thank you, that is what I needed. I'll try to work it out
Berthelot P., Ogus A. Notes on crystalline cohomology (Princeton, 1978) MAh
Berthelot P., Ogus A. Notes on crystalline cohomology (Princeton, 1978)(T)(264s)_MAh_.djvu
Size 2.0Mb
Date Jun 22, 2005
an obvious functor back... | {"url":"http://www.eknigu.org/info/M_Mathematics/MA_Algebra/MAh_Homology/Berthelot%20P.,%20Ogus%20A.%20Notes%20on%20crystalline%20cohomology%20(Princeton,%201978)(T)(264s)_MAh_.djvu","timestamp":"2014-04-20T19:45:35Z","content_type":null,"content_length":"6400","record_id":"<urn:uuid:75c9c4b5-789f-4a66-ab82-aaafa7ddfedc>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integration - Continuous Distributions
I am having real trouble with this, the topic is not on this concept itself, but we have to use knowledge of it to answer questions in our area.
I am having real trouble understanding it.
So we have the general rule in our example:
G(x) = E[Y1 | Y1 < x] (this is apparently the expected value, given the assumptions; I don't really understand how they got to this statement itself)
But anyway, so this is the rule we should use. Then in our example, we have:
F(x) = x and G(x) = x^(N-1)
Hence: expected value = (x^(N-1)) * ((N-1)/N) * x
= ((N-1)/N) * x^N
Now I understand how to simplify the above to get the bottom answer, but I do not understand the conditional probability, or, in the example, how to get to the answer....
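For what it's worth, a quick Monte Carlo check often helps with this kind of conditional expectation. The sketch below assumes Y1 is the maximum of N-1 independent uniform draws on [0,1] (which is what gives G(x) = x^(N-1)); the factor (N-1)/N then shows up as E[Y1 | Y1 < x] = ((N-1)/N)*x, and multiplying by G(x) = x^(N-1) gives the ((N-1)/N)*x^N expression:

```python
import random

random.seed(1)

def mc_conditional_mean(N, x, trials=200_000):
    """Estimate E[Y1 | Y1 < x], with Y1 = max of N-1 uniform(0,1) draws."""
    total, count = 0.0, 0
    for _ in range(trials):
        y1 = max(random.random() for _ in range(N - 1))
        if y1 < x:              # condition on the event {Y1 < x}
            total += y1
            count += 1
    return total / count

N, x = 4, 0.8
est = mc_conditional_mean(N, x)
exact = (N - 1) / N * x         # integral of y*(N-1)*y^(N-2)/x^(N-1) over [0, x]
print(est, exact)               # the two should agree to about 3 decimal places
```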
Please help me, at the moment i am just searching the internet for integration examples, but everything is so different in format/notation etc. Please help! Here to discuss ! | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19205","timestamp":"2014-04-16T04:36:44Z","content_type":null,"content_length":"10633","record_id":"<urn:uuid:b367e8f6-634b-4969-82f8-94a2e118cb8f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
2008 NFL Week 3 Discussion
Miscellaneous observations:
Last week’s power rankings are 9-6 so far this week (corrected from an earlier 10-5).
The only two teams with beatpaths to every other team in their division: Denver and Arizona.
#31 beat #1, with some awesome direct snap misdirection.
And it’s time to start thinking of tiebreaker and rank strategies.
There are two stages to determining rankings. First is the beatloop resolution strategy. That is pretty stable, although doktarr and moose have written about possible ways to enhance it. The general
principle of beatloops is not to imply that the teams in a beatloop are tied – it’s more just that it is the smallest set of data that can be seen as ambiguous/confusing, and thus should be removed.
That way we rely on the rest of the graph to imply rankings. I think that trying to divine too much data from a beatloop just introduces too many judgment calls into a graph. We always remove
smallest beatloops first, starting with splits, and then recalculate.
We’ve tried some methods to bust beatloops here in the past. One that I was fond of was called the beatfluke method, defined as: If Team A’s loss to Team B was beatlooped away, and Team A also has an
entirely different remaining alternate beatpath to Team B, then it contradicts Team A’s loss, and thus the A-beats-C-beats-B part of the beatloop can be restored to the graph.
I found this made the graph more vertical, and also slightly more accurate, but I didn’t like how it would lead to more dramatic shifts in the power rankings each week. It made the graphs vary more
from week to week. Perhaps if it were combined with a more stabilizing tiebreaker, it could be used again.
The other two approaches of busting beatloops were doktarr’s “iterative” method, and Moose’s score method. The “iterative” method breaks shared beatloops at their shared link, like if one game is
responsible for the existence of several beatloops. It is another effort to try to identify one link of a beatloop (a game outcome) as flukey. I do have trouble justifying that one intuitively,
though – I feel like I need another reason to believe that link actually is flukey, other than it just being part of several beatloops. The other is a weighted system having to do with score
differentials. I believe this ended up accurate and perhaps superior, although I’m trying to keep the main system here free of extra data like points (as opposed to just wins and losses).
After that, there’s how to determine rankings from the resultant beatpath graph. So far this season, I’ve been breaking ties based off of the rankings of the previous week. But the usual tiebreaker
for later in the season is to compare the strength of the teams’ direct beatwins – for instance, if every team in a tied set has at least three beatwins, it averages the strength of the top three
beatwins of each of those teams, and picks the top team. Finally, I think Moose came up with a tiebreaker having to do with counting all the links in a resultant beatpath graph. This is somewhat
similar to what I used in the first and second year here, which counted number of teams above and below each team, but it yields more information in that it counts every link of every possible path,
thereby giving extra weight to stronger paths. I probably have this explanation wrong but Moose will correct me in the comments. This is a good candidate to apply as a tiebreaker to the official
rankings this season.
5 Responses to 2008 NFL Week 3 Discussion
1. I love the Beatpaths site… please check your email.
2. As far as I can tell the only beatloops are NE->NYJ->MIA->NE and CAR->CHI->IND->MIN->CAR. Since they are non-intersecting, my beatloop removal approach would give the same results as yours.
The way I would explain the iterative approach is that it is trying to remove the minimum amount of information from the system to give us non-contradictory results. I doubt that works better
for you on a gut level than the way you explained it, but there it is.
All the possible “tiebreakers” work for me. One thought is that you could average in last week’s ranking into another tiebreaker method for the next few weeks, and move over to a pure year 2008
tiebreaker once you have more data.
3. I’m pretty sure through last season we mostly didn’t like the weighted method because one big victory could throw the chart off permanently. Detroit ended up relatively high last year on the
weighted graph because of one blowout, and some other teams were jumbled in a way that most people would have disagreed with. The only place where the weighted system had merit was determining
the winner of the Super Bowl when it felt one team was significantly stronger than the other. This is too small of an area to consider the entire method superior. I run the weighted graph mostly
because it’s another perspective, but if we come up with another way to resolve loops, this will be the one to get dropped.
I also agree that what we’re trying to do here is to find a way to make the best relational graph with the least amount of data, so including score goes against that goal.
With regards to my ranking system, it isn’t so much of a “tiebreaker” as the scores themselves determine the ranks. You essentially describe it correctly though. Each team has all possible paths
measured coming in and going out. Every link above them costs a point, every link below gains a point. So for the NYG->WAS->NO->TB->ATL->DET path from last week, NO would get 3 points for the
paths below them but lose 2 for the ones above for a net of +1 point. It gets a lot more complicated when branches are involved, but it works out. After the point totals for each team are
determined, I run the teams through a normalizing formula to put them all on a scale from -10 to 10 where 0 means the team has the same number of paths going in and out. A high score not only
represents being high on the graph, but also having a breadth of teams below and several direct wins to high ranking teams.
4. In terms of the Iterative method, essentially it gives you the taller graphs that you like seeing from the BeatFluke method, but with more stability from week to week.
For my part, I’d rather do away with any reference to last year’s information as soon as possible. Now that Week 3 is over, my rankings will no longer use last season for a tiebreaker.
The weighted method this week breaks the NYJ/MIA/NE loop to be MIA->NE->NYJ because the NYJ win over MIA was the closest. The other loop gets broken in two places because IND->MIN and CAR->CHI
were both 3-point games. As the graph stands right now, a SD win over NYJ tonight won’t change anything but will give SD a direct arrow to NYJ. If NYJ wins, the graph will get a lot taller.
In the Standard and Iterative methods though, NYJ is currently detached from the graph due to the loop it is in, so tonight’s game will simply decide if they appear above or below SD.
5. I like your explanation better doktarr. Similar to how we were talking last season, finding the power rankings that contradict the fewest game results. Although it occurred to me that just
because one power ranking has fewer ignored victories than another, doesn’t mean that it’s more accurate.
I like the iterative approach but I have trouble reconciling it with the principle of punting on ambiguity and using the non-ambiguous parts of the graph to pick up the slack. Like, within a
beatloop, it probably isn’t true that the teams are “tied” in quality – but, that it is probably more informative to use information outside of the beatloop to determine which of the teams are
stronger, than it is to use information inside of the beatloop. Because if you use information inside of the beatloop to break the tie, then another team in the beatloop is always going to have a
good reason to disagree.
This goes back to when I was playing around a lot with Condorcet voting. People would always talk about how the likelihood of Schwartz sets and Smith sets were evidence of a “flaw” in the voting
system, because any system to tiebreak one of those sets would screw one of the candidates. When the truth was, the voting population really just did have ambiguous preferences of who they
preferred in the resultant Schwartz/Smith set, so the correct approach in those case would have been to work out either a power-sharing agreement, or reduce the collection of candidates down to
those in the set, and then stage another vote later (after collecting more data or giving voters more time to consider). | {"url":"http://beatpaths.com/2008/09/21/2008-nfl-week-3-discussion/","timestamp":"2014-04-18T08:57:23Z","content_type":null,"content_length":"28679","record_id":"<urn:uuid:f512ac88-6dec-4331-958a-60eb1088320c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
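A quick sketch of the path-counting score from comment 3, under one reasonable reading of it (count every directed path below a team as a point gained and every path above as a point lost). The graph here is the NYG->WAS->NO->TB->ATL->DET chain from that comment, and the code assumes loops have already been removed so the graph is a DAG:

```python
from functools import lru_cache

beats = {"NYG": ["WAS"], "WAS": ["NO"], "NO": ["TB"],
         "TB": ["ATL"], "ATL": ["DET"], "DET": []}

# reversed graph, for counting paths that end at a team
reverse = {t: [] for t in beats}
for a, succs in beats.items():
    for b in succs:
        reverse[b].append(a)

@lru_cache(maxsize=None)
def paths_down(team):
    """Number of directed paths starting at `team` (points gained)."""
    return sum(1 + paths_down(t) for t in beats[team])

@lru_cache(maxsize=None)
def paths_up(team):
    """Number of directed paths ending at `team` (points lost)."""
    return sum(1 + paths_up(t) for t in reverse[team])

score = {t: paths_down(t) - paths_up(t) for t in beats}
print(score["NO"])   # 1, the "net of +1 point" NO example from comment 3
```

Comment 3's normalization to a -10..10 scale would be a separate pass over these raw scores, and exactly how branching paths are weighted is ambiguous from the description, so treat this as an illustration rather than Moose's actual formula.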
Introduction To Algorithms
Results 11 - 20 of 53
- Journal of Parallel and Distributed Computing, Elsevier Science
Cited by 15 (7 self)
The utilization of parallel computers depends on how jobs are packed together: if the jobs are not packed tightly, resources are lost due to fragmentation. The problem is that the goal of high
utilization may conflict with goals of fairness or even progress for all jobs. The common solution is to use backfilling, which combines a reservation for the first job in the interest of progress
with packing of later jobs to fill in holes and increase utilization. However, backfilling considers the queued jobs one at a time, and thus might miss better packing opportunities. We propose the
use of dynamic programming to find the best packing possible given the current composition of the queue, thus maximizing the utilization on every scheduling step. Simulations of this algorithm,
called LOS (Lookahead Optimizing Scheduler), using trace files from several IBM SP parallel systems, show that LOS indeed improves utilization, and thereby reduces the mean response time and mean
slowdown of all jobs. Moreover, it is actually possible to limit the lookahead depth to about 50 jobs and still achieve essentially the same results. Finally, we experimented with selecting among
alternative sets of jobs that achieve the same utilization. Surprising results indicate that choosing the set at the head of the queue does not necessarily guarantee best performance. Instead,
repeatedly selecting the set with the maximal overall expected slowdown boosts performance when compared to all other alternatives checked.
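The core "find the best packing of the current queue" step can be illustrated with a subset-sum dynamic program over the queued jobs' processor requests. This is only the kernel of the idea; the actual LOS scheduler described above also respects the head-of-queue reservation and other constraints, and the job sizes below are invented:

```python
def best_packing(free_procs, job_sizes):
    """Choose a subset of queued jobs maximizing processors used (<= free_procs)."""
    # reachable[u] = one job subset (as indices) achieving utilization u
    reachable = {0: []}
    for i, size in enumerate(job_sizes):
        for used, chosen in list(reachable.items()):   # snapshot: job i used once
            u = used + size
            if u <= free_procs and u not in reachable:
                reachable[u] = chosen + [i]
    best = max(reachable)
    return best, reachable[best]

used, picked = best_packing(10, [4, 7, 3, 5])
print(used, picked)   # 10 [1, 2] -- the size-7 and size-3 jobs fill all 10 processors
```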
- Mathematical Programming , 2002
Cited by 14 (4 self)
We design an algorithm for the high-multiplicity job-shop scheduling problem with the objective of minimizing the total holding cost by appropriately rounding an optimal solution to a fluid relaxation in which we replace discrete jobs with the flow of a continuous fluid. The algorithm solves the fluid relaxation optimally and then aims to keep the schedule in the discrete network close to the schedule given by the fluid relaxation. If the number of jobs from each type grows linearly with N, then the algorithm is within an additive factor O(N) from the optimal (which scales as O(N^2)); thus, it is asymptotically optimal. We report computational results on benchmark instances chosen from the OR library comparing the performance of the proposed algorithm and several commonly used heuristic methods. These results suggest that for problems of moderate to high multiplicity, the proposed algorithm outperforms these methods, and for very high multiplicity the overperformance is dramatic. For problems of low to moderate multiplicity, however, the relative errors of the heuristic methods are comparable to those of the proposed algorithm, and the best of these methods performs better overall than the proposed method. Received December 1999; revisions received July 2000, September 2001; accepted September 2002. Subject classifications: Production/scheduling, deterministic: approximation algorithms for deterministic job shops. Queues, optimization: asymptotically optimal solutions to queueing networks. Area of review: Manufacturing, Service, and Supply Chain Operations.
- In Proceedings of the 10th European Symposium on Algorithms (ESA , 2002
Cited by 13 (2 self)
Abstract. We introduce the online scheduling problem for sorting buffers. A service station and a sorting buffer are given. An input sequence of items which are only characterized by a specific
attribute has to be processed by the service station which benefits from consecutive items with the same attribute value. The sorting buffer which is a random access buffer with storage capacity for
k items can be used to rearrange the input sequence. The goal is to minimize the cost of the service station, i.e., the number of maximal subsequences in its sequence of items containing only items
with the same attribute value. This problem is motivated by many applications in computer science and economics. The strategies are evaluated in a competitive analysis in which the cost of the online
strategy is compared with the cost of an optimal offline strategy. Our main result is a deterministic strategy that achieves a competitive ratio of O(log^2 k). In addition, we show that several standard strategies are unsuitable for this problem, i.e., we prove a lower bound of Ω(√k) on the competitive ratio of the First In First Out (FIFO) and Least Recently Used (LRU) strategy and of Ω(k) on the competitive ratio of the Largest Color First (LCF) strategy.
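The service-station cost used in this abstract, the number of maximal subsequences of items sharing an attribute value, is easy to compute, which makes the benefit of reordering visible (the attribute values below are made up):

```python
def service_cost(sequence):
    """Count maximal runs of consecutive equal attribute values."""
    cost, prev = 0, object()        # sentinel compares unequal to everything
    for item in sequence:
        if item != prev:
            cost += 1               # a new run starts here
        prev = item
    return cost

arrival_order = ["red", "blue", "red", "blue", "red"]
buffered_order = ["red", "red", "red", "blue", "blue"]
print(service_cost(arrival_order), service_cost(buffered_order))   # 5 2
```

A size-k sorting buffer can only reorder within a bounded window, so the second ordering is the offline best case here; the competitive analysis above asks how close an online strategy can get to it.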
Cited by 10 (0 self)
A stencil computation repeatedly updates each point of a d-dimensional grid as a function of itself and its near neighbors. Parallel cache-efficient stencil algorithms based on “trapezoidal decompositions” are known, but most programmers find them difficult to write. The Pochoir stencil compiler allows a programmer to write a simple specification of a stencil in a domain-specific stencil language embedded in C++ which the Pochoir compiler then translates into high-performing Cilk code that employs an efficient parallel cache-oblivious algorithm. Pochoir supports general d-dimensional stencils and handles both periodic and aperiodic boundary conditions in one unified algorithm. The Pochoir system provides a C++ template library that allows the user’s stencil specification to be executed directly in C++ without the Pochoir compiler (albeit more slowly), which simplifies user debugging and greatly simplified the implementation of the Pochoir compiler itself. A host of stencil benchmarks run on a modern multicore machine demonstrates that Pochoir outperforms standard parallel-loop implementations, typically running 2–10 times faster. The algorithm behind Pochoir improves on prior cache-efficient algorithms on multidimensional grids by making “hyperspace” cuts, which yield asymptotically more parallelism for the same cache efficiency.
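For readers unfamiliar with the term, a stencil update in its naive form looks like the sketch below (a 1-D three-point example with periodic boundaries). This is just the problem Pochoir's compiler targets, not its cache-oblivious trapezoidal algorithm:

```python
def stencil_step(u, alpha=0.1):
    """One explicit time step: each point updated from itself and its neighbors."""
    n = len(u)
    return [u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

u = [0.0, 0.0, 1.0, 0.0, 0.0]     # a single hot cell
for _ in range(3):
    u = stencil_step(u)            # heat spreads symmetrically outward
print(u)
```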
- Information and Computation , 1999
Cited by 8 (1 self)
We introduce a new class of scheduling problems in which the optimization is performed by the worker (single "machine") who performs the tasks. The worker's objective may be to minimize the amount of work he does (he is "lazy"). He is subject to a constraint that he must be busy when there is work that he can do; we make this notion precise, particularly in the case in which preemption is allowed. The resulting class of "perverse" scheduling problems, which we term "Lazy Bureaucrat Problems," gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules.
1 Introduction
Scheduling problems have been studied extensively from the point of view of the objectives of the enterprise that stands to gain from the completion of the set of jobs. We take a new look at the problem from the point of view of the workers who perform the tasks that earn the company its profits. In fact, it is natural to expect that
- in IEEE Infocom 2004, (Hongkong , 2004
Cited by 8 (1 self)
In multiuser wireless networks, opportunistic scheduling can improve the system throughput and thus reduce the total completion time. In this paper, we explore the possibility of reducing the
completion time further by incorporating traffic information into opportunistic scheduling. More specifically, we first establish convexity properties for opportunistic scheduling with file size
information. Then, we develop new traffic aided opportunistic scheduling (TAOS) schemes by making use of file size information and channel variation in a unified manner. We also derive lower bounds
and upper bounds on the total completion time, which serve as benchmarks for examining the performance of the TAOS schemes. Our results show that the proposed TAOS schemes can yield significant
reduction in the total completion time. The impact of fading, file size distributions, and random arrivals and departures, on the system performance, is also investigated. In particular, in the
presence of user dynamics, the proposed TAOS schemes perform well when the arrival rate is reasonably high.
- IN SODA 09: PROCEEDINGS OF THE TWENTIETH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS , 2009
Cited by 8 (2 self)
In this paper two scheduling models are addressed. First is the standard model (unicast) where requests (or jobs) are independent. The other is the broadcast model where broadcasting a page can satisfy multiple outstanding requests for that page. We consider online scheduling of requests when they have deadlines. Unlike previous models, which mainly consider the objective of maximizing throughput while respecting deadlines, here we focus on scheduling all the given requests with the goal of minimizing the maximum delay factor. The delay factor of a schedule is defined to be the minimum α ≥ 1 such that each request i is completed by time ai + α(di − ai) where ai is the arrival time of request i and di is its deadline. Delay factor generalizes the previously defined measure of maximum stretch which is based only on the processing times of requests [9, 11]. We prove strong lower bounds on the achievable competitive ratios for delay factor scheduling even with unit-time requests. Motivated by this, we consider resource augmentation analysis [24] and prove the following positive results. For the unicast model we give algorithms that are (1 + ɛ)-speed O(1/ɛ)-competitive in both the single machine and multiple machine settings. In the broadcast model we give an algorithm for same-sized pages that is (2 + ɛ)-speed O(1/ɛ^2)-competitive. For arbitrary page sizes we give an algorithm that is (4 + ɛ)-speed O(1/ɛ^2)-competitive.
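The delay factor definition above is concrete enough to compute directly for any finished schedule (the request tuples below are invented for illustration):

```python
def delay_factor(requests):
    """requests: (arrival a, deadline d, completion c) triples.

    Returns the smallest alpha >= 1 with c <= a + alpha*(d - a) for every request.
    """
    return max(1.0, max((c - a) / (d - a) for a, d, c in requests))

on_time = [(0, 10, 8)]            # finishes inside its window: alpha stays 1
late = [(0, 10, 8), (2, 6, 8)]    # second request overshoots its window by 50%
print(delay_factor(on_time), delay_factor(late))   # 1.0 1.5
```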
- ACM SIGMOBILE Mobile Computing and Communications Review , 2002
"... this paper, we analyze different packet scheduling algorithms to find those that most improve performance in congested networks ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=407688&sort=cite&start=10","timestamp":"2014-04-21T02:46:21Z","content_type":null,"content_length":"39227","record_id":"<urn:uuid:696e8c91-def2-4527-9297-bc49f4b6c3b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Repeated Addition
Lesson Ideas
When you multiply, you put equal groups together to find the total. In this math movie, Annie and Moby show how to solve multiplication number sentences by using repeated addition, or adding the
same number over and over again. Learn how to use different strategies like drawing pictures, skip-counting, and using a hundred chart to solve multiplication sentences.
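The same repeated-addition strategy maps directly onto a loop, which can be a nice bridge for older students; a tiny illustration (not part of the BrainPOP materials):

```python
def multiply_by_repeated_addition(groups, group_size):
    """4 x 3 means 4 groups of 3: add 3, four times."""
    total = 0
    for _ in range(groups):
        total += group_size
    return total

print(multiply_by_repeated_addition(4, 3))   # 12, same as 3 + 3 + 3 + 3
```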
In this lesson plan which is adaptable for grades 2-7, students will explore authentic contexts for measurement skills. Students will use a free online math game to take on the role of a carpenter
who needs to cut planks of wood. Younger students will practice equal grouping in early levels of the game, while older students practice measurement, factoring, division, and algebra as they cut
slabs of wood in the advanced game levels. Students then work collaboratively to create math word problems that include real world measurement scenarios.
This lesson plan is aligned to Common Core State Standards. See more »
In this lesson plan which is adaptable for grades 1-5, students use BrainPOP resources (including a free online interactive game) to practice combining numbers to create target amounts. Students will
apply a variety of mental math strategies as they add during online game play, as well as in a hands-on addition game they create with a partner.
This lesson plan is aligned to Common Core State Standards. See more »
In this lesson plan which is adaptable for grades 1-5, students will use an online math game to practice creating number combinations (such as whole numbers which add up to 10, or decimals which add
up to 1.) Students then create their own version of the game using hands-on materials.
This lesson plan is aligned to Common Core State Standards. See more »
In this lesson plan which is adaptable for grades 2-5, students will use BrainPOP Jr. and BrainPOP resources (including an online math game) to practice multiplying whole numbers and/or decimals.
Students will identify patterns within a multiplication table and create their own multiplication tables with unique patterns.
This lesson plan is aligned to Common Core State Standards. See more »
In this multi-day lesson plan for grade K through 3, students use a BrainPOP Jr. movie, graphic organizer, and flip chart to deepen their understanding of basic addition and mathematical thinking
This lesson plan is aligned to Common Core State Standards. See more »
This page contains information to support educators and families in teaching K-3 students about repeated addition. The information is designed to complement the BrainPOP Jr. movie Repeated Addition.
It explains the type of content covered in the movie, provides ideas for how teachers and parents can develop related understandings, and suggests how other BrainPOP Jr. resources can be used to
scaffold and extend student learning.
In this set of activities adaptable for grades K-3, parents and educators will find ideas for teaching about repeated addition as a strategy for multiplication. These activities are designed to
complement the BrainPOP Jr. Repeated Addition topic page, which includes a movie, quizzes, online games, printable activities, and more. | {"url":"http://www.brainpop.com/educators/community/bp-jr-topic/repeated-addition/","timestamp":"2014-04-17T09:52:27Z","content_type":null,"content_length":"56353","record_id":"<urn:uuid:5d433563-169a-446a-b817-1c622c0a4aaf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
The CompoundCurve geometry in SQL Server 11 (Denali)
In my last post, I looked at the CircularString geometry introduced in SQL Server Denali, which allows you to define circular curve segments between points in a series, with each circular arc segment
defined by an anchor point between the two endpoints.
In this post, I’ll look at another new geometry type – the CompoundCurve. Whereas the LineString defines a one-dimensional geometry by linear interpolation between a series of points, and the
CircularString uses circular interpolation, the CompoundCurve can include a mixture of both of these types.
Here’s the WKT syntax for a CompoundCurve consisting of two line segments and two circular arc segments. Notice that circular arc segments within the curve must be prefixed by CIRCULARSTRING, whereas
linear segments do not require (and, in fact, must not have) the corresponding LINESTRING keyword.
DECLARE @CompoundCurve geometry = 'COMPOUNDCURVE((5 3, 5 13), CIRCULARSTRING(5 13, 7 15, 9 13), (9 13, 9 3), CIRCULARSTRING(9 3, 7 1, 5 3))'
What’s interesting is that we don’t actually need to use the CompoundCurve to define this geometry – the CircularString geometry alone can contain a mixture of curved and straight edges: for straight
line edges, the anchor point simply needs to lie on the straight line between the start and end point. We therefore could create exactly the same geometry using the CircularString by defining one
additional anchor point on each of the straight edge sides as follows:
DECLARE @CircularString geometry = 'CIRCULARSTRING(5 3, 5 8, 5 13, 7 15, 9 13, 9 8, 9 3, 7 1, 5 3)';
In fact, as stated earlier, the CompoundCurve is much like a GeometryCollection containing a mixture of LineStrings and CircularStrings. We could therefore also define this same geometry as follows:
DECLARE @GeometryCollection geometry = 'GEOMETRYCOLLECTION(LINESTRING(5 3, 5 13), CIRCULARSTRING(5 13, 7 15, 9 13), LINESTRING(9 13, 9 3), CIRCULARSTRING(9 3, 7 1, 5 3))';
All three of these instances contain exactly the same set of points, as can be demonstrated with the following code:
SELECT @CompoundCurve.STEquals(@CircularString); -- returns 1 (true)
SELECT @CompoundCurve.STEquals(@GeometryCollection); -- returns 1 (true)
You can also test the length of each instance:
SELECT @CompoundCurve.STLength(); --32.5663706143592
SELECT @CircularString.STLength(); --32.5663706143592
SELECT @GeometryCollection.STLength(); --32.5663706143592
(The two line segments are 10 units long, and the two circular arcs combined form a circle of radius 2 units. (10 * 2) + (2 * PI * 2) = 32.5663706143592)
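That arithmetic is easy to double-check outside SQL Server (plain Python here; the figures come from the geometry above, two 10-unit straight sides plus two semicircular arcs of radius 2, which together make one full circle):

```python
import math

straight = 2 * 10            # (5,3)-(5,13) and (9,13)-(9,3), 10 units each
curved = 2 * math.pi * 2     # circumference of a radius-2 circle
total = straight + curved
print(total)                 # 32.56637061435917, matching STLength() above
```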
So why would you ever want to use the CompoundCurve class? Well, although it provides no greater functionality than the two alternatives presented here, and represents exactly the same set of points,
it does provide a more efficient way of representing that geometry, certainly in terms of storage space required. Comparing the size of each geometry in this example:
SELECT DATALENGTH(@CompoundCurve), -- 152
DATALENGTH(@CircularString), --176
DATALENGTH(@GeometryCollection) --243
This is because the CompoundCurve doesn’t require the additional overhead of the GeometryCollection (which is capable of storing any type of geometry), nor the additional points required by the
CircularString to define the straight line segments.
I don’t know if performing operations against the CompoundCurve would be significantly faster than the other two options but, in the absence of any other factors, it still seems to be the best choice
in this situation.
Matches for:
Hindustan Book Agency
2013; 244 pp; softcover
ISBN-13: 978-93-80250-53-3
List Price: US$52
Member Price: US$41.60
Order Code: HIN/63
This self-contained book is intended to be read with profit by beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some
connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from
commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction
expansions, including the mixing property of the Gauss map, is given.
Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet
series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of
analytic functions in a half-plane. Finally, chapter seven presents the Bagchi-Voronin universality theorems, for the zeta function, and \(r\)-tuples of \(L\)-functions. The proofs, which mix
hilbertian geometry, complex and harmonic analysis, and ergodic theory, are a very good illustration of the material studied earlier.
A publication of Hindustan Book Agency; distributed within the Americas by the American Mathematical Society. Maximum discount of 20% for all commercial channels.
Researchers interested in number theory with an emphasis on Diophantine approximation and the analytic theory of Dirichlet series.
• A review of commutative harmonic analysis
• Ergodic theory and Kronecker's theorems
• Diophantine approximation
• General properties of Dirichlet series
• Probabilistic methods for Dirichlet series
• Hardy spaces of Dirichlet series
• Voronin type theorems
• Index | {"url":"http://ams.org/bookstore?fn=20&arg1=whatsnew&ikey=HIN-63","timestamp":"2014-04-21T08:50:31Z","content_type":null,"content_length":"15363","record_id":"<urn:uuid:03bc2888-c514-49a6-a999-596ddba9f7ec>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math - Prime number - Page 6 - New Logic/Math Puzzles
To do this proof, we need to show that for any prime number 'a' other than 3, ((a*a) + 26)/3 is an integer. We do this by looking at the modulus (the remainder mod 3) of each number.
26 is congruent to 2 mod 3. Which means if 26 is divided by 3, the remainder is 2.
It then suffices to show that a*a is congruent to 1 mod 3. Which means if a*a is divided by 3, the remainder is 1.
...... blah blah blah
-Mathematics Graduate Student
University of Missouri
Here is another way to look at this proof:
The proof requires bases. I will be working in base three (number set 0, 1, 2)
Here is a base 3 multiplication table (leaving out 0) 1*1=1, 1*2=2, 2*2=11.
So, any number divisible by 3 will end in 0 (base 3).
A prime number other than 3 is not divisible by 3, so in base 3 it will end in either 1 or 2.
When I square that prime number, it will end in 1, always. (see multiplication table).
So, I just need to add 2, or 3n+2 to the square to make it divisible by 3.
I can use 3n + 2 in the place of "26" to make this puzzle work. So that the addend is not obvious, n is even.
Therefore, the problem works for a*a+2 as well as a*a+38 | {"url":"http://brainden.com/forum/index.php/topic/705-math-prime-number/page-6","timestamp":"2014-04-17T16:18:03Z","content_type":null,"content_length":"93631","record_id":"<urn:uuid:0dbfbaa5-2236-4fba-bf6f-6b079fd8e588>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
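The claim is easy to check mechanically; a quick, hedged Python sketch (note that a = 3 itself must be excluded, since 3*3 + 26 = 35 is not divisible by 3):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# Every prime a other than 3 satisfies a*a % 3 == 1, so a*a + 26 (and
# a*a + 38 -- both addends are of the form 3n + 2) is divisible by 3.
for a in range(2, 1000):
    if is_prime(a) and a != 3:
        assert (a * a + 26) % 3 == 0
        assert (a * a + 38) % 3 == 0

print((3 * 3 + 26) % 3)  # 2 -- the lone exception, which is why a = 3 is excluded
```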
Devon Calculus Tutor
Find a Devon Calculus Tutor
...My expertise is in Algebra 1, Algebra 2, Geometry, Trigonometry, Pre-Calculus, Calculus and SAT/ACT preparation. I am a flexible, enthusiastic, and encouraging tutor. My experience allows me to identify students' weak areas and then find effective ways to explain concepts, mostly by relating them to real-life situations.
15 Subjects: including calculus, physics, geometry, algebra 1
...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair!
14 Subjects: including calculus, physics, geometry, ASVAB
...I can provide that. A continuation of Algebra 1 (see course description). Use of irrational numbers, imaginary numbers, quadratic equations, graphing, systems of linear equations, absolute
values, and various other topics. May be combined with some basic geometry.
32 Subjects: including calculus, English, geometry, biology
...I have taught students from kindergarten to college age, and I build positive tutoring relationships with students of all ability and motivation levels. All of my students have seen grade
improvement within their first two weeks of tutoring, and all of my students have reviewed me positively. T...
38 Subjects: including calculus, Spanish, English, reading
...Students find me friendly and supportive, and I have innovative ways, based on each student's learning style, to help them understand each topic in math. I can provide references from satisfied
clients. My bachelor's degree in is math, and I have a master's of education (M.Ed) from Temple University.
22 Subjects: including calculus, writing, geometry, statistics | {"url":"http://www.purplemath.com/devon_calculus_tutors.php","timestamp":"2014-04-20T21:27:29Z","content_type":null,"content_length":"23624","record_id":"<urn:uuid:14c58dea-14f4-4988-8629-03d3a59d19e7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Frequent Flyers' Brain Teaser
Frequent Flyers
Logic Grid puzzles come with a handy interactive grid that will help you solve the puzzle based on the given clues.
Puzzle ID: #24144
Category: Logic-Grid
Submitted By: cinnamon30
Garth and four other members of the Benton Remote Control Model Airplane Club took their hobby indoors and began working on long-term winter projects. Each member (each of whom holds a different
executive office in the club) set up a new, uncluttered workbench in a different room of his or her home to construct a scale model of a different airplane. The modelers worked at different speeds.
No two modelers spent the same amount of time on their projects. From the info provided, can you determine the model (one is a Bleriot XI) each modeler constructed, the office each modeler holds in
the club, the room each one worked in and the number of weeks (6, 7, 8, 9, or 10) each modeler took to finish his or her project?
Modelers - Daniel, Garth, Gilda, Jerry, Thea
Office - alternate officer, president, secretary, treasurer, vice-president
Model - A6M2 Zero, Bleriot XI, Fokker D VII, Messerschmitt Me 109, P-51D Mustang
Room - Bedroom, Den, Family, Sunroom, workroom
Weeks - 6, 7, 8, 9, 10
1. The vice-president, who isn't the one who built a model in six weeks, isn't the one who built a model in the bedroom.
2. The secretary's model took fewer weeks to construct than did the A6M2 Zero. Neither Jerry nor Thea is the secretary; Thea constructed a model of a Fokker D VII.
3. The club's alternate officer took exactly eight weeks to complete a model.
4. Daniel took exactly one week fewer to construct his model than did the person who built a model in the den, who took exactly one week fewer to build a model than did the secretary.
5. The model that was built in the sunroom, which is not the one that was constructed by the president, didn't take ten weeks to construct.
6. The five modelers are the one who built a model in the family room, the treasurer, the one who built the P-51D Mustang, the vice-president, and the one who took seven weeks to finish a model.
7. Gilda's model took fewer weeks to finish than did the Messerschmitt Me 109.
A discontinuous finite volume method for the Stokes problems.
(English) Zbl 1112.65125
The author extends the ideas developed in recent work to solve the Stokes equations on both triangular and rectangular meshes. In the author’s methods, the velocity is approximated by discontinuous
piecewise linear functions on a triangular mesh and by discontinuous piecewise rotated bilinear functions on a rectangular mesh. Piecewise constant functions are used as test functions for the
velocity in the discontinuous finite volume method. Therefore, after multiplying the differential equations by the test functions and integrating by parts, the area integrals in the formulations will
disappear, which gives the simplicity of the finite volume method.
One of the advantages of using discontinuous approximation functions is that it is easy to build high-order elements. Main result: optimal error estimates for the velocity in the norm $|||·|||$ and for the pressure in the ${L}_{2}$-norm are derived. An optimal error estimate for the approximation of the velocity in a mesh-dependent norm is obtained. Precise proofs of all theorems and lemmas are given.
65N30 Finite elements, Rayleigh-Ritz and Galerkin methods, finite methods (BVP of PDE)
65N15 Error bounds (BVP of PDE)
76D07 Stokes and related (Oseen, etc.) flows
35J25 Second order elliptic equations, boundary value problems
35Q30 Stokes and Navier-Stokes equations
76M12 Finite volume methods (fluid mechanics) | {"url":"http://zbmath.org/?q=an:1112.65125","timestamp":"2014-04-20T03:24:14Z","content_type":null,"content_length":"22186","record_id":"<urn:uuid:edb66d87-f6c9-47e9-91c2-ab6a3aaa49f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Total # Posts: 5
A random variable x has a following probability distribution. x: 0 1 2 3 P(X=x): 1/6 1/2 1/5 2/15 How to calculate E(1/(X+1))?
A random variable x has a following probability distribution. x: 0 1 2 3 P(X=x) 1/6 1/2 1/5 2/15 How to calculate E(1/(X+1))?
A random variable x has a probability distribution. How to calculate E(1/(X+1))?
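For reference, an expectation like E(1/(X+1)) is just the probability-weighted sum of 1/(x+1); a sketch using exact fractions, with the distribution as posted:

```python
from fractions import Fraction

# The distribution from the post: P(X = 0..3) = 1/6, 1/2, 1/5, 2/15
dist = {0: Fraction(1, 6), 1: Fraction(1, 2), 2: Fraction(1, 5), 3: Fraction(2, 15)}
assert sum(dist.values()) == 1  # sanity check: it is a valid distribution

# E[g(X)] = sum of g(x) * P(X = x); here g(x) = 1/(x+1)
expectation = sum(p * Fraction(1, x + 1) for x, p in dist.items())
print(expectation)  # 31/60
```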
A bucket contains 7 red, 3 blue, 2 black and 1 yellow ball of the same size. A random selection of 5 balls is taken from the bucket. Find the probability that the random selection will contain at least 1 red ball.
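A sketch of the complementary-counting approach usually taken for this kind of question (compute the chance of drawing no red ball, then subtract from 1):

```python
from math import comb
from fractions import Fraction

# 13 balls total (7 red + 3 blue + 2 black + 1 yellow), 5 drawn without replacement.
# P(at least one red) = 1 - P(no red) = 1 - C(6,5)/C(13,5)
p_no_red = Fraction(comb(6, 5), comb(13, 5))
p_at_least_one_red = 1 - p_no_red
print(p_at_least_one_red)  # 427/429
```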
1- Which entrepreneurial/small business owner characteristics does Bill have that may be important to his success? Which characteristics could lead to his failure? 2- What steps should Bill take to
avoid the pitfalls common to a small business? | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Mahalakshmi","timestamp":"2014-04-19T21:06:38Z","content_type":null,"content_length":"6904","record_id":"<urn:uuid:46b37a1c-07e0-4e9f-9abf-8a3b747fc5a4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random number generator
From RogueBasin
A random number generator, or RNG for short, is a method of generating numerical values that are unpredictable and lacking in any sort of pattern. In game development, accessing "true" randomness is
inconvenient at best, so programmers resort to using pseudo-random number generators, or PRNGs, which use specially crafted mathematical algorithms to allow computers to simulate randomness. PRNGs
are never truly random, but they are unpredictable enough for practical purposes.
Roguelike games often use PRNGs to compute dice rolls and other situations that require random generation. Some RNG algorithms evaluate simple polynomials, while others use techniques derived from
fractals or chaos theory.
Many players jokingly refer to the RNG as the Random Number God, or simply RN God, the entity inside the game that provides random functionality. This deity seems to leave great items and easy
monsters for some players, while granting seemingly unfair deaths to others. Some players superstitiously avoid offending the Random Number God.
Typical PRNG algorithms start with an initial seed. Each random value is generated by running the last generated value through a special algorithm, starting with the seed. For example, if the seed is
5 and the algorithm is f(n) = 3n + 1 mod 10, we generate the sequence 5 6 9 8 5 6 9 8... (This is an extremely poor PRNG, since it repeats every four values and does not generate all the numbers mod 10.)
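The toy generator described above can be run in a few lines; the four-value cycle 6, 9, 8, 5 is immediately visible:

```python
def f(n):
    return (3 * n + 1) % 10  # the toy iteration rule from the text

seed = 5
values, state = [], seed
for _ in range(8):
    state = f(state)
    values.append(state)

print(values)  # [6, 9, 8, 5, 6, 9, 8, 5] -- the state repeats with period 4
```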
Some PRNGs produce integers, while others produce floating-point values. Many roguelikes that use D&D-style systems use integers for die rolls; converting floats to integers is often done by multiplying and applying the floor function.
The above PRNG example forms a repeating sequence, as do all PRNGs. Since it repeats every four values, we say that it has period 4. A good PRNG has a very large period, so the values will not repeat
for a long time. One CMWC (complementary multiply with carry) generator has a period of approximately 10^13101!
If an eavesdropper, given a sequence of outputs from a certain PRNG, cannot determine the next value, then the RNG is called cryptographically secure. Cryptographically secure PRNGs do exist, but
most are slow and suitable only for cryptography. Fortunately, random number security is not a major concern in nearly all roguelikes.
When using a PRNG, one must be careful with seeding. Since the same seeds will produce the same string of values, if every game uses the same seed, then it could roll the same character and start
with the same map every time! The simplest and most common way to ensure proper seeding is to use the current time. One can also use player behavior as a source of randomness, but the programmer must
be careful to keep the player from directly controlling the game's randomness.
Desired features
Pseudorandom number generators, in order to be considered "good", must offer the following:
PRNG algorithms have different calculation speeds. Linear congruential generators (generators of the form f(n) = an + b mod m) are currently the fastest generators that exhibit decent randomness. In
general, however, speed is usually at a loss of quality.
Unfortunately, not all algorithms offer a uniform distribution. For instance, if we had a random number generator that returned values from 0 to 3, we cannot get random numbers from 0 to 2 by computing its output modulo 3, or the number 0 would appear twice as often as 1 or 2!
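This bias is easy to demonstrate exhaustively for the hypothetical 0-to-3 generator used in the example:

```python
from collections import Counter

# Suppose a generator produces 0, 1, 2, 3 uniformly; reducing with "% 3"
# folds two inputs (0 and 3) onto the same output, 0:
counts = Counter(v % 3 for v in range(4))
print(dict(counts))  # {0: 2, 1: 1, 2: 1} -- 0 occurs twice as often
```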
Generally, the period of a PRNG should be maximal.
PRNGs in programming languages
Most programming environments provide their own PRNG. In many of these cases, the programmer cannot control the type of generator used, which may be undesirable in certain situations. Fortunately,
there are a handful of PRNGs out there that are relatively easy to hand-code, although they are not provided here.
C and C++
In C and C++, the random number generator is the rand() function. One sets the seed with srand(). Some call these the "ANSI C" (or equivalently "ISO C") functions because they are part of the C
standard and to distinguish them from other RNGs that some systems provide.
The rand() function returns an int in the range from 0 to RAND_MAX, a platform-dependent value. Here is a simple program to show one random value:
#include <stdlib.h> /* rand, srand */
#include <stdio.h> /* printf (for this example) */
#include <time.h> /* time */
int main() {
srand( time(NULL) );
printf("%d\n", rand());
return 0;
}
Many programmers like to seed srand() with the return value of time(), as we do above. In fact, if we run this program multiple times within one second, it may print the same number again. (Some will
write "srand( time(0) )", which is considered bad by some because the time function takes a pointer. Using NULL reminds you that it is a pointer. However, using 0 does the same thing, because NULL is
almost always #defined to be 0. (See http://www.lysator.liu.se/c/c-faq/c-1.html#1-3.) If you want, you can add a cast to unsigned int. Adding it is necessary to satisfy lint.)
Now we come to an important note: Many platforms have poor-quality versions of the rand() function. On GNU platforms, rand() and srand() work relatively well, and The GNU C Library Reference Manual recommends their use. If your favorite platform is GNU or Linux, then you could program with rand() and srand() and have your game at least working on other C platforms. But the most
popular roguelike games support many platforms well, and their developers avoid rand() when they can.
BSD platforms implement rand() and srand() in terms of rand_r(), which limits the state of the RNG to 32 bits (implying a period of 2**31-1 or less). BSD calls this a "bad random number generator",
while GNU states that 32 bits is "far too few to provide a good RNG."
Meanwhile, though RAND_MAX is platform dependent, BSD and GNU use 2,147,483,647, the largest possible signed int. One should be warned that on MSVC, RAND_MAX is 32,767, which may be a lot smaller
than you expect. (All of the C and C++ standards, through C1X and C++0X, merely require that RAND_MAX be at least 32,767.)
If in doubt, you should implement a new RNG. However, you should not try to create your own algorithm, as it would almost certainly be even worse than the one you're trying to replace! Many people
think the Mersenne twister is best, and there are existing implementations of it.
A very good random number generator for C/C++ is RandomLib (I am not the author, and I have tested this library and found it excellent) at http://sourceforge.net/projects/randomlib/ . It is LGPL and
supports a variety of algorithms, including Mersenne Twister.
The newest C++11 standard features standard random number facilities.
The Random class in the java.util package is used to generate random numbers. The constructor, new Random(), will seed that instance with a value based on the current time. The method random.nextInt(n) will generate a random number from 0 to n - 1 inclusive. This is preferable to using nextDouble(), which returns a double value between 0 and 1 and can introduce rounding error when scaled to an integer range. Example:
import java.util.Random;

public class Dice {
    private final Random random = new Random();

    // Tabletop-style randomness: dice(2, 6) rolls 2d6
    public int dice(int number, int sides) {
        int total = 0;
        for (int i = 0; i < number; i++)
            total += random.nextInt(sides) + 1;
        return total;
    }
}
Python includes a very handy module for generating random numbers, namely random:
#!/usr/bin/env python2.7
import random

# Float in the half-open range [0, 1)
print(random.random())

# Integer in the range [a, b], both endpoints included
print(random.randint(1, 10))

# Random choice from a list
print(random.choice(["Foo", "Bar", "Baz"]))

# Shuffling a list in place
a = [1, 2, 3, 4, 5]
random.shuffle(a)
print(a)

# Sample from the standard normal distribution
print(random.gauss(0, 1))
For more information, see the documentation for the module.
Pseudorandom number generators, or PRNGs, struggle against difficulties in generating numbers in as few CPU ticks as possible, while retaining a decent periodicity and quality. George Marsaglia
created an application used for testing the quality of PRNG algorithms, called Diehard. He then brought up criticisms against many commonly used algorithms, mainly the Multiplicative Congruential
Generator (used in most rand() function implementations). His comments can be found, for instance, on comp.lang.c Usenet group.
As Marsaglia has proven, MCGs create non-uniform pseudorandom numbers, falling into parallel hyperplanes. One of the examples of this undesired behaviour is the infamous RANDU. Additionally, most
PRNGs of this type are predictable, as it is enough to observe 2 or 3 consecutive outputs to discover all the others. Also, this algorithm group has an undesired tendency to offer extremely short periods for lower-order bits of the generated numbers (for instance, the lowest bit often has a period of 2: it simply alternates between 0 and 1).
The Mersenne twister (MT) was also criticised for being difficult to implement, even though it generates fast and high quality numbers. It also stores the last 624 generated numbers. Knowing this
sequence can reveal all future iterates. Dungeon Crawl Stone Soup abandoned MT after a player demonstrated the ability to divine the generator state and cheat.
A PRNG's periodicity is often subject to criticisms as well. While congruential generators have a periodicity at most equal to 2^32 on most 32 bit platforms (although there are examples of better
periodicity: Java implements such a generator with a period of 2^48, and Donald Knuth's MMIX implementation has a period of 2^64), they often offer a ridiculously low RAND_MAX value, shortening the
period to 2^15 (for instance, MSVC). Such a low periodicity is unacceptable in many cases.
PRNG algorithms
Several PRNG algorithms have been devised. Here is a brief list:
Mersenne twister
A recently introduced family of PRNGs is known as the Mersenne twister. The most popular Mersenne twister is known as MT19937, which produces fast and high-quality random numbers. It has a long
period of exactly 2^19937 - 1, a Mersenne prime from which the algorithm derives its name. See the official site for more info about it.
MT19937 is currently the default PRNG in Python's random module.
A new and improved variant of the Mersenne twister from Hiroshima University can be found at http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
The Mersenne twister algorithm is not provided here due to its complexity.
Linear congruential generator
Linear congruential generators, or LCGs, are extremely fast and simple to implement, but they have received much criticism for producing poor-quality and easily predictable values. Still, they can be
useful when resources are limited and security is not an issue.
An LCG has three constant values: the modulo m, the multiplicative term a, and the additive term b. The iteration function is f(x) = ax + b mod m.
LCGs are very picky about the values m, a, and b; the programmer must make careful choices to obtain a maximal period (that is, m). The case b = 0 is called a Lehmer generator or a Park-Miller generator.
Below is an example C99/C++0X implementation.
#include <stdint.h>

struct linear_congruential_data
{
    uint32_t seed;
    uint32_t m; /* modulo */
    uint32_t a; /* multiplicative term */
    uint32_t b; /* additive term */
};

/* implements seed' = a*seed+b mod m without incurring unsigned wraparound */
inline struct linear_congruential_data iterLCG(struct linear_congruential_data src)
{
    uint64_t newseed = src.seed;
    newseed *= src.a;
    newseed += src.b;
    src.seed = newseed % src.m;
    return src;
}
The most well-known LCGs are shown below. All three are Lehmer generators.
Name     m           a      b
MINSTD   2147483647  16807  0
MINSTD2  2147483647  48271  0
MINSTD3  2147483647  69621  0
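As an illustration, MINSTD from the table can be sketched in a few lines of Python; starting from seed 1, the first outputs are 16807, 282475249 and 1622650073:

```python
M, A = 2147483647, 16807  # MINSTD: f(x) = 16807*x mod (2^31 - 1), with b = 0

def minstd(seed):
    while True:
        seed = (A * seed) % M
        yield seed

gen = minstd(1)
values = [next(gen) for _ in range(3)]
print(values)  # [16807, 282475249, 1622650073]
```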
Multiply-with-carry, or MWC, is a family of PRNGs devised by George Marsaglia. MWCs are known for being very fast and simple, along with producing high-quality values with very long periods (a
lag-256 MWC would have a period of 2^8222.)
An MWC has the following constants (all nonnegative integers):
• lag r
• base b
• multiplier a
• seeds x[0], x[1], x[2], ..., x[r - 1] < b
• initial carry c[r - 1] < a
The iteration function is x[n] = (ax[n - r] + c[n - 1]) mod b, where c[n] = ⌊(ax[n - r] + c[n - 1])/b⌋.
When using an MWC, it is only necessary to remember the last r values. Any old values can be erased.
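A minimal sketch of the MWC recurrence, using deliberately tiny toy parameters (r=1, b=10, a=6) so the arithmetic can be followed by hand; these are not sensible generator constants:

```python
def mwc(xs, carry, a, b):
    """Lag-r multiply-with-carry: t = a*x[n-r] + c[n-1]; x[n] = t mod b; c[n] = t // b."""
    while True:
        t = a * xs[0] + carry
        carry, x = divmod(t, b)   # new carry and new output in one step
        xs = xs[1:] + [x]         # keep only the last r values
        yield x

# Toy run: x0 = 2, c0 = 1 gives t = 13, 19, 55, 35 in turn.
gen = mwc([2], carry=1, a=6, b=10)
values = [next(gen) for _ in range(4)]
print(values)  # [3, 9, 5, 5]
```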
Complementary multiply-with-carry
If we take the MWC algorithm and modify the iteration function to x[n] = (b - 1) - ((ax[n - r] + c[n - 1]) mod b), we get an MWC variant known as the complementary multiply-with-carry (CMWC). It corrects a slight bias in the original MWC.
CMWCs require slightly more calculation than MWCs, but they have many more advantages.
The CMWC4096, described at http://school.anhb.uwa.edu.au/personalpages/kwessen/shared/Marsaglia03.html, has a near-record period of 2^131104. It is informally known as the Mother of All PRNGs, and
has the following parameters:
• r = 4096
• b = 2^32
• a = 18782
In libtcod, CMWC4096 replaced MT19937 as the default PRNG.
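A minimal sketch of the CMWC recurrence as given above, again with toy parameters (r=1, b=10, a=6) chosen only so the arithmetic is hand-checkable; real use would take the CMWC4096 constants and well-seeded state:

```python
def cmwc(xs, carry, a, b):
    """Complementary MWC: x[n] = (b-1) - (t mod b), with t = a*x[n-r] + c[n-1]."""
    while True:
        t = a * xs[0] + carry
        carry = t // b
        x = (b - 1) - (t % b)     # the complement step that distinguishes CMWC from MWC
        xs = xs[1:] + [x]
        yield x

gen = cmwc([2], carry=1, a=6, b=10)
values = [next(gen) for _ in range(3)]
print(values)  # [6, 2, 4]
```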
Generalised Feedback Shift Register
Also known as GFSR, it is a very fast generator with good randomness and relatively high periods. The basic concept is the following:
rn[n] = rn[n-A] XOR rn[n-B] XOR rn[n-C] XOR rn[n-D]
In this particular case (K=4), the period is ~2^1000. | {"url":"http://www.roguebasin.com/index.php?title=Random_number_generator&oldid=26489","timestamp":"2014-04-18T00:19:34Z","content_type":null,"content_length":"33760","record_id":"<urn:uuid:c2845112-1145-4101-8708-2ddc6f73155a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
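A toy sketch of the lagged-XOR recurrence, here with only two taps (K=2) and tiny seeds purely to show the mechanics; a real GFSR such as R250 uses word-sized seeds and lags like (250, 103):

```python
def gfsr(seeds, taps):
    """Lagged-XOR recurrence: x[n] = XOR of x[n - t] for each lag t in taps."""
    xs = list(seeds)
    while True:
        x = 0
        for t in taps:
            x ^= xs[-t]
        xs.append(x)
        yield x

gen = gfsr([1, 2, 3, 4, 5], taps=(2, 5))
values = [next(gen) for _ in range(4)]
print(values)  # [5, 7, 6, 3]
```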
Have mathematicians finally discovered the hidden relationship between prime numbers?
Okay, math lovers, this is the one you've been waiting for: Shinichi Mochizuki of Kyoto University in Japan is claiming to have found proof (divided into four separate studies with 500+ pages) of the
so-called abc conjecture, a longstanding problem in number theory which predicts that a relationship exists between prime numbers. The tricky part? Now other mathematicians need to dig into his
extensive work, and confirm that he's right.
Now, because I failed grade 9 math, I'm going to let Philip Ball of Nature News explain this one to you:
Like Fermat's theorem, the abc conjecture refers to equations of the form a+b=c. It involves the concept of a square-free number: one that cannot be divided by the square of any number greater than 1. Fifteen and 17 are square-free numbers, but 16 and 18 - being divisible by 4^2 and 3^2, respectively - are not.
The 'square-free' part of a number n, sqp(n), is the largest square-free number that can be formed by multiplying the factors of n that are prime numbers. For instance, sqp(18)=2×3=6.
If you've got that, then you should get the abc conjecture. It concerns a property of the product of the three integers a×b×c, or abc - or more specifically, of the square-free part of this product, which involves their distinct prime factors. It states that for integers a+b=c, the ratio sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c = 900/128. In this case, in which r=2, sqp(abc)^r/c is nearly always greater than 1, and always greater than zero.
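The sqp function described above (the "radical" of n) can be sketched directly, reproducing the worked example:

```python
def sqp(n):
    """Square-free part (radical) of n: the product of its distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result *= n
    return result

a, b = 3, 125
c = a + b                 # 128
radical = sqp(a * b * c)  # 48000 = 2^7 * 3 * 5^3, so sqp = 2 * 3 * 5 = 30
print(radical, radical**2, c)  # 30 900 128 -- i.e. sqp(abc)^2/c = 900/128
```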
If you don't get any of that or what Mochizuki has done, don't worry — many mathematicians don't either. And in fact, Mochizuki is considered somewhat of a genius and a guy who's in a league of his
own. He thinks in terms of mathematical 'objects' — abstract entities like geometric objects, sets, permutations, topologies, and matrices. Ball quotes mathematician Dorian Goldfeld as saying, "At
this point, he is probably the only one that knows it all."
Read more at Nature News, and check out the four studies — if you dare: I, II, III, IV.
Image via Shutterstock.com/ronstik.
Powder Springs, GA Statistics Tutor
Find a Powder Springs, GA Statistics Tutor
...In addition, I have tutored hundreds of students preparing for the ACT and SAT entrance exams. I have taught these study skills for more than 20 years as an educator. I have taught in the RESA
psychiatric/special needs program in Georgia for 8 years.
47 Subjects: including statistics, reading, English, biology
...I've taught high school and college algebra for over 10 years. Algebra 2 was one of the first high school math classes I taught and I continue to teach the same concepts in college. Whether
Algebra or "Math 1/2" or CCGPS, the algebra remains the same.
13 Subjects: including statistics, calculus, geometry, algebra 1
I have been tutoring at the college level for more than three years at Georgia Gwinnett College. I enjoy the interaction with students, and definitely prefer the challenge of working on problems
with the higher-level material such as calculus 1 & 2, linear algebra, discrete math, operations managem...
13 Subjects: including statistics, calculus, SAT math, ACT Math
...Not only am I a certified ESL teacher (by the London Teacher Training School), I have been a student of 4 foreign languages (Arabic, French, Spanish and Chinese) myself - so I know what it is
like to learn a new language and to face the challenge of communicating in a language that is not one's m...
14 Subjects: including statistics, Spanish, geometry, algebra 1
...This includes significant experience learning and teaching clients new ways of using their technologies, like Excel and PowerPoint. While at Georgia Tech, I spent significant time tutoring in
the College of Business for all Accounting courses, and some statistics. I also tutored a variety of subjects for our varsity athletes on behalf of the GT Athletics Association.
49 Subjects: including statistics, reading, English, writing
Is every model of ZF countable "seen from the outside"?
I'm not sure if my question makes sense, but it would also be interesting to know if it didn't, so I will ask anyway:
There exists a countable model of ZF. This means (if I understand it correctly) that we can find a model of ZF in ZF, such that the set of elements in the model is countable seen from the outer ZF.
My question is: Can every model of ZF be "seen from the outside" in a way that makes it countable?
It seems to me that if we have a model A of ZF, the model will be an element in a countable model B of ZF. Now, if you look at B from "the outside", A is countable.
2 Answers
Technically, a model of ZF consists of a set with some relation, representing "being an element of". So the literal answer is no. An uncountable model is simply uncountable. Your
argument that one can see every model as a model in a model which is countable doesn't work. Cardinality in a model depends on whether there exist certain bijections in the model.
This has nothing to do with "outside cardinality".
However, the first-order statements true in some model of ZF are true in some countable model, by the Löwenheim-Skolem theorem. Since we are usually not interested in differences between models that cannot be formulated in the first-order theory of sets, we can say that every model of ZF has a countable model that is, for practical purposes, equivalent.
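For reference, the relevant (downward) Löwenheim-Skolem statement — a standard theorem, only alluded to in the answer — says that over a countable first-order language,

```latex
\mathcal{M} \text{ infinite} \;\Longrightarrow\; \exists\, \mathcal{N} \preceq \mathcal{M}
\text{ with } |\mathcal{N}| = \aleph_{0},
```

so every model of ZF has a countable elementary submodel, satisfying exactly the same first-order sentences.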
Michael Greinecker's answer is correct. But there is another subtler weaker sense in which what you asked for could be true.
Namely, the method of forcing shows that every set that exists in any model of set theory can become countable in another larger model of set theory, the forcing extension obtained by
collapsing the cardinality of that set to ω. Thus, the concept of countability loses its absolute meaning; whether a set is countable or not depends on the set theoretic background.
So if X is any set, then in some forcing extension of the universe, the set X becomes countable.
In particular, this will be true when X is itself a model of set theory.
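For concreteness, the forcing meant here can be taken to be the standard collapse (a textbook definition, added for reference, not spelled out in the answer):

```latex
\operatorname{Coll}(\omega, X) \;=\; \{\, p \mid p \text{ is a finite partial function } \omega \rightharpoonup X \,\},
\qquad p \le q \iff p \supseteq q,
```

whose generic filter unions to a surjection $\omega \twoheadrightarrow X$, so that $X$ becomes countable in the extension.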
There are various ways of viewing the nature of existence of these forcing extensions, among them Boolean-valued models and their quotients, the Boolean ultrapowers, and I refer you to the set-theoretic literature. If one uses the Boolean ultrapower, then stating the theorem from inside V, one attractive way to describe the situation is as follows:
Theorem. If V is the universe of sets and X is any particular set, then there is a class model (W,E) of set theory, and an elementary embedding of V into a submodel W[0] of W, such that
the image of X in W is thought by W to be countable.
Basically, the model W[0] is the Boolean ultrapower of V by the forcing B to collapse X, using any ultrafilter on B, and W is the quotient of the Boolean valued model V^B. The elementary
embedding maps every object y to the equivalence class of the check name of y.
Terminology question on covering spaces
I'm teaching an elementary class about fundamental groups and covering spaces. It was very useful to use "fool's covering spaces" of a space $X$, defined as functors $\Pi_1(X)\to Sets$, where $\Pi_1
(X)$ is the fundamental groupoid of $X$. In a more "covering space way", a fool's covering space can be described as a set $Y$, a map $p:Y\to X$, and a map $p^{-1}(x_1)\to p^{-1}(x_2)$ for any path
between $x_1, x_2\in X$, satisfying the obvious properties.
Is there a standard name for "fool's covering spaces"? Calling them "functors $\Pi_1(X)\to Sets$ " is a bit heavy for the class.
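A concrete instance that may help in class (a standard example, added for illustration): for the circle, the fundamental groupoid is equivalent to the one-object groupoid on $\mathbb{Z}$, so

```latex
\Pi_1(S^1) \simeq B\mathbb{Z},
\qquad
\operatorname{Fun}\big(\Pi_1(S^1), \mathbf{Sets}\big) \;\simeq\; \mathbb{Z}\text{-}\mathbf{Sets},
```

and the connected $n$-fold cover corresponds to $\mathbb{Z}$ acting on $\mathbb{Z}/n\mathbb{Z}$ by $k \mapsto k+1$.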
at.algebraic-topology homotopy-theory terminology teaching
What exactly is the covering space among your "fool's covering spaces"? I'm confused as to why you're talking about sets rather than spaces and maps. – Ryan Budney Dec 15 '10 at 13:36
@Ryan: If $p: \hat{X} \to X$ is the universal covering space, then $\Pi_1(X)$ acts on $p$ in the category of etale spaces over $X$, and given a functor $F: \Pi_1(X) \to Set$, the corresponding
covering space is obtained as the tensor product $p \otimes_{\Pi_1(X)} F$. @Trial: why "fool's"? If we relax the usual surjectivity condition of covering spaces and allow empty fibers, isn't the
category of covering spaces over $X$ (for nice $X$) equivalent to the topos of functors $\Pi_1(X) \to Set$? – Todd Trimble♦ Dec 15 '10 at 14:08
Sorry for being unclear. Any covering space is also a "fool's covering space". For locally path-connected and semilocally 1-connected spaces the two notions are equivalent (this is considered a "difficult theorem" in the course). In a fool's covering space the set $Y$ is just a set, with no topology (if it's unclear, take "functor $\Pi_1(X)\to Sets$" as a definition of fool's covering space, and forget the other description). – Pavol S. Dec 15 '10 at 14:15
I think it would be misleading to mistake the functor you're talking about with the associated covering space. Generally I'm not aware of a standard name for this process. I suppose I'd call it the monodromy classification of covering spaces or the action of $\pi_1$ on the fibre, or something like that -- I don't think this categorical perspective is much more than a "repackaging" of a classical theorem, so I just call these things by their classical names. – Ryan Budney Dec 15 '10 at 14:47
@Ryan: here are 2 pedagogical reasons for giving "fool's covering spaces" independent life and name: 1. the fact that they are equivalent to ordinary covering spaces (if the base is nice) can be described by lifting the topology from $X$ to $Y$, which is an easy operation. 2. Using the classification of covering spaces (for nice base), van Kampen theorem (for nice spaces) is the trivial statement that if something is locally (in $X$) a covering space then it is a covering space. Without this classification and for arbitrary spaces it is the same locality statement for fool's covering spaces. – Pavol S. Dec 15 '10 at 16:13
2 Answers
"Fool's covering spaces" are very close to overlays of R. H. Fox (see this paper in the first place and also this one), which I think are still better: they retain all nice properties of
"fool's covering spaces" and have additional ones. An equivalent (see "Steenrod homotopy", Lemma 7.3 or Mardesic-Matijevic) definition of an overlay is that it is
a covering that is induced from some covering over a polyhedron (or equivalently from some covering over a locally connected semi-locally simply-connected space).
Fox's original (equivalent) definition is that it is
a map $p:Y\to X$ such that there exists a cover $\{U_\alpha\}$ of $X$ satisfying
(i) each $p^{-1}(U_\alpha)=\bigsqcup_\lambda U_\alpha^\lambda$, where each $p$ restricted over $U_\alpha^\lambda$ is a homeomorphism onto $U_\alpha$; and
(ii) if $U_\alpha^\lambda\cap U_\beta^\mu$ and $U_\alpha^\lambda\cap U_\beta^\nu$ are both nonempty, then $\mu=\nu$.
Condition (i) of course amounts to a definition of a covering in the usual sense.
A third definition of overlays is by their monodromy. $d$-Sheeted overlays over a connected base $X$ (possibly $d=\infty$) are identified with
the homotopy set $[X,BS_d]$.
This is essentially the monodromy classification theorem of Fox; for a shorter proof and the above formulation see "Steenrod homotopy", Theorem 7.4. Another reformulation: overlays are
functors $pro$-$\Pi_1(X)\to Sets$, where $pro$-$\Pi_1$ is the fundamental pro-groupoid.
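For comparison, over a connected, locally path-connected, semi-locally simply-connected base the classical monodromy classification (standard, stated here only for reference) reads:

```latex
\{\, d\text{-sheeted coverings of } X \,\}/\cong \;\;\longleftrightarrow\;\;
\operatorname{Hom}\big(\pi_1(X, x_0), S_d\big) \,/\, \text{conjugacy}.
```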
This is due to Hernandez-Paricio (but note that his claim that Fox did his theory only for finite-sheeted overlays is not only incorrect but misleading; in fact, for finite-sheeted ones
Fox shows that they reduce to coverings). I'm not fully happy with the pro-groupoid definition because a pro-groupoid is a whole diagram of groupoids. I would prefer something like "overlays are functors $\Pi_1\to Sets$, where $\Pi_1$ is the topologized Steenrod fundamental groupoid (which combines Steenrod $\pi_0$ and Steenrod $\pi_1$)". Such formulation is possible, at least, in a special case (see Corollary 7.5 in "Steenrod homotopy"). Over a base that is compact and Steenrod-connected (aka "pointed 1-movable"; in particular, this includes compact
spaces that are connected and locally connected), overlays are identified with functors $\check\pi_1(X)\to Sets$, where $\check\pi_1$ is the topologized Cech (or Steenrod) fundamental
group. Note that $\check\pi_1(X)=\pi_1(X)$ if $X$ is locally connected and semi-locally simply-connected.
Finally, I should mention that over a compact (metric) base, overlays can also be defined (Theorem 7.6 in "Steenrod homotopy") as
coverings in the category of uniform spaces.
Such uniform coverings have been studied by I. M. James in his book "Introduction to Uniform spaces"; see Brodsky-Dydak-Labuz-Mitra for a clarification of James' definition (the latter
paper also has some relevant followups). This is really saying that overlays are precisely those coverings for which a metric on the base can be "lifted" to a metric in the total space.
(Note that the compact base has a unique uniformity: as everyone might remember, every continuous function on a compact space is uniformly continuous.)
DISCLAIMER: Following Fox, I have been assuming all spaces to be metrizable :) It is known that this is not a real restriction, and everything extends to arbitrary spaces, perhaps with
minor modifications (see Mardesic-Matijevic's paper, which also has many additional references about overlays; also the papers by Dydak-et-al. and Hernandez-Paricio may be relevant to this
point). However, I prefer being ignorant of the non-metrizable world and so don't follow these modifications or whether they are needed.
SUMMARY: For purposes of proving something about coverings of locally connected semi-locally simply-connected spaces, usual coverings work fine. For purposes of proving anything in topology
beyond these restrictions, you would definitely need overlays, rather than "fool's covering spaces". But admittedly overlays are slightly harder to define. Thus for purposes of defining a
formal concept which agrees with coverings for "nice" spaces and is not intended to be used for proving anything beyond "nice" spaces, "fool's covering spaces" suit well; I would call them
e.g. path-overlays.
By the way, I like the idea about the Seifert-van Kampen theorem; I think if combined with overlays, it should give a Seifert-van Kampen theorem in Steenrod homotopy, which would be an
interesting result.
I asked a terminology question, and instead I learned something very interesting. Thanks a lot! – Pavol S. Dec 15 '10 at 22:56
Sorry, what I first attributed to Hernandez-Paricio was not exactly what he proved. – Sergey Melikhov Dec 16 '10 at 1:02
In the Algebraic Topology literature, what you describe would be called a local system of sets on $X$. In general a local system on a space $X$ is a covariant functor from $\Pi_1(X)$ to
some category.
In Steenrod's definition of homology with local coefficients, he uses a local system of abelian groups as coefficients (ordinary homology corresponds to the constant functor). This is explained nicely in G. W. Whitehead's book Elements of Homotopy Theory.
This is perhaps not exactly what I wanted: a local system should be a functor from the Cech fundamental groupoid, while I was asking about the usual (path) fundamental groupoid (e.g.
the application to Van Kampen theorem was meant for the usual (path) fundamental group). – Pavol S. Jan 3 '11 at 20:53
@Trial - well, I sabotaged my own answer by the link I chose, which discusses the more modern, algebro-geometric usage of the term. But the original topological definition uses the usual groupoid (see the References section half way down the linked page). – Mark Grant Jan 3 '11 at 22:02
Fundamental Theorem of calculus
April 2nd 2009, 03:50 PM #1
Apr 2009
Fundamental Theorem of calculus
1. The problem statement, all variables and given/known data
Find a function f : [-1,1] ---> R such that f satisfies the following properties:
a) f is continuous
b) f restricted to (-1,1) is differentiable
c) its derivative f' is not differentiable on (-1,1)
2. Relevant equations
3. The attempt at a solution
I kinda think that the mean value theorem and Theorem 2 of the fundamentals, $\int_a^b f(x)\,dx = F(b)-F(a)$, got some link but I can't seem to get it. I do understand that for f'' not to exist, f''(x) should be undefined on (-1,1). Please help.
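One function meeting all three conditions (a standard example, supplied here for illustration; it is not taken from the thread):

```latex
f(x) = x\,|x| = \begin{cases} x^{2}, & x \ge 0,\\ -x^{2}, & x < 0, \end{cases}
\qquad
f'(x) = 2|x|,
```

which is continuous on $[-1,1]$ and differentiable on $(-1,1)$, while $f'(x) = 2|x|$ has a corner at $x = 0$, so $f''(0)$ does not exist.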
1. a game commonly used for low-stakes gambling, in which numbered balls or slips are drawn at random and players cover the correponding numbers on their cards, called Bingo cards, which have square
arrangement of such numbers. Each card has a different arrangement of the numbers, and the first player to cover all numbers in one row (horizontal, vertical, or diagonal) is the winner, usually
announcing that fact by a cry of “Bingo!”
Variants of the game may require that all peripheral numbers are covered, to form a box, or other figure. The numbers usually have one letter from the group “B”, “I”, “N”, “G”, and “O”, plus two
digits. The “cards” may be disposable sheets of paper on which the numbers are printed. | {"url":"http://www.question.com/dictionary/bingo.html","timestamp":"2014-04-16T10:21:12Z","content_type":null,"content_length":"9474","record_id":"<urn:uuid:d70437ec-97e5-4568-98e1-983d25b4f41e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Green, CA Algebra 2 Tutor
Find a Green, CA Algebra 2 Tutor
...It also introduces matrices and their properties. Knowledge of Algebra 2 is important for success on both the ACT and college mathematics entrance exams. American history is one of the few
subjects that American students are expected to study throughout most their educational careers.
27 Subjects: including algebra 2, reading, English, GED
...I have been playing flute/taking lessons on flute for nine years and will continue for the rest of my life. As a freshman in high school I auditioned for the highest band and made fourth chair
(two senior and junior were ahead of me). I continued to move up in wind ensemble as the years continued. Also I have played jazz flute as long as I have played classical.
18 Subjects: including algebra 2, chemistry, physics, calculus
...I have been in banking for 12 years and enjoy working with people and helping them succeed. I have tutored algebra at Los Angeles City College, and I have also tutored children in 8th grade
math at Thomas Starr King Middle School in Los Angeles, CA. I have done private tutoring for the comprehensive high school exit exam as well.
65 Subjects: including algebra 2, reading, Spanish, English
...I have always loved math, nerdy as that may be, and I was disappointed to find that it did not fit into my curriculum as a theatre major. I was very excited when I found the opportunity to
begin tutoring math so that I could hopefully spread my love of the subject to other students. I believe t...
9 Subjects: including algebra 2, geometry, statistics, algebra 1
...I studied Dyslexia's causes, symptoms and treatments during graduate school. I have a doctoral degree in Child and Adolescent Clinical Psychology (Psy.D.) from a program fully accredited by
the American Psychological Association. I have ample experience tutoring students with a variety of learning differences.
44 Subjects: including algebra 2, English, reading, writing
Bulletin of the Iranian Mathematical Society
Volume & Issue:
Volume 37, No. 1 (April 2011)
1 Extensions of Baer and quasi-Baer modules
Page 1-13
E. Hashemi
2 On the denseness of the invertible group in uniform Frechet algebras
Page 15-27
T. Ghasemi Honary; M. Najafi Tavani
3 A characterization of triple semigroup of ternary functions and De Morgan triple semigroup of ternary functions
Page 29-41
J. Pashazadeh; Yu. M. Movsisyan
4 Modified Noor iterations for infinite family of strict pseudo-contraction mappings
Page 43-61
L.-P. Yang
5 Multipliers of generalized frames in Hilbert spaces
Page 63-80
A. Rahimi
6 On the average number of sharp crossings of certain Gaussian random polynomials
Page 81-92
S. Rezakhah; S. Shemehsavar
7 On images of continuous functions from a compact manifold to Euclidean space
Page 93-100
R. Mirzaie
8 φ-factorable operators and Weyl-Heisenberg frames on LCA groups
Page 101-113
R. A. Kamyabi Gol; R. Raisi Tousi
9 Lower bounds of Copson type for the transpose of matrices on weighted sequence spaces
Page 115-126
R. Lashkaripour; G. Talebi
10 Multiplicative bijective maps on standard operator algebras
Page 127-130
M. B. Asadi
11 A note on the comparison between Laplace and Sumudu transforms
Page 131-141
A. Kilicman; K. A. M. Atan; H. Eltayeb
12 A composite explicit iterative process with a viscosity method for Lipschitzian semigroup in a smooth Banach space
Page 143-159
P. Katchang; P. Kumam
13 n-cyclicizer groups
Page 161-170
L. Mousavi
14 An effective optimization algorithm for locally nonconvex Lipschitz functions based on mollifier subgradients
Page 171-198
N. Mahdavi-Amiri; R. Yousefpour
15 Dual multiwavelet frames with symmetry from two-direction refinable functions
Page 199-214
Y. Li; Sh. Yang
16 Linear preserving gd-majorization functions from M_{n,m} to M_{n,k}
Page 215-224
A. Armandnejad; H. Heydari
17 Some equivalence classes of operators on B(H)
Page 225-233
T. Aghasizadeh; S. Hejazian
18 Approximating fixed points of generalized nonexpansive mappings
Page 235-246
A. Razani; H. Salahifard
19 Iterative algorithms for families of variational inequalities fixed points and equilibrium problems
Page 247-268
S. Saeidi
20 New complexity analysis of a full Nesterov-Todd steps IIPM for semidefinite optimization
Page 269-286
H. Mansouri; M. Zangiabadi | {"url":"http://bims.iranjournals.ir/?_action=article&vol=42&issue=43&_is=Volume+37%252525252C+No.+1+(April+2011)","timestamp":"2014-04-16T14:06:02Z","content_type":null,"content_length":"32050","record_id":"<urn:uuid:e6258c30-d322-4519-acb2-bee47d81534d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear and homogeneous, differences...
March 18th 2011, 02:52 AM #1
Junior Member
Oct 2010
Linear and homogeneous, differences...
I was wondering if perhaps someone here could give me a hand sorting out what's what between some different classes of diff. equations:
My basic understanding of a homogeneous diff is that it can be written in a form of the type (for first order)
y'(x) + p(x)y(x) = 0
where being equal to zero was of prime concern. Recently though I've understood that this isn't a complete understanding at all. I recently heard that the basic definition of a homogeneous diff is in fact that you can express dx/dy as a function of x/y; that is, the variables scale. And then you have the option of solving using a variable substitution, x/y = v for example.
So yeah, currently I'm feeling a little bit lost in between linear, homogeneous and linear homogeneous. If anyone feels like giving a bit of an explanation, or perhaps linking to some good source of disambiguation, as it were, I'd be very grateful!
"Homogeneous" can mean either that there are no terms that do not contain y or one of its derivatives, or it can mean that the coefficients of all differentials are homogeneous functions of the
same degree, depending on the context. Why not take a look at Chris's DE Tutorial?
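To put the two usages side by side (an illustrative summary of the reply above, using the notation from the question):

```latex
\text{(i) linear homogeneous: } y'(x) + p(x)\,y(x) = 0 \quad\text{(no term free of } y\text{)};
\\[4pt]
\text{(ii) homogeneous-type: } \frac{dy}{dx} = F\!\left(\frac{y}{x}\right),
\quad\text{solved via } v = \frac{y}{x},\; \frac{dy}{dx} = v + x\,\frac{dv}{dx}.
```

In case (ii) the substitution turns the equation into a separable one in $v$ and $x$.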
Actually, that one line was exactly what I needed to click things into place; just a clear-cut sum-up of when the term homogeneous is applicable. Thanks a lot!
You're welcome. Have a good one!
Basic Set Theory Proofs
December 19th 2011, 07:19 PM #1
Dec 2011
Basic Set Theory Proofs
I'm taking Topology next semester and, to prepare, I'm learning a little set theory. I had bought a Topology book from Dover Publications a couple years ago and am now starting to do some
problems in it. The very first section is on Set Theory. Here's the first couple problems:
1. S ⊂ T, then T - (T - S) = S.
Let x ∈ (S ⊂ T). Therefore, x ∉ (T – S). If x ∉ (T – S), then x ∈ T – (T – S). And x ∈ (S ⊂ T). Therefore, S ⊂ (T - (T - S))
Let x ∈ (T - (T - S)). This implies that x ∉ (T – S). Which implies that x ∈ S. Therefore, T – (T – S) ⊂ S and T - (T - S) = S.
2. S ⊂ T iff S ∩ T = S.
If S ⊂ T, every x ∈ S is in T. That implies that every x ∈ S is in S ∩ T. This implies that S ∩ T = S.
Let x ∈ S. If S ∩ T = S, then x ∈ (S ∩ T). This implies that S is contained in T and S ⊂ T.
I could really use some direction/correction.
Re: Basic Set Theory Proofs
First, the symbol ⊂ is somewhat ambiguous because it may mean either proper or improper subset in different sources. To clear the ambiguity, ether use ⊆ for improper subset and ⊊ for proper
subset or describe your notation in words.
An object x cannot belong to (S ⊂ T) because (S ⊂ T) is a proposition (something that is either true or false), not a set. It is sometimes possible to write x ∈ S ⊆ T as a contraction to "x ∈ S
and S ⊆ T," but such shortcuts are better avoided in the beginning.
Only if x ∈ T, which, granted, is apparently the case here.
Or x ∉ T.
Strictly speaking, it just implies that S ∩ T ⊇ S. The other inclusion is obvious, but at this high level of proof detail maybe it should be said explicitly.
Re: Basic Set Theory Proofs
If you have already studied the concept of universal set $U$ , the complementary $M^c$ of a subset $M\subset U$, distributive and Morgan's laws, etc you can prove the equality in another way.
Choosing as universal set any set $U$ such that $S\cup T\subset U$ (in this case for example $U=T$) we have $M-N=M\cap N^c$ for $M,N\subset U$ . So,
$T-(T-S)=T\cap (T-S)^c=T\cap (T\cap S^c)^c=T\cap (T^c\cup S)=$
$(T\cap T^c)\cup(T\cap S)=\emptyset \cup (T\cap S)=T\cap S=S$
Re: Basic Set Theory Proofs
If you have already studied the concept of universal set $U$ , the complementary $M^c$ of a subset $M\subset U$, distributive and Morgan's laws, etc you can prove the equality in another way.
Choosing as universal set any set $U$ such that $S\cup T\subset U$ (in this case for example $U=T$) we have $M-N=M\cap N^c$ for $M,N\subset U$ . So,
$T-(T-S)=T\cap (T-S)^c=T\cap (T\cap S^c)^c=T\cap (T^c\cup S)=$
$(T\cap T^c)\cup(T\cap S)=\emptyset \cup (T\cap S)=T\cap S=S$
I have studied the concept of a universal set (Prob & Stat). I just didn't think I could use it here. Now that I think about it, the way my book describes it is this: T - S is "the compliment of
S in T" which is the same as saying T ∩ S’. (I use apostrophe instead of C. Habit from Prob & Stat.)
Re: Basic Set Theory Proofs
I have studied the concept of a universal set (Prob & Stat). I just didn't think I could use it here. Now that I think about it, the way my book describes it is this: T - S is "the compliment of
S in T" which is the same as saying T ∩ S’. (I use apostrophe instead of C. Habit from Prob & Stat.)
Well, in that case you can use the alternative way I provided (if you have previously covered distributive laws, etc).
December 20th 2011, 12:50 AM #2
MHF Contributor
Oct 2009
December 20th 2011, 04:52 AM #3
December 20th 2011, 06:21 AM #4
Dec 2011
December 20th 2011, 08:08 AM #5 | {"url":"http://mathhelpforum.com/discrete-math/194502-basic-set-theory-proofs.html","timestamp":"2014-04-17T01:28:29Z","content_type":null,"content_length":"52662","record_id":"<urn:uuid:bd6e7c39-15f7-4509-a310-fca8b0c3d7d8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to solve the side of quadrilateral with circle inside
How is this possibly done? Please help... I'm trying but I could not get the answer. Thanks a lot.
A quadrilateral can have an inscribed circle if and only if AD + BC = AB + CD: the sums of opposite sides are equal.
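The equality comes from the equal tangent lengths to the incircle (a standard argument, sketched here for illustration; it is not spelled out in the thread):

```latex
AB + CD = (a+b) + (c+d) = (b+c) + (d+a) = BC + DA,
```

where $a, b, c, d$ are the tangent lengths from the vertices $A, B, C, D$: the two tangent segments from any one vertex are equal, so each side splits as a sum of two tangent lengths.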
I made a GSP sketch of this. I uploaded a pdf file. If you want to find the length of the side, and if it's a general quadrilateral, then I don't think you have enough information. I'd say you need
to know the radius of the circle and something about the central angles (between the radii that lead to the vertices of the quadrilateral). If I know it's a square, then I can figure out the side
length in terms of the radius. Maybe I'm missing something. It's true what coolge says, but I thought the problem was to find the lengths of all four sides in terms of the radius.
Last edited by zhandele; November 30th 2012 at 07:28 AM. | {"url":"http://mathhelpforum.com/geometry/208764-how-solve-side-quadrilateral-circle-inside.html","timestamp":"2014-04-16T07:01:03Z","content_type":null,"content_length":"35895","record_id":"<urn:uuid:a2d12b09-d033-4d2c-815f-4a6db342815a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help needed on IMO 1990 question
March 26th 2009, 09:46 PM #1
Mar 2009
Help needed on IMO 1990 question
(IRN 2) Let S be a set with 1990 elements. P is the set such that its elements are the ordered
sequences of 100 elements of S. Knowing that any ordered element pair of S appears at most in
one element of P. (If x = (…a…b…), then we call ordered pair (a, b) appeared in x.) Prove
that P has at most 800 elements.
Hello muditg
It would be helpful if we could see the exact wording of the original question, since this one, as you have punctuated it, does not have a very clear meaning.
The words I have marked in red do not form a complete sentence, so I can only guess that this should perhaps be part of the definition of the set $P$.
Although I have some thoughts which may be helpful, I can't see quite where the $800$ comes from. So I am open to corrections or further suggestions/information. My thoughts are these:
The number of ordered pairs, $(a, b), a < b$, that can be formed from the elements of the set S is $\tfrac{1}{2}1990 \times 1989$.
Each element $x \in P$ is an ordered sequence made up of $100$ elements of S. The number of ordered pairs $(a,b), a<b$, that can be made from $100$ elements of S is $\tfrac{1}{2}100\times 99$. So
(if I am understanding the question correctly) this is the number of ordered pairs of $S$ that each element $x$ will 'consume', if each ordered pair can be used at most once in this way. So it
would appear that a maximum value on the number of possible elements $x \in P$ is:
$\frac{\tfrac{1}{2}1990 \times 1989}{\tfrac{1}{2}100 \times 99} = 399.8$
This appears to show that $|P| < 400$, not the $800$ of which the question speaks.
Am I misinterpreting something? Is there something wrong with my reasoning?
I copied and pasted that question from IMO 1990 longlist verbatim (see attachment). However, the same question appears in a text on Combinatorics in a different, and I hope clear, language. It is
question number 106 on the image of the page attached with this message.
Sorry, I pressed the submit button too early. Attachments are with this message. (See problem no. 33 in the longlist for the problem in its original wordings.)
Hello muditg
Thanks for clarifying that you have stated the question exactly as it was set on the paper. I note that it was translated from the Chinese. Perhaps something has been lost in the translation!
It somehow looks correct to me
An example
I consider S having 6 elements and P as 3-ary
S= {A1,A2,A3,A4,A5,A6}
p={ (A1,A2,A3), (A1,A5,A6),.......}
It means that Ai & Aj can be used only once.
In the P I have made, (A1,A3,A4) won't be a member
as A1 & A3 appeared earlier in (A1,A2,A3)
If this was the question then we had to prove that
no. of elements in p <= something (was 800 in question)
Please correct me if this was not similar to the question
(IRN 2) Let S be a set with 1990 elements. P is a set whose elements are ordered
sequences of 100 elements of S. Given that any ordered element pair of S appears at most in
one element of P (if x = (…a…b…), then we say that the ordered pair (a, b) appeared in x), prove
that P has at most 800 elements.
[I have edited the wording to try to clarify what I think the question is asking.]
The number of ways of selecting an ordered pair from an ordered set of 100 elements is ${\textstyle{100\choose2}}=4950$. The number of ways of selecting an ordered pair from an (unordered) set of
1990 elements is $1990\times1989=3958110$. But $\tfrac{3958110}{4950} = 799.618\ldots$, so there can be at most 799 of the 100-element sets with all their ordered-pair subsets disjoint.
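The arithmetic in this last step can be checked mechanically (an illustrative sketch, not part of the original thread; the variable names are made up):

```python
from math import comb

pairs_per_sequence = comb(100, 2)   # index pairs (a before b) inside one 100-term sequence
total_ordered_pairs = 1990 * 1989   # ordered pairs (a, b), a != b, from the 1990-element set

assert pairs_per_sequence == 4950
assert total_ordered_pairs == 3958110
# each element of P "consumes" 4950 pairs, so at most floor(3958110 / 4950) of them fit
assert total_ordered_pairs // pairs_per_sequence == 799
```

Since 4950 × 800 = 3,960,000 exceeds 3,958,110, the integer bound is indeed 799.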
st: Re: about MLE of exponential distribution
From: Maarten Buis <maartenlbuis@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: about MLE of exponential distribution
Date: Wed, 7 Mar 2012 17:45:15 +0100
On Wed, Mar 7, 2012 at 4:40 PM, mahadeb poudel wrote me privately:
> I am using a six probability distribution function namely Normal, Beta,
> Gamma, Weibull, Lognormal, and Exponetial for the yield risk analysis of
> rice in my PhD dissertation research. I want to estimate the parameters of
> each distribution by using Maximum likelihood method. For this, I am using
> log likelihood estimation method. So, far I can estimate the parameters
> of Normal, Gamma, Weibull, and Lognormal, however I can not estimate the
> parameters of Beta and Exponential distribution. The exponential
> distribution I have applied is
> program define expon
> 1.version 11.0
> 2. args lnf lamda
> 3. quietly replace `lnf'= ln(`lamda ')-(`lamda')*($ML_Y1)
> 4. end
> ml model lf expon (theta:nepn=time timesq)
> ml search
> ml maximize
> I always get: invalid syntax r(198)
> I am suffering by this problem. Therefore, Could you please help me to solve
> the problem?
Questions like these should not be sent privately, but instead to the
statalist. Reasons for that are discussed here:
I would not try to program that yourself, as it has already been done.
For the normal/Gaussian distribution you can just use -regress- or
-glm-, for the beta, gamma, Weibull, and lognormal you can download
the -betafit-, -gammafit- -weibullfit-, -lognfit- packages from SSC,
see: -help ssc-.
With the appropriate constraints you can use -gammafit- to estimate an
exponential distribution. In that case you don't want to use the
-alphavar()- option and constrain the constant of the alpha equation
to 1, and typically you parametrize an exponential distribution in
terms of a rate, which you can get by specifying the -alt- option:
constraint 1 [alpha]_b[_cons] = 1
gammafit varname, alt constr(1)
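As a cross-check on whatever software is used: for the exponential density λe^(−λx), the log-likelihood is n·ln(λ) − λ·Σxᵢ, which is maximized in closed form at λ̂ = 1/x̄, the reciprocal of the sample mean. A quick numerical illustration (in Python rather than Stata, purely for verification):

```python
import math
import random

def exp_loglik(lam, xs):
    """Exponential log-likelihood: n*ln(lam) - lam*sum(x)."""
    return len(xs) * math.log(lam) - lam * sum(xs)

rng = random.Random(7)
xs = [rng.expovariate(2.0) for _ in range(5000)]  # simulated sample, true rate 2
lam_hat = len(xs) / sum(xs)                       # closed-form MLE: 1 / sample mean
```

Here `lam_hat` lands close to the true rate of 2, and evaluating `exp_loglik` at nearby candidate rates confirms it is the maximizer.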
-- Maarten
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Longwood University
2006 - 2007 Academic Year
Abstracts & Biographies
Date: 9/19/2006 (Tuesday)
Speaker: Dr. Chris Gennings
Department of Biostatistics
Virginia Commonwealth University
Title: A Gestalt Index: Use Of Desirability Functions In Evaluation Of Multiple Measurements per Experimental Subject
Abstract: Multiple endpoints are often measured on each subject in both toxicology and clinical studies. If each endpoint is analyzed separately, the risk of claiming an effect when none exists
increases. One approach to analyzing multiple endpoints in an integrated way is through the use of a composite score. The use of desirability functions is described as a way of deriving an overall
score that uses information from each of the outcomes. This methodology is commonly used in the quality engineering literature but has not been applied to toxicology or clinical data. The approach is
demonstrated through the analysis of a mixture of organophosphorus pesticides where a threshold was estimated when evaluating five measures of neurotoxicity. Application of the approach to patient
data from a clinical study will also be illustrated.
Bio:Dr. Chris Gennings earned her B.A. in mathematics at Westhampton College of the University of Richmond in 1982 and her Ph.D. in biostatistics from the Medical College of Virginia of Virginia
Commonwealth University in 1986. She has been on the faculty in the Department of Biostatistics at VCU since graduating from there. Her research interests include experimental design and analysis
methods for toxicology data, especially as applied to the analysis of drug/chemical mixtures. She is married to Otis Fulton and they have two children - a daughter in the 6^th grade and a son in the
2^nd grade.
Date: 10/12/2006 (Thursday)
Speaker: Dr. Kyle Siegrist
Department of Mathematical Sciences
University of Alabama in Huntsville
Title: 1/e: How to Find the Woman (or Man) of Your Dreams
Abstract: There are n candidates, totally ordered from best to worst. The candidates arrive sequentially, in random order. Our goal is to choose the best candidate; no one less will do.
Unfortunately, we cannot observe the absolute ranks of the candidates, but only relative ranks. Once rejected, a candidate is no longer available. What should our strategy be? When n is large, is
there any hope of finding the best candidate? The answers are interesting and surprising.
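The strategy the title alludes to is the classical optimal-stopping rule for this "secretary problem": let roughly n/e candidates pass, then accept the first one who beats everyone seen so far; for large n this picks the single best candidate with probability about 1/e ≈ 0.368. A quick Monte Carlo sketch (an illustration, not material from the talk):

```python
import random
from math import e

def best_choice_rate(n, trials, seed=1):
    """Success rate of the 1/e rule: skip round(n/e) candidates, then take
    the first candidate who beats every candidate seen so far."""
    rng = random.Random(seed)
    skip = round(n / e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))        # rank n-1 marks the best candidate
        rng.shuffle(ranks)
        bar = max(ranks[:skip])       # best of the rejected "observation" phase
        pick = next((r for r in ranks[skip:] if r > bar), ranks[-1])
        wins += (pick == n - 1)
    return wins / trials

rate = best_choice_rate(100, 4000)    # comes out near 1/e ~ 0.368
```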
Bio:Kyle Siegrist received a PhD in mathematics from Georgia Tech in 1979 and has been on the faculty of the Department of Mathematical Sciences at UAH since 1980. His research interests are
probability and stochastic processes. He is currently the editor of the Journal of Online Mathematics and its Applications.
Date: 10/31/2006 (Tuesday)
Speaker: Ms. Virginia Lewis
Department of Mathematics and Computer Science
Longwood University
Ph.D. Candidate in Mathematics Education at the University of Virginia
Title: Undergraduate Calculus Students' Definitions and Images of Function
Abstract: How do Longwood calculus students define the concept of function? What do these students include in their concept image of function? Is there a relationship between the level of calculus in
which a student is enrolled and the student's definition and image of function? Undergraduate calculus students at Longwood University were examined using mixed methods to determine how they define
function and their concept images of function. Surveys were completed in three different levels of calculus. Students were presented with a variety of graphs and asked to determine if the graphs were
functions. Students were also asked to define the word function. Interviews were conducted with two students in each level of calculus. We will discuss the results of this study and how these results
can inform calculus instruction.
Bio:Virginia Lewis graduated from Longwood College in 1992. She earned her Master's in Interdisciplinary Studies in 2003 at VCU. She is currently a doctoral candidate in mathematics education at UVA.
She is married with two daughters. In her free time she loves to play the piano and cross-stitch.
Date: 11/30/2006 (Thursday)
Speaker: Mr. David McWee
Principal Information Engineer
Professional Software Engineering, Inc. (PROSOFT)
Title: From the Classroom to the Cubicle
Abstract: The transition from the classroom to the application of knowledge can seem insurmountable, especially when a student is joining an enterprise-level software development process. This talk will identify some of the concerns and questions students may have about this transition. It will be a first-hand account of my own transition from student to professional software developer. I will provide some examples, including a project currently supported and maintained solely by Longwood graduates, as well as several others.
Bio:David McWee graduated from Prince Edward County High School in 1998 and Longwood University in 2002. Since leaving Longwood he has worked as a Civil Servant for the US Navy at Naval Surface
Warfare Center Dahlgren Division (NSWCDD). While at NSWCDD he worked on the Submarine Launch Ballistic Missile, and TOMAHAWK Programs as a software tester. He also worked on the AEGIS Radar Program
developing the Enhanced Radar Data Display System (ERDDS) which is currently maintained by only Longwood graduates. He was recently employed by The Consulting Network Inc. (TCNI) developing pieces of
the Tactical Control Software (TCS) for the Virginia Class Submarine. He is now working at the Joint Forces Command (JFCOM) for the Joint Combat Operational Analysis (JCOA) division developing the
Joint Lessons Learned Information System (JLLIS). JLLIS is intended to allow different military branches to share lessons learned and improve training.
Date: 1/30/2007 (Tuesday)
Speaker: Dr. Danny Cline
Assistant Professor
Department of Mathematics
Lynchburg College
Title: Self-Contradiction-or- I Am Lying To You
Abstract: Mathematicians and philosophers have long been fascinated by paradox. Some of the earliest known and most troubling paradoxes for mathematicians are variations on the self-contradictory
statement: "I am lying to you." In the early part of the 20^th century, Bertrand Russell found instances of these sorts of statements in mathematics and set out to find a way to eliminate them. In
the process he hoped also to grant a wish of mathematician David Hilbert, a program to list a complete set of axioms capable of generating all of mathematics. Russell and his colleague Alfred North
Whitehead made an attempt at this with their Principia Mathematica; however, the same flaws crept in. In 1931, logician Kurt Gödel used a clever proof to show that these same sort of
self-contradictory statements could be generated within Russell's system of axioms. He showed also that this contradiction cannot be removed from such a system and thus that there can be no hope of
ever completing mathematics and granting Hilbert's wish. In this talk, I will discuss the interconnected histories of Russell's paradox and Gödel's Incompleteness Theorem. I will also discuss how the
same self-contradictory statements that Russell hoped to remove from mathematics remained regardless of his efforts and undid the rest of his project.
Bio:Danny Cline was born in West Virginia and received two B.S. degrees from West Virginia University, one in Mathematics and one in Chemical Engineering. He went on to study mathematics at the
graduate level at Virginia Tech, earning his M.S. in Mathematics and, in 2004, his Doctorate in Number Theory. Since the fall of 2005, he has been an Assistant Professor of Mathematics at Lynchburg
College. His mathematical interests are primarily in number theory and the philosophy of mathematics.
Date: 2/8/2007 (Thursday)
Speaker: Dr. Gretchen Koch
Assistant Professor
Mathematics and Computer Science Department
Goucher College
Baltimore, Maryland
Title: E. Coli and Taylor Series: My Journey Through Biomathematics
Abstract:In this talk, Dr. Koch will describe the unusual journey she took to becoming a math-biologist. She will also detail what it is like to work with scientists in other disciplines. Finally,
she will present two computer models of cell division in E. coli depicting the minCDE system that decides where the middle of the cell lies. One is a Markov model, and the other is a Monte Carlo
simulation. Both models have answered important questions about the MinCDE system while presenting many more.
Bio:Gretchen Koch has been an assistant professor of mathematics and computer science at Goucher College since August 2005. She completed her Ph.D. in Mathematics at Rensselaer Polytechnic Institute
in Troy, New York in August 2005. She also completed her M.S. in Applied Mathematics at RPI in December 2002. Her B.S. in Mathematics was completed at St. Lawrence University in Canton, NY in May
2001. As an applied mathematician, she uses math to model real phenomena. As a teacher, she tries to relate the topics in lecture to things that happen in our world. Her research can be categorized
as biomathematics. Recently she created two computer models of cell division in E. coli depicting the system that decides where the middle of the cell lies. This system, the MinCDE system, is a
fascinating interplay of oscillating polymers. One of the aspects of her research she enjoys the most is that it is interdisciplinary, and thus she has the opportunity to work with biochemists and
see her work come to life in the laboratory.
Date: 3/1/2007 (Thursday)
Speaker: Mr. Robert M. Marmorstein
Ph.D. Candidate in Computer Science
College of William and Mary
Title: Multiway Decision Diagrams
Abstract: Multiway decision diagrams (MDDs) are a convenient way to store large sets of multi-field data. Unlike arrays and linked lists, MDDs take advantage of similarities between data items. In
this talk, we will present an algorithm for constructing an MDD, describe several important MDD operations, and illustrate the application of MDDs to a simple security filter. We will also discuss
when it is appropriate to use an MDD in place of other storage techniques.
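A toy sketch of the node-sharing idea behind decision diagrams (my illustration, not the construction algorithm from the talk): store a set of fixed-length tuples as a layered DAG, merging identical sub-diagrams through a "unique table" so that repeated structure is stored only once.

```python
def mdd_build(tuples, k, d):
    """Build a (quasi-)reduced ordered MDD for a set of k-tuples over {0,...,d-1}.

    Terminals are 0 (absent) and 1 (present); identical sub-diagrams are
    shared via the `unique` table, which is the defining trick of MDDs."""
    unique = {}    # (level, child ids) -> node id
    nodes = {}     # node id -> tuple of child ids, one per domain value

    def mk(level, suffixes):
        if level == k:                        # terminal: is the empty suffix here?
            return 1 if () in suffixes else 0
        kids = tuple(
            mk(level + 1, frozenset(t[1:] for t in suffixes if t[0] == v))
            for v in range(d)
        )
        key = (level, kids)
        if key not in unique:                 # hash-consing: reuse identical nodes
            unique[key] = len(nodes) + 2      # ids 0 and 1 are the terminals
            nodes[unique[key]] = kids
        return unique[key]

    root = mk(0, frozenset(map(tuple, tuples)))
    return root, nodes

def mdd_contains(root, nodes, t):
    node = root
    for v in t:
        node = nodes[node][v]
    return node == 1

root, nodes = mdd_build([(0, 1, 2), (1, 1, 2), (2, 1, 2)], k=3, d=3)
# all three tuples share the suffix (1, 2), so the root's three branches
# all point at one shared node; a plain trie would store that suffix three times
```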
Bio: Robert Marmorstein is a Ph.D. Candidate in Computer Science at the College of William and Mary. His research concentrates on the visualization, analysis, and repair of firewall policies. He has
an M.S. in Computer Science from William and Mary and a B.A. in Computer Science and Mathematics from Washington and Lee University. In addition to being a teaching assistant and a technical
assistant at William and Mary, he has also worked at NASA Langley Research Center.
Date: 3/8/2007 (Thursday)
Speaker: Dr. John Augustine
Visiting Assistant Professor
Computer Science Department
Colby College
Title: Resource Allocation in Computer Systems: Pandas and Tetris
Abstract: Resource allocation in computer systems leads to optimization problems that often have elegant solutions. In many cases, the problems are modeled mathematically and the solutions are
analyzed rigorously. We will discuss two problems and solutions in a manner that highlights this research process.
The problem of powering down is encountered when any energy intensive system goes through an idle period of time. I will present several models and algorithms that are increasingly more accurate in
capturing the essence of this problem.
The second problem is encountered while scheduling jobs in a Field Programmable Gate Array (FPGA). Simply put, an FPGA is a mesh of computing elements that can be reconfigured in run-time. Our goal
is to schedule an input set of jobs in a manner that minimizes the completion time of all jobs.
I will conclude with a summary of other previous work and plans for future research. In particular, I will describe how I plan to collaborate with undergraduate students.
Bio:Dr. Augustine has a Ph.D. in Information and Computer Science from the University of California at Irvine, a M.S. in Electrical and Computer Engineering and an M.S. in Systems Science, both from
Louisiana State University. His undergraduate degree was a B.E. in Computer Science and Engineering from the University of Madras in India. When he finds himself with some time to kill, he likes to
ride his road bike, photograph, and hike.
Date: 3/22/2007 (Thursday)
Speaker: Dr. Laura Taalman
Associate Professor of Mathematics
Department of Mathematics and Statistics
James Madison University
Title: Sudoku Variations and Research
Abstract: Sudoku puzzles and their variants are linked to many mathematical problems involving combinatorics, Latin squares, magic squares, polyominos, symmetries, computer algorithms, the rook
problem, knight tours, graph colorings, and permutation group theory. In this talk we will explore variations of Sudoku and the many open problems and new results in this new field of recreational
mathematics. Many of the problems we will discuss are suitable for undergraduate research projects. Puzzle handouts will be available for all to enjoy!
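One of the "computer algorithms" connections mentioned above is easy to demonstrate: a complete 9×9 Sudoku solver fits in a few lines of backtracking (a generic sketch, unrelated to the talk's own results):

```python
def solve(grid):
    """Backtracking Sudoku solver; grid is a flat list of 81 ints, 0 = empty.
    Fills grid in place and returns True if a solution exists."""
    try:
        i = grid.index(0)
    except ValueError:
        return True                                  # no empty cells: solved
    r, c = divmod(i, 9)
    used = {grid[r * 9 + j] for j in range(9)}       # row
    used |= {grid[j * 9 + c] for j in range(9)}      # column
    br, bc = r - r % 3, c - c % 3                    # 3x3 box corner
    used |= {grid[(br + dr) * 9 + bc + dc] for dr in range(3) for dc in range(3)}
    for v in range(1, 10):
        if v not in used:
            grid[i] = v
            if solve(grid):
                return True
    grid[i] = 0                                      # backtrack
    return False

puzzle = ("530070000600195000098000060800060003400803001"
          "700020006060000280000419005000080079")
grid = [int(ch) for ch in puzzle]
solved = solve(grid)   # True for this (solvable) puzzle
```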
Bio:Laura Taalman is an Associate Professor of Mathematics at James Madison University. She received her Ph.D in mathematics from Duke University, and did her undergraduate work at the University of
Chicago. Her research includes singular algebraic geometry, knot theory, and the mathematics of puzzles. She is the author of a textbook that combines calculus, pre-calculus, and algebra into one
course, and one of the organizers of the Shenandoah Undergraduate Mathematics and Statistics (SUMS) Conference at JMU. Her awards include the MAA Trevor Evans award for her jointly written article
"Simplicity is not Simple: Tesselations and Modular Architecture", and the MAA Alder Award for Distinguished Teaching.
Date: 4/5/2007 (Thursday)
Speaker: Dr. Deborah L. Gochenaur
Assistant Professor of Mathematics
Department of Mathematical Sciences
Elizabethtown College
Elizabethtown, Pennsylvania
Title: Evaluating STEM Intervention Programs
Abstract: As the United States science, technology, engineering, and mathematics (STEM) workforce continues to grow faster than the overall workforce, the need for college-trained STEM workers
continues to increase. While African Americans comprise 11.3 percent of the US population 18 years or older, they hold just 4.4 percent of the STEM degreed-workforce positions. Numerous intervention
programs, geared towards increasing the number of underrepresented minorities in STEM, were developed to increase undergraduate retention and attainment rates, graduate degree attainment rates, and
the rate at which students were entering the STEM workforce. Although many of these programs have conducted small self-studies, few have undergone extensive external program evaluation. As millions
of dollars continue to pour into these programs, evaluation of their effectiveness at increasing the numbers of African Americans in STEM needs to be addressed. An evaluation model that uses
qualitative analysis to build upon a quantitative analysis base, utilizing logistic regression, will be explored.
Bio: Dr. Gochenaur is currently an Assistant Professor in the Department of Mathematical Sciences at Elizabethtown College in Elizabethtown, PA. In addition to teaching duties in the Department she
serves as Program Director for Mathematics Education by overseeing pre-service teachers as they complete their academic programs and then supervising them in their professional semester. Dr.
Gochenaur has done research into underrepresented minorities for several years and is currently working to procure funding to increase the number of underrepresented minorities entering the teaching
workforce from Elizabethtown College. Dr. Gochenaur brings a variety of life experiences to her roles at the college; she was a non-traditional returning student, first raising a family and going the
route of butcher, baker, candlestick maker. Receiving her Ph.D. from American University in 2005, she was selected as an MAA Project NExT National Fellow.
Date: 4/19/2007 (Thursday)
Speaker: Dr. Ling Xu
Assistant Professor
Department of Mathematics and Statistics
James Madison University
Title: Detecting Multimodality in Ecological Data
Abstract: Body size is one of the most important features of any animal. Ecologists are interested in understanding the processes that determine whether distributions of body size are multimodal. Several broad
classes of methods have been devised to assess modality. This talk will focus on non-parametric inference with kernel estimates and a Bayesian approach with mixture models, and the utility of these
methods on body size distributions of boreal forest birds and mammals.
Bio:Ling Xu earned her Ph.D. in statistics from The University of New Mexico in 2005. Ling Xu is currently an Assistant Professor at James Madison University. | {"url":"http://www.longwood.edu/mathematics/Colloquium_2006_07.htm","timestamp":"2014-04-20T18:25:27Z","content_type":null,"content_length":"33273","record_id":"<urn:uuid:1af368d8-8d0d-4a5c-9599-31dc050d54e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need major help with Trig Identities!
January 9th 2008, 12:51 PM #1
Hey guys..
So I'm doing trig identities in my advanced functions course and for some reason my brain just will not even take hold of a basic grasping as to how they work.. Its quite frustrating. I was
wondering if anyone knew of any good 'math' websites that would be able to explain something like this well.
The question I am stuck on right now is
(1 + tan²x) / (csc²x)

And I think you can turn 1 + tan²x into sec²x. However, I can't get much farther than that. This is only the third question on my page and I have been sitting here for an hour or so working through
them and I'm not getting anywhere. The book we are using gives like 5 total examples for the whole section
I kind of have a test on Friday on this stuff, so I need to learn it pretty quickly.
help is appreciated!
Thank you Divideby0 and badgerigar for helping me out last time.
it would probably be better if you told us what the question asked you to do. it seems you want to simplify, but that is not the same thing as proving trig identities, which is what the title of
your thread suggests.
change $\sec^2 x$ to $\frac 1{\cos^2 x}$ and $\csc^2 x$ to $\frac 1{\sin^2 x}$
now what do you see?
1 + tan²x = sec²x = 1/cos²x.
csc²x = 1/sin²x.
Substitute the above and divide the two fractions.
So the answer will end up being tan²x.
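Written out as a single chain, that simplification is:

$\displaystyle \frac{1+\tan^2 x}{\csc^2 x} \;=\; \frac{\sec^2 x}{\csc^2 x} \;=\; \frac{1/\cos^2 x}{1/\sin^2 x} \;=\; \frac{\sin^2 x}{\cos^2 x} \;=\; \tan^2 x$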
Well, I did get tan²x as one of my answers, but my answer book says that the answer is 1 :/ So I'm unsure where they came up with that.
Yes, this was just a simplifying question.
The thing is, normally math comes relatively easy to me, but I think that trig identities use some sort of deducing that I'm just not attuned to..
I asked for help on here, because I'm on the third question of 5.1, and it goes all the way up to 5.5. So I figured if I'm already stuck here then I really need some help.
And I can't really ask the answer for every single question on this forum.
Thank you both for your help so far though.
Anyone know of any helpful websites?
the answer cannot be 1.
let's say it was, we have:
$\frac {1 + \tan^2 x}{\csc^2 x} = 1$
$\Rightarrow 1 + \tan^2 x = \csc^2 x$
which we know is not true in general
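A quick numerical spot-check backs this up — the expression equals $\tan^2 x$, not 1 (Python, purely for illustration):

```python
import math

for x in (0.3, 1.0, 2.2):  # sample points where sin x != 0
    value = (1 + math.tan(x) ** 2) / (1 / math.sin(x) ** 2)
    assert math.isclose(value, math.tan(x) ** 2, rel_tol=1e-12)
    assert abs(value - 1.0) > 0.05  # so the book's answer of 1 cannot be right
```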
Ibn al-Haytham
Ibn al-Haytham (ĭbˈən äl-hĪth-ämˈ) or Alhazen (ălhəzĕnˈ), 965–c.1040, Arab mathematician. Ibn al-Haytham was born in Basra, but made his career in Cairo, where he supported himself copying
scientific manuscripts. Among his original works, only those on optics, astronomy, and mathematics survive. His Optics, which relied on experiment rather than on past authority, introduced the idea
that light rays emanate in straight lines in all directions from every point on a luminous surface. Latin editions of the Optics, available from the 13th cent. on, influenced Kepler and Descartes. As
a cosmologist, al-Haytham tried to find mechanisms by which the heavenly bodies might be shown to follow the paths determined by Ptolemaic mathematics. In mathematics, al-Haytham elucidated and
extended Euclid's Elements and suggested a proof of the parallel postulate.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Venice, CA Calculus Tutor
Find a Venice, CA Calculus Tutor
...Why is this stuff important anyway? Where is this class going? It's time to get a tutor - someone who cares about your success, who can help you understand and get back on track, and show you
why this stuff is important.
18 Subjects: including calculus, chemistry, algebra 2, SAT math
...My name is David, and I hope to be the tutor you are looking for. I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus,
Calculus, Probability and Statistics. I have also helped students out with their coursework in Physics and Finance.
14 Subjects: including calculus, physics, geometry, statistics
...Most of my work has been in chemistry, but I have also helped students with algebra, geometry, and calculus. I went to UCLA for graduate school in chemistry (M.S., 2009), UC Davis for training
as a science teacher (credential, 2004), and UC Berkeley for a more general education focused on nutrit...
17 Subjects: including calculus, chemistry, organic chemistry, ESL/ESOL
...I am an approved tutor in SAT preparation. I have been working with the teacher training program at UCLA giving future teachers techniques and methods of teaching elementary mathematics. I work
well with K-6th children.
72 Subjects: including calculus, reading, English, physics
...Mathematics is my passion, and I have a very theoretical mindset (in that I prefer the student understand the WHY's - and hence memorization of formulas without context is something I shy away
from). I teach my students to derive formulas, based on prior knowledge - thereby instilling in them tr...
14 Subjects: including calculus, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Venice_CA_Calculus_tutors.php","timestamp":"2014-04-19T17:45:15Z","content_type":null,"content_length":"23985","record_id":"<urn:uuid:617f5d43-2202-4f95-a7d0-25de994a5a53>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Regression
Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is
considered to be a dependent variable. For example, a modeler might want to relate the weights of individuals to their heights using a linear regression model.
Before attempting to fit a linear model to observed data, a modeler should first determine whether or not there is a relationship between the variables of interest. This does not necessarily imply
that one variable causes the other (for example, higher SAT scores do not cause higher college grades), but that there is some significant association between the two variables. A scatterplot can be
a helpful tool in determining the strength of the relationship between two variables. If there appears to be no association between the proposed explanatory and dependent variables (i.e., the
scatterplot does not indicate any increasing or decreasing trends), then fitting a linear regression model to the data probably will not provide a useful model. A valuable numerical measure of
association between two variables is the correlation coefficient, which is a value between -1 and 1 indicating the strength of the association of the observed data for the two variables.
A linear regression line has an equation of the form Y = a + bX, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of
y when x = 0).
Least-Squares Regression
The most common method for fitting a regression line is the method of least-squares. This method calculates the best-fitting line for the observed data by minimizing the sum of the squares of the
vertical deviations from each data point to the line (if a point lies on the fitted line exactly, then its vertical deviation is 0). Because the deviations are first squared, then summed, there are
no cancellations between positive and negative values.
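Minimizing the sum of squared vertical deviations has a closed-form solution: the slope is b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and the intercept is a = ȳ − b·x̄. A minimal illustration (not tied to the dataset below):

```python
def least_squares_line(xs, ys):
    """Intercept a and slope b of the line y = a + b*x minimizing the
    sum of squared vertical deviations."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])  # data on y = 1 + 2x
```

For data lying exactly on a line, the fit recovers the line (here a = 1, b = 2); for noisy data it returns the best-fitting compromise.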
The dataset "Televisions, Physicians, and Life Expectancy" contains, among other variables, the number of people per television set and the number of people per physician for 40 countries. Since both
variables probably reflect the level of wealth in each country, it is reasonable to assume that there is some positive association between them. After removing 8 countries with missing values from
the dataset, the remaining 32 countries have a correlation coefficient of 0.852 for number of people per television set and number of people per physician. The r² value is 0.726 (the square of the
correlation coefficient), indicating that 72.6% of the variation in one variable may be explained by the other. (Note: see correlation for more detail.) Suppose we choose to consider number of people
per television set as the explanatory variable, and number of people per physician as the dependent variable. Using the MINITAB "REGRESS" command gives the following results:
The regression equation is People.Phys. = 1019 + 56.2 People.Tel.
A scatterplot of these data shows some points lying far from the rest. Such points are potential outliers, and depending on their location may have a major impact on the regression line (see below).
Data source: The World Almanac and Book of Facts 1993 (1993), New York: Pharos Books. Dataset available through the JSE Dataset Archive.
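The r² figures above follow directly from the correlation coefficient; a small self-contained check (illustrative only, not the 32-country data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample correlation coefficient of two equal-length sequences."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

r = pearson_r([1, 2, 3, 4], [3, 5, 7, 9])  # perfectly linear data: r = 1
```

Squaring the reported r = 0.852 gives 0.726, matching the r² (proportion of variation explained) quoted in the text.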
Outliers and Influential Observations
After a regression line has been computed for a group of data, a point which lies far from the line (and thus has a large residual value) is known as an outlier. Such points may represent erroneous
data, or may indicate a poorly fitting regression line. If a point lies far from the other data in the horizontal direction, it is known as an influential observation. The reason for this distinction
is that these points may have a significant impact on the slope of the regression line. Notice, in the above example, the effect of removing the observation in the upper right corner of the plot: the regression equation for the remaining data is
People.Phys = 1650 + 21.3 People.Tel.
The correlation between the two variables has dropped to 0.427, which reduces the r² value to 0.182. With this influential observation removed, less than 20% of the variation in number of people per
physician may be explained by the number of people per television. Influential observations are also visible in the new model, and their impact should also be investigated.
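The effect of an influential observation is easy to reproduce on synthetic data (a Python sketch; the numbers below are made up for illustration and are not the televisions/physicians data):

```python
def fit_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    return sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)

# Five well-behaved points on y = x, plus one observation lying far
# from the others in the horizontal direction.
xs = [1, 2, 3, 4, 5, 30]
ys = [1, 2, 3, 4, 5, 90]

slope_with = fit_slope(xs, ys)               # pulled up by the extreme point
slope_without = fit_slope(xs[:-1], ys[:-1])  # exactly 1 for the clean points
```

Dropping the single influential point changes the fitted slope from roughly 3.19 to exactly 1, mirroring the change in fit seen in the example above.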
A plot of the residuals is also a useful check for outliers and lurking variables. In our example, the residual plot amplifies the presence of outliers.
Lurking Variables
If non-linear trends are visible in the relationship between an explanatory and dependent variable, there may be other influential variables to consider. A lurking variable exists when the
relationship between two variables is significantly affected by the presence of a third variable which has not been included in the modeling effort. Since such a variable might be a factor of time
(for example, the effect of political or economic cycles), a time series plot of the data is often a useful tool in identifying the presence of lurking variables.
Whenever a linear regression model is fit to a group of data, the range of the data should be carefully observed. Attempting to use a regression equation to predict values outside of this range is
often inappropriate, and may yield incredible answers. This practice is known as extrapolation. Consider, for example, a linear model which relates weight gain to age for young children. Applying
such a model to adults, or even teenagers, would be absurd, since the relationship between age and weight gain is not consistent for all age groups.
WZW Models in a Cohesive ∞-Topos
Posted by Urs Schreiber
A few hours ago began the
Workshop on Representation Theoretical and Categorical Structures in Quantum Geometry and Conformal Field Theory.
I have prepared some slides for my talk tomorrow, titled
Maybe you’d enjoy looking at it. I’d be interested in whatever comment you might have.
Where the previous talk focused on the fact that there is physics at all induced in a cohesive $(\infty,1)$-topos, this one looks more specifically at aspects of conformal field theory, namely at
higher WZW theory, canonically existing in a cohesive $\infty$-topos.
Since my audience eats WZW models for breakfast, the slides do not explain what these are. They only give a little teaser for why they are of interest. Maybe I will find the time to expand the $n$Lab entry WZW model. But at least a bunch of references are currently listed there already.
Posted at October 31, 2011 5:12 PM UTC
[Numpy-discussion] try to solve issue #2649 and revisit #473
huangkandiy@gmai...
Wed Apr 3 13:44:23 CDT 2013
Hello, all
I try to solve issue 2649 which is related to 473 on multiplication of a
matrix and an array. As 2649 shows
import numpy as np
x = np.arange(5)
I = np.asmatrix(np.identity(5))
print np.dot(I, x).shape
# -> (1, 5)
First of all I assume we expect that I.dot(x) and I * x behave the same, so
I suggest add function dot to matrix, like
def dot(self, other):
return self * other
Then the major issue is the constructor of array and matrix interpret a
list differently. array([0,1]).shape = (2,) and matrix([0,1]).shape = (1,
2). It will throw an error when you run I.dot(x), because in __mul__, x will
be converted to a 1*5 matrix first. It's not consistent with np.dot(I, x),
which returns x. To fix that, I suggest checking the dimension of the array
when converting it to a matrix. If it's a 1D array, then convert it to a
column vector explicitly, like this:
if isinstance(data, N.ndarray):
+ if len(data.shape) == 1:
+ data = data.reshape(data.shape[0], 1)
if dtype is None:
intype = data.dtype
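For reference, the behavior being discussed can be reproduced directly (a sketch of the report in issue 2649, not part of the proposed patch):

```python
import numpy as np

x = np.arange(5)                  # 1-D array, shape (5,)
I = np.asmatrix(np.identity(5))   # 5x5 matrix

# np.dot accepts the 1-D array and returns a (1, 5) matrix ...
dot_shape = np.dot(I, x).shape

# ... while matrix multiplication first converts x to a (1, 5) matrix
# inside __mul__ and then fails on misaligned shapes.
try:
    I * x
    mul_raised = False
except ValueError:
    mul_raised = True
```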
Any comments?
Kan Huang
Department of Applied math & Statistics
Stony Brook University
physics(check my work)
Number of results: 169,681
Physics (check)
The answer is correct. But, be careful in the work. In your work, you are using 1000kg when you mean 1000g or 1kg. You knew what you meant, but you might lose some points in the work.
Sunday, May 18, 2008 at 9:31pm by Quidditch
work on each is weight*height. work=m*.041(1+2+3+...7)g check my thinking.
Wednesday, November 10, 2010 at 3:11am by bobpursley
post your work, please, I will check it. Ahhh. did you work mass in kg?
Monday, June 20, 2011 at 1:50pm by bobpursley
It is rather easy: work= PE change+friction work = mg8sin15 + 8*mg*CosTheta*.4 check that.
Sunday, December 6, 2009 at 8:35pm by bobpursley
Physics check
yes on 2. On 1, I will be happy to check your work.
Monday, April 14, 2008 at 7:16pm by bobpursley
Physics ~ work shown, pls help!
Bqv=mv^2/r v=Bqr/m v= .232*1.6E-19*.118/1.67E-27 Check that. Now compare it with whatever you did, I cant figure out what you did. for information, if you posted this as your answer: .118/2.83x10^-7
I dont understand what you did, and why did you not simplify this. I get about...
Friday, April 15, 2011 at 11:07am by bobpursley
No, you didn't work it right. Check the units in your work, your answer comes out as kg-second. Is that a force? You can do better than this.
Wednesday, April 16, 2008 at 2:37pm by bobpursley
Physics please check
since work is done on the gas, the value of work has to be negative. so you would do 2531 + 1101 instead
Sunday, April 15, 2007 at 3:01am by kat
I will be happy to check your work. Break the puling force into horizontal and vertical components. work= forcehorizontal*distance.
Saturday, October 23, 2010 at 1:54pm by bobpursley
I am having issues with factoring. I am trying to check my work using a TI84 Plus calculator but I don't know how to plug in a problem to check its factoring. Can anyone help? for example 44y^3+55y^
2-11xy how can I go about plugging that in to check my work?
Thursday, March 27, 2014 at 8:04am by Christopher
Physics - Check my work please :)
Check this please :D Which of the following three statements used to describe work are true: i)Work is done only if an object moves through a distance ii)Work is done only if energy is transferred
from one form to another iii)Work is done only if a force is applied In which of...
Monday, December 12, 2011 at 10:57pm by Kellie
Nothing to be lost about. Coulombs Law: force=kQ1*Q2/distance^2 I can check your work. Work this in the SI system of units.
Tuesday, April 24, 2012 at 5:49pm by bobpursley
George, how much of this do you know how to do? This is a lot of work for me just to check your answers and I don't want to work these if you already know how. If you have worked these, post your
work and I'll be happy to check your answers. I can do that much faster than I ...
Sunday, April 3, 2011 at 5:13pm by DrBob222
Math ~CHECK MY WORK~
Thank You Katalina! And Damon I said check my work not give me the answer! But thanks anyway!both of 2
Thursday, January 16, 2014 at 5:10pm by Wendy
Chemistry- please check my work
We would much prefer you to show your work and let us check your answer. We can help better that way.
Wednesday, February 25, 2009 at 9:54pm by DrBob222
chem- plz check my work
I didn't check the math but the procedure is right. Good work.
Wednesday, December 8, 2010 at 6:09pm by DrBob222
Physics - conservative and non conservative (check
I object to your use of "total energy". Your formula expression is the same as mine, but I found the work done on friction. That is not, nor even close to, total energy. So I am not certain what you
mean. You found friction work. I also object to the number of significant ...
Saturday, December 4, 2010 at 2:07pm by bobpursley
please please someone check my work I don't understand it fully so I am asking if someone could please check my work thank you if it's too much I'm sorry. God Bless
Wednesday, January 21, 2009 at 5:07pm by Hellokitty1993
This is a lot of work and you need to know how to do this type problem. So tell us what you understand, how much you can do, and what you don't understand and we can help you through it. I'm not
interested in spending the next hour typing the answers just so you can check your...
Monday, February 18, 2008 at 8:51am by DrBob222
#1 is ok. #2. I assume binding energy is the same as work function. K. E. = (hc/lambda) - work function Convert 956 ev to Joules and substitute for K. E. h, c, and lambda (don't forget lambda is in
meters). Solve for work function. Check my thinking. Check for typos.
Monday, November 5, 2007 at 10:14pm by DrBob222
physics 3!! **
The energy (work) required to life any mass against a graviatational force is force*distance or mgh. This translates to a change of PE. Power=work/time. I will be happy to check your thinking.
Monday, October 6, 2008 at 5:09am by bobpursley
please check these: In which of the following sentences is WORK used in the scientific sense of the word? a. holding a heavy box requires a lot of work b. a scientist works on an experiment in the
laboratory c. sam and rachel pushed hard, but they could do no work on the car d...
Monday, February 9, 2009 at 5:27pm by y912f
chemistry stoichiometry
It would be far simpler for you to show your calculations and let us check them than for us to work the problems, take the time to post, and let you check our work.
Sunday, September 9, 2007 at 9:34pm by DrBob222
Thank you!
Wednesday, November 7, 2012 at 11:39pm by Amy
I can check your work on this.
Thursday, December 6, 2012 at 3:27pm by bobpursley
I will be happy to check your work.
Monday, February 4, 2008 at 7:09pm by bobpursley
IF you post your work, I will check it for you.
Saturday, October 11, 2008 at 6:24pm by bobpursley
I will be happy to check your work.
Wednesday, October 28, 2009 at 4:59pm by bobpursley
I will be happy to check your work.
Sunday, December 13, 2009 at 7:11pm by bobpursley
I will be happy to check your work.
Sunday, March 7, 2010 at 9:37pm by bobpursley
I will be happy to check your work.
Thursday, December 15, 2011 at 12:56pm by bobpursley
I will be happy to check your work.
Wednesday, August 29, 2012 at 6:40pm by bobpursley
If you care to post your work I will be glad to check it. Don't take shortcuts in doing it. When checking problems against an on-line database the problem (and it happens quite often) is you key in
too many (or not enough) significant figures. Check that first before typing in ...
Sunday, October 27, 2013 at 9:38pm by DrBob222
Chemistry- please check my work
That's why we like to see your work. Seeing your work made me put those remarks in about the solids.
Wednesday, February 25, 2009 at 9:54pm by DrBob222
physics 2
Why don't you show your work and let us check it.
Wednesday, April 9, 2008 at 4:52pm by DrBob222
see above . Check my work.
Friday, October 24, 2008 at 6:29pm by bobpursley
repost this, with your thinking/work. I will check it for you
Monday, February 14, 2011 at 4:27am by bobpurslley
What is your thinking on this? I will be happy to check your work.
Thursday, June 16, 2011 at 12:28am by bobpursley
and your thinking is...? I will be happy to check your work.
Tuesday, January 21, 2014 at 8:54pm by bobpursley
Early one October you go to a pumpkin patch to select your Halloween pumpkin. You lift the 2.8 kg pumpkin to a height of 1.27 m, then carry it 51.4 m (on level ground) to the check-out stand. How
much work do you do on the pumpkin as you carry it from the field? I have ...
Saturday, March 13, 2010 at 9:56pm by Physics Nerd
Sorry to bother you, can you explain the steps to me, so I can see the work and check my work. I thought i had done it wrong, I started my work and then guessed.
Wednesday, March 24, 2010 at 10:50pm by Anonymous
The gravel will work efficiently if the brakes of the truck are at least working, even though they may not be in perfect condition. The kinetic energy, Ek of the truck on entering the ramp =(1/2)mv²
Increase in potential energy, Ep, at the end of the ramp of length L =...
Saturday, October 31, 2009 at 8:10pm by MathMate
Physics repost please check
your formula is correct. however since it says the work is done on the system work has to be negative so you would do 2531-(-1101) or 2531 + 1101. That should give u the right answer
Monday, April 16, 2007 at 6:34pm by kat
I really dont get this. physics
I presume you are in calculus-based physics. Rate of work = d(work)/dt = d/dt (1/2 kx^2) = kx dx/dt = kx * velocity. But velocity = sqrt(2*KE/mass) where KE = 10 - 1/2 kx^2, so rate of work = kx*sqrt((2/mass)(10 - 1/2 kx^2)). Check my thinking.
Sunday, November 18, 2007 at 6:12pm by bobpursley
check my work I did it off the top of my head.
Friday, April 3, 2009 at 9:32am by bobpursley
Find your error. Or, post your work, and I will check.
Thursday, October 1, 2009 at 5:56pm by bobpursley
I will be happy to check your work, or critique your thinking.
Sunday, July 11, 2010 at 6:54pm by bobpursley
I will be happy to check your optics equation work
Tuesday, March 6, 2012 at 9:10pm by bobpursley
thermodyn. please check work
2. does the temp of the nail not increase? 3. When you squeeze gas it gets hot. If you do it rapidly enough there is no time for heat to escape. 4. That did the gas do work on? 5. Again I do not see
any work being done. 6. The steam expands through the turbine, losing ...
Saturday, August 15, 2009 at 10:26pm by Damon
^=exponents /= divide So I am not sure how to do this. It is an example in my book but I still dont get it. Can you work through ti so I can do my real work. (-1^2+-1^-2)^-1 (-1^2+-1^-2)^-1 = First,
do work within the parentheses. -1*-1 = 1 and -1-2 = 1/(-12) = [1+(1/(-12)]= [...
Thursday, November 23, 2006 at 7:35pm by Gina
physics problem-check
work = force*distance. Looks ok to me.
Tuesday, October 16, 2007 at 11:13pm by DrBob222
physics repost
Thinking is correct. I didn't check calc work.
Friday, April 4, 2008 at 12:08pm by bobpursley
Use conservation of momentum. I will be happy to check your work.
Monday, October 27, 2008 at 5:25am by bobpursley
work applied = 39 cos(22.6) * 1 m; work done on book = 1/2 * 4 * 1.75^2; work friction = work applied - work done
Wednesday, July 18, 2012 at 8:06pm by bobpursley
physics repost
I didnt check calc work, but thinking is correct.
Friday, April 4, 2008 at 12:10pm by bobpursley
Physics-circular force
I didn't get that, I get on the order of .1 Check your work, or post it here.
Wednesday, October 28, 2009 at 5:02pm by bobpursley
See your 10:30am post.
Thursday, November 8, 2012 at 9:15pm by Henry
Physics Please check my answer
There is no motion in the direction of the force. NO WORK DONE!
Friday, January 10, 2014 at 6:27pm by Damon
Physics- drwls please check
Good morning. This is a repost. The system is telling me my calculations is incorrect. I thought a fresh pair of eyes would help. Suppose a monatomic ideal gas is contained within a vertical cylinder
that is fitted with a movable piston. The piston is frictionless and has a ...
Thursday, April 19, 2007 at 9:06am by Mary
Io, The formula Q1 d)t=-ln(0.25)*2*R*C doesn't work for me, you can check it, there can be there a mistake is?
Wednesday, May 22, 2013 at 3:37pm by BoobleGum
See the steps I gave you. Start with those, I will check your work on each step.
Tuesday, October 1, 2013 at 4:02pm by bobpursley
physics concepts (please check over my work)
30kgm 6kg 3m/s
Saturday, December 9, 2006 at 9:18pm by Anonymous
distance = rate x time 120 m = 290 m/s x time time = ?? The gravitational force will be acting on the bullet during that time. The distance it will fall is S = 1/2 (g)t^2 I get something like 0.8 m
or about 80 cm. Check my thinking. Check my work.
Saturday, September 26, 2009 at 11:14pm by DrBob222
Physics - check my work
A cannon is fired from a cliff 190 m high downward at an angle of 38o with respect to the horizontal. If the muzzle velocity is 41 m/s, what is its speed (in m/s) when it hits the ground? Please
check my work. I think it’s wrong, but I don’t know where I’m messing up… Vx = 41 ...
Friday, November 9, 2007 at 10:57am by Lindsay
work in the block=friction*distance but friction force is mainly constant in the wood, so reducing the distance by 1/2 reduced the work absorbed by half 1/2 m 100^2 -1/2 m 10^2=friction work so if
friction work is halved, then that is added to the final KE. final KE=1/2 m 10^2...
Wednesday, August 31, 2011 at 8:02pm by bobpursley
Work=mgh work=mxgxh work=20x10x2 work=400j
Monday, June 20, 2011 at 1:58pm by Olaniyan
T = Tom's age. S = Son's age. ============== equation 1 is T - 15 = S equation 2 is T+ 10 + 5 = 3*(S+10) ================================= Solve the two equations simultaneously. Post your work if
you get stuck. Check my thinking. Check my work.
Saturday, December 15, 2007 at 9:22pm by DrBob222
vf^2=Vi^2 + 2ad 4=30.96+2(-3.3)d d= about 4 meters. Check my work.
Sunday, October 21, 2007 at 2:31pm by bobpursley
Check my answer that I posted yesterday. Show your work if you need further assistance.
Sunday, January 24, 2010 at 12:45pm by drwls
I will be happy to check your work. Also, do remember sum of vertical forces is zero.
Wednesday, October 31, 2012 at 12:18pm by bobpursley
Math(work problem)
After 3 days, there is still 5/8 job left to do. (5/8)/3 = 1/8 + 1/x x = 12 Check: In 3 days, the man can do 3/8 of the work The helper can do 3/12 = 1/4 of the work 3/8 + 1/4 = 5/8, the amount of
the work left to do.
Monday, July 1, 2013 at 7:37pm by Steve
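The arithmetic in the snippet above can be verified mechanically with exact fractions (a Python sketch, assuming, as the reply does, that the remaining 5/8 of the job is finished in 3 more days):

```python
from fractions import Fraction

# The man works alone for 3 days at 1/8 job per day.
remaining = 1 - 3 * Fraction(1, 8)       # 5/8 of the job left

# Man and helper together finish the remaining 5/8 in 3 days,
# so their combined rate is (5/8)/3 = 5/24 job per day.
combined_rate = remaining / 3

# Helper's rate and the time the helper would need alone.
helper_rate = combined_rate - Fraction(1, 8)   # 1/12 job per day
helper_days = 1 / helper_rate                  # 12 days
```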
Physics repost please check
Good morning. This is a repost. The system is telling me my calculations is incorrect. I thought a fresh pair of eyes would help. Please tell me what I am doing wrong. Suppose a monatomic ideal gas
is contained within a vertical cylinder that is fitted with a movable piston. ...
Friday, April 20, 2007 at 2:48am by Mary
Let R = # red marbles and B = # black marbles. =========================== R + B = 42 R = 3B + 3 Solve the two equations. I have 13 for B and 29 for R but check me out. Check my work. Check my
Monday, May 25, 2009 at 11:46am by DrBob222
Here is one way to solve this system of equations. Substitute x+3 for y in the second equation. x + 3 = -3x + 7 Now solve for x: x + 3x = -3 + 7 4x = 4 x = 1 Therefore, y = 4 Check both equations to
see if the solutions work. It always helps to check your work!
Tuesday, January 31, 2012 at 12:13pm by MathGuru
You have to be pulling our legs. Surely you can do this. Someone here will check your work, if you wish.
Saturday, October 18, 2008 at 2:45pm by bobpursley
I will be happy to try to check your work. You have two equations: conservation of momentum, and energy.
Friday, August 10, 2012 at 3:20pm by bobpursley
Yes, I could. I will be happy to check your work, but you need to do your own thinking. Answer grazers learn nothing.
Thursday, September 6, 2012 at 11:00am by bobpursley
If you use S=v0*t + (1/2)at² you have S=960m v0=80.2 m/s a=4.2 m/s² Solve for t. I get 9.6 sec. for going up, and 14 sec. for coming down (v0=0,a=-g,S=-960). Check my work and post if you would like
to check answers.
Sunday, October 11, 2009 at 11:37pm by MathMate
1. No The train does work; the wagon has work done on it. 2, Negative net work ON an object is work done BY that object against another force. When negative work is done ON an object, its kinetic
energy may decrease, but not necessarily. The work done can be converted to other...
Saturday, October 27, 2007 at 9:44pm by drwls
how would I check this on my own? I know you put numbers in the place of the letters but when I do that I come up with my moms answer. this is what i have when I check my work. p=-2 q=3 6*(-2)*3 =-36
but 3(-2)(-2*3)=36 could you tell me what i'm doing wrong when I check them pls
Friday, October 22, 2010 at 6:13pm by TiffanyJ
I am getting the wrong answer. Can you please check my work? My answer is off by a multiple of ten. A sample of an unknown material appears to weigh 295 N in air and 165 N when immersed in alcohol of
specific gravity 0.700. (a) What is the volume of the material? ___________m^...
Thursday, November 8, 2012 at 10:30am by Amy
You have four rod segments, in which you can use the parallel axis theorem. I will be happy to check your work.
Saturday, October 18, 2008 at 10:49am by bobpursley
you use the mg-(2.3E2*cos21)for fnormal...bobpursley you really should check your work before...
Saturday, November 13, 2010 at 11:07am by Bob
Business Management
1. Plan the work to be delegated. 2. Decide whom to delegate for this work. Consider time, aptitude, and skills of this person. 3. Check periodically to make sure the work is being done correctly. 4.
Add additional information and advice as needed.
Tuesday, December 1, 2009 at 4:09pm by Ms. Sue
Physical Science
You are absolutely right. I don't know that you haven't done any work. I do know, however, that you didn't post any work nor show us that you had done any. We shall be happy to help you with your
checking but it's best to work the other way; i.e., you post and we check. Thanks...
Monday, April 28, 2008 at 6:43pm by DrBob222
(1) Are you doing any volunteer work? How about your friends? 1. Yes, I am. They are doing (some) volunteer work, too. 2. No, I'm not. They are not doing (any) volunteer work, either. (2) Do you do
any volunteer work? How about your friends? 1. Yes, I do. They do (some) ...
Thursday, November 18, 2010 at 4:58am by rfvv
Physics, still don't get it!
Suppose a monatomic ideal gas is contained within a vertical cylinder that is fitted with a movable piston. The piston is frictionless and has a negligible mass. The area of the piston is 3.06 x 10^-2 m^2, and the pressure outside the cylinder is 1.02 x 10^5 Pa. Heat (2109 J) is ...
Wednesday, April 18, 2007 at 10:30pm by Mary
Work done by the system is -. It seems to me that the system does 196 J work so that is -196 J. It receives 40 J of heat which is +. Therefore, -196+40 would be the change. The environment would be
the negative of that. Check my thinking.
Saturday, April 26, 2008 at 1:29pm by DrBob222
I am getting the wrong answer. Can you please check my work? My answer is off by a multiple of ten. A sample of an unknown material appears to weigh 295 N in air and 165 N when immersed in alcohol of
specific gravity 0.700. (a) What is the volume of the material? ___________m^...
Thursday, November 8, 2012 at 9:15pm by AMY
Okay, I figured this was the answer. Not to mention, mathematically, the calculation would be W = (10)(9.8)(cos90)(10) But, cos90 = 0, so the calculation shows that the man does 0 J of work. If the
question had concerned him picking up the sack vertically, then work would have...
Wednesday, February 8, 2012 at 2:23pm by Mishaka
social studies
No one here will do your work for you. Post what you think, and we'll check your work for you.
Wednesday, January 2, 2013 at 11:06am by Writeacher
how would I check this on my own? I know you put numbers in the place of the letters but when I do that I come up with my moms answer. this is what i have when I check my work. p=-2 q=3 6*(-2)*3 =-36
but 3(-2)(-2*3)=36 could you tell me what i'm doing wrong when I check them ...
Friday, October 22, 2010 at 6:44pm by TiffanyJ
chem- repost
work at constant temp and pressure, will be pressure*deltaV in this case, you get two volumes of product S8(s)+4O2>>8SO2 work= p*dV on a per S8 mole basis, work= 101.3kPa*4*22.4dm^3*1m^3/1000dm^3
work= above in Joules check my thinking
Sunday, April 17, 2011 at 7:20pm by bobpursley
Algebra Concept check
How can we check your answers, if you have not given them? We do not do your work for you.
Wednesday, May 12, 2010 at 11:48am by PsyDAG
the key...^=exponent QUESTION: What is 2^3-6x3^2 Can anyone help me please!!!! 2^3 is 8 3^2 is 9 I will be happy to check your work. is it 1? [[[help]]] u there No, of course not. Where is your work?
Thursday, October 5, 2006 at 5:17pm by jessica
Please know that no one will do your work for you. If YOU post what YOU THINK, someone here will be happy to check your work.
Monday, September 10, 2012 at 8:15am by Writeacher
Will this check work? Is this correct? 0-2 = -2 0-1 = -1 -1/-2 = 0.5 0^2 – 1 = -1 -1/2 = -0.5 0.5 + -0.5 = 0 THEN 1-2 = -1 1-1 = 0 0/-1 = 0 1^2 – 1 = 0 0/2 = 0 0+0 = 0 x = 0, 1 Does this all check
Tuesday, August 4, 2009 at 8:58am by Rachale
Pre-calculus-check answers
1) I got a remainder of 5, check your work 2) correct
Saturday, August 23, 2008 at 1:47pm by Reiny
precalcus will someone check my homework
no. check it f(3)=ab^3=3*2^3=24, not -3/8 recheck your work.
Saturday, March 3, 2012 at 7:31pm by bobpursley
Chemistry(Please Check)
4,5,6 look ok. 1,2,and 3 I can't confirm. Show your work on those. I didn't check 7.
Monday, April 23, 2012 at 11:56am by DrBob222
If You Read Knowing and Teaching Math by Liping Ma, What Math Do You Use? - K-8 Curriculum Board
I just read "Knowing and Teaching Elementary Mathematics" by Liping Ma. I used to teach second grade and I have to admit I could have been an example of the poor American teacher without a deep
understanding of math. I taught math how I was taught using phrases like "borrow from your neighbor".
So if you read the book, what math series are you using? My son is almost 5, so I need to decide which math series to use soon.
oh boy do i need help - complex analysis
September 13th 2008, 05:25 PM #1
Junior Member
Sep 2008
I can't do #3 or #4
I have a feeling I can do 3 but I'm not quite sure where to go from
and for 4 I know that to show it's not injective show that exp(0,0) = exp(0,2pi)
3) The problem here is: how are $\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}$ really defined? I assume it means that we think of $f: D\to \mathbb{C}$ as $f(x,y) = (u(x,y),v(x,y))$ (since the complex numbers are simply points in $\mathbb{R}^2$ formally).
Now define $\frac{\partial f}{\partial x} = \left( \frac{\partial u}{\partial x}, \frac{\partial v}{\partial x} \right)$ and $\frac{\partial f}{\partial y} = \left( \frac{\partial u}{\partial y},
\frac{\partial v}{\partial y} \right)$.
This means $\left( \frac{\partial u}{\partial x}, \frac{\partial v}{\partial x} \right) + i \left( \frac{\partial u}{\partial y}, \frac{\partial v}{\partial y} \right) = \left( \frac{\partial u}
{\partial x}, \frac{\partial v}{\partial x} \right) + \left( - \frac{\partial v}{\partial y} , \frac{\partial u}{\partial y} \right)$.
However, $\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} = 0$ and $\frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} = 0$ by the Cauchy-Riemann equations.
4)Note $\exp(0) = \exp(2\pi i)$ thus it is not injective.
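The identity derived in 3) can be sanity-checked numerically with central differences (a Python sketch, not part of the thread; f = exp is one convenient analytic function):

```python
import cmath

def partials(f, z, h=1e-6):
    """Central-difference approximations of df/dx and df/dy at z."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return dfdx, dfdy

# For an analytic f, df/dx + i*df/dy should vanish (Cauchy-Riemann).
dfdx, dfdy = partials(cmath.exp, 0.3 + 0.7j)
residual = abs(dfdx + 1j * dfdy)
```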
thank you
now i'll try 5 and 6 on my own for a while before asking help for those
May I offer a different perspective?
If $x=\frac{z+\overline{z}}{2}$ and $y=\frac{z-\overline{z}}{2i}$ then:
$\frac{\partial f}{\partial \overline{z}}=\frac{\partial u}{\partial x}\frac{\partial x}{\partial \overline{z}}+\frac{\partial u}{\partial y}\frac{\partial y}{\partial \overline{z}}+i\left(\frac{\partial v}{\partial x}\frac{\partial x}{\partial \overline{z}}+\frac{\partial v}{\partial y}\frac{\partial y}{\partial \overline{z}}\right)$
$=\frac{1}{2}\left(\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\right)+\frac{i}{2}\left(\frac{\partial v}{\partial x}+\frac{\partial u}{\partial y}\right)$
Well there you go: If $f$ is analytic then the partials satisfy the Cauchy-Riemann equations so $\frac{\partial f}{\partial \overline{z}}=0$ and the only way for
$\frac{1}{2}\left(\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\right)+\frac{i}{2}\left(\frac{\partial v}{\partial x}+\frac{\partial u}{\partial y}\right)=0$
is for the partials to satisfy the Cauchy-Riemann equations, that is, $f$ must be analytic.
Last edited by shawsend; September 14th 2008 at 03:43 PM. Reason: corrected formulas
thank you but i got that one already
also for #4
where it says show that exp(C) = C/{0}
what does that mean? the notation is unfamiliar to me and hasn't been explained in course lecture or notes
also there is no textbook for the class which makes this particularly frustrating
Hey jb. It's a function applied to a domain. Suppose I define the closed unit disk as the domain $D=\left\{z : |z|\le 1\right\}$
Then I could say, what is the function at that domain? or what is $f(D)$ meaning how does the function map the domain $D$. Same with $e^{\mathbb{C}}$. How does the exponential function map all of
$\mathbb{C}$? It does so by the expression:
$e^{\mathbb{C}}=\mathbb{C}\backslash\{0\}$ meaning it maps all of $\mathbb{C}$ to the deleted neighborhood: $\mathbb{C}\backslash\{0\}$ which is $\mathbb{C}$ minus the origin.
thank you
the complex plane not including the origin <- thats what I needed to know
I'm still at a loss to show how the exp maps to that but I'm sure I'll figure it out by tonight
If $\alpha \in \mathbb{C}^{\times}$ then you should know the fact that it is possible to write $\alpha = re^{i\theta}$ where $r>0$. Now let $z = \log r + i\theta$. Then $e^z = e^{\log r + i\theta} = re^{i\theta} = \alpha$. This shows that the equation $e^z = \alpha$ always has a solution for any $\alpha\in \mathbb{C}^{\times}$. Thus, the function $\exp : \mathbb{C} \to \mathbb{C}^{\times}$ is onto.
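Both facts are easy to confirm numerically (a Python sketch, not from the thread; alpha = -3 + 4i is an arbitrary nonzero choice):

```python
import cmath

# Non-injectivity: exp takes the same value at 0 and at 2*pi*i.
inj_gap = abs(cmath.exp(0) - cmath.exp(2j * cmath.pi))

# Surjectivity onto C \ {0}: write alpha = r*e^{i*theta} and set
# z = log(r) + i*theta; then exp(z) recovers alpha.
alpha = -3 + 4j
r, theta = abs(alpha), cmath.phase(alpha)
z = cmath.log(r) + 1j * theta
surj_err = abs(cmath.exp(z) - alpha)
```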
umm I got #5 a and b d
but I can't get c
also if it's not too much of a bother could someone give hints at 6
I don't want them entirely answered for me because I still want to be learning this stuff
It may be late (since it's past 3pm...), but I don't care
I'm also learning how to do this stuff !
For #5 c), use Cauchy-Hadamard theorem : Cauchy-Hadamard theorem - Wikipedia, the free encyclopedia
My teacher says the best way above all to solve this problem is to use Abel's theorem :
$R=\sup \{ ~ r \in \mathbb{R} / a_n r^n \text{ is bounded } \}$
(R is the radius of convergence)
Here, Hadamard's theorem is more straightforward.
#6 is above my capacities...
Cauchy-Hadamard's theorem is the name for this "root test".
September 13th 2008, 06:00 PM #2
Global Moderator
Nov 2005
New York City
September 13th 2008, 07:42 PM #3
Junior Member
Sep 2008
September 14th 2008, 01:41 PM #4
Super Member
Aug 2008
September 14th 2008, 01:46 PM #5
Junior Member
Sep 2008
September 14th 2008, 01:57 PM #6
Super Member
Aug 2008
September 14th 2008, 02:10 PM #7
Junior Member
Sep 2008
September 14th 2008, 02:25 PM #8
Global Moderator
Nov 2005
New York City
September 14th 2008, 05:16 PM #9
Junior Member
Sep 2008
September 15th 2008, 12:05 PM #10
September 15th 2008, 12:54 PM #11
Global Moderator
Nov 2005
New York City
September 15th 2008, 01:26 PM #12 | {"url":"http://mathhelpforum.com/calculus/48946-oh-boy-do-i-need-help-complex-analysis.html","timestamp":"2014-04-20T18:37:21Z","content_type":null,"content_length":"75487","record_id":"<urn:uuid:f03e5873-13bd-4ac5-a9bc-b78e10dbb670>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
RcppZiggurat: Rcpp integration of different Ziggurat Normal RNGs
Random numbers are of critical importance for simulations. In particular, those following a standard normal distribution are frequently used.
, as a system and environment for statistical programming, comes with several generators for the normal distribution. However, while these are thoroughly tested and well-regarded for their
statistical properties, these builtin-generators are not among the fastest generators. The following figure, taken from the
detailed pdf vignette
, illustrates the speed to four different Ziggurat implementations relative to the default R generator. It displays violin plots of the results aggregated by the microbenchmark package, running 100
replications of one million draws each:
We can also illustrate it with the standard benchmark table:
R> library(rbenchmark)
R> n <- 1e6
R> benchmark(rnorm(n), zrnorm(n))[,1:4]
test replications elapsed relative
1 rnorm(N) 100 11.643 9.27
2 zrnorm(N) 100 1.256 1.00
For settings in which the RNG draws constitutes a measurable amount of time, the gain can be noticeable. In other situations, the about nine-fold gain may be somewhat immaterial as it affects only a
small part of the overall execution.
Please see the
detailed pdf vignette
(also included in the package) for more details on the algorithm, the statistical validation and the timing comparisons.
The Ziggurat algorithm was introducded by
Marsaglia and Tsang (2000)
in a well-known JSS paper. An important correction was published by
Leong et al (2005)
, also in JSS. Several Open Source implementations exists. We compare the following three in our vignette: the version in the
due to Jochen Voss, a version from
GNU Gretl
, and an experimental version from
Where do I get it
The package is available
• as releases via CRAN and its mirrors,
• source code (plus some extras such as underlying papers) are in the GitHub repo, and
• local copies of release tarballs are also available from the Rcpp directory on my server.
RcppZiggurat is written and maintained by Dirk Eddelbuettel.
RcppZiggurat is licensed under the GNU GPL version 2 or later. | {"url":"http://dirk.eddelbuettel.com/code/rcpp.ziggurat.html","timestamp":"2014-04-21T14:41:50Z","content_type":null,"content_length":"10434","record_id":"<urn:uuid:84a2494c-ee0c-481c-9208-f177c95cb272>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
On a clear day: solution
On a clear day: solution
R is the radius of the Earth, H is your height and D is how far you can see. The walking distance across the Earth's surface is the length of the arc s.
Solution to problem 1: In Issue 54's Outer Space you were asked to work out the "walking" distance over the Earth's surface to the horizon. This is equal to the length
Solution to problem 2: You see something, like a cloud or a hot air balloon, high in the sky beyond the horizon and you know its maximum possible height above the ground. Can you work out how far
away it could be?
Let's assume that the object is at the maximum possible height
Using Pythagoras' theorem, we see that the distance from you to point
We have already worked out that for a person of height
Now let
Noting that
Now the maximum distance to the object from the observer is approximately equal to | {"url":"http://plus.maths.org/content/plus-magazine-14","timestamp":"2014-04-21T09:54:21Z","content_type":null,"content_length":"35325","record_id":"<urn:uuid:6cfbb70e-3c5f-4904-ad8b-1d7face1fea2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: DHS Ghana variable construction question
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: DHS Ghana variable construction question
From Tharshini Thangavelu <thth4658@student.su.se>
To statalist@hsphsun2.harvard.edu
Subject Re: st: DHS Ghana variable construction question
Date Mon, 27 Jul 2009 15:42:38 +0200 (CEST)
Beginning with responding to Friedrich
1.) The suggested command seems not be working for getting the age of mother and
father. I investigated further on this problem and found that the following notice:
.tab hv105
Age of |
household |
members | Freq. Percent Cum.
0 | 772 22.69 22.69
1 | 706 20.75 43.45
2 | 655 19.25 62.70
3 | 689 20.25 82.95
4 | 553 16.26 99.21
5 | 27 0.79 100.00
Total | 3,402 100.00
As you can see from the above table, the age of household member contains an
interval of 1-5, this explains why I cannot get the mother and fathers age
correctly. Further on, it seem like if the variables hv105 and hc1 which denotes
age in months corresponds as does the two following variables; hc27 - sex and
hv104 - sex of household member.
Does this means that my merged dataset is not reliable? Where is the problem? I
believe that I have followed the merging document that is available when
downloading datasets from DHS.
Can I ignore this problem and simply use the variables for mother and fathers
age from the following variables; v730 - partners age and v447a - womens' age in
years from household report.
2.) Thanks for the suggested articles, I will definitely
take a closer look at them.
3.) Creating nr of siblings produced the following results.
tab sno
sno | Freq. Percent Cum.
-1 | 966 28.62 28.62
0 | 1,198 35.50 64.12
1 | 1,112 32.95 97.07
2 | 91 2.70 99.76
3 | 8 0.24 100.00
Total | 3,375 100.00
I wanted to double check if this is reasonable. My intuition said that if ; v218
- No. of living children = (sno + hv014 denotes no. of children under 5 years
old), it should be correct. But just by looking at the data editor, it was clear
that this was not the case. How do I know that the created variable sno, is correct?
Question 1.
I have a specific question concerning some variables existence in the DHS data
sample. Probably those who has been working extensively or familiar to this data
can answer.
Is there in the Mens' Recode file : their heigh and weight? I have been looking
for theses two variables without any success? Since there is height and weight
for women I thought surely there must be for men. Having these two variables for
men as control variables in my analysis can be good, given that they exists.
Question 2.
Since DHS is a survey, I know there is command svy which incorporates some of
the survey characteristic. I read about the command, apparently there are under
certain circumstances should be avoided to use. I am using a simple OLS and 2SLS
(IV method). I know that applying this command yield a consistent std. error,
p-value and confidence intervall. Though, my issu concern if the difference is
notable that I should use the command? Is the command applicable when using 2SLS
with IV method?
On 2009-07-26, at 14:11, Friedrich Huebler wrote:
> Tharshini,
> Answer to question 1: The ages of the mother and father are given with
> these commands:
> by hhid: gen mage = hv105[hv112]
> by hhid: gen fage = hv105[hv114]
> The wrong parents' ages may be a consequence of missing observations
> in a household. See these posts from the Statalist archive for a
> possible solution:
> http://www.stata.com/statalist/archive/2006-06/msg00321.html
> http://www.stata.com/statalist/archive/2006-06/msg00323.html
> Answer to question 2: The best you can do is use the wealth index as
> an indicator of relative household wealth. For more information read
> this article:
> Deon Filmer and Lant H Pritchett, “Estimating wealth effects without
> expenditure data - or Tears: An application to educational enrollments
> in states of India,” Demography 38, no. 1 (February 2001): 115-132.
> Answer to question 3: As an example, assume you want to consider only
> children that have the same mother and father.
> * Create unique ID for all groups of siblings
> egen sid = group(hhid hv112 hv114)
> * Count number of siblings of children under 5
> bysort sid: egen sno = count(sid)
> replace sno = . if hv105>=5
> replace sno = sno - 1
> Some children cannot be identified as siblings from the mother's and
> father's line number, among them children whose parents are dead, do
> not live in the same household or for whom the parents' line numbers
> are missing. To exclude these children modify the code above:
> egen sid = group(hhid hv112 hv114) if hv112>0 & hv112<99 & hv114>0 & hv114<99
> With the commands above, children who do not share both parents but
> only have the same mother or father cannot be identified as siblings.
> For further reading I recommend these Stata FAQs:
> http://www.stata.com/support/faqs/data/anyall.html
> http://www.stata.com/support/faqs/data/members.html
> Friedrich
> On Sat, Jul 25, 2009 at 9:47 AM, Tharshini
> Thangavelu<thth4658@student.su.se> wrote:
>> Hi,
>> I have few things that I wonder in the DHS dataset.
>> Question 1.
>> Responding first to Friedrich Hueblers answer 2009-06-22.
>> I tried as you said inorder to get the age of mother and age of father. But
>> those variables seems weird to me. Some of the values indicate 2, which in
>> impossible. None of mothers or fathers' age can actually be 2 years. There are
>> variables indicating partners age (v730)and mothers' age (v447a). I just don't
>> understand why the two variables created according to the following command:
>> by hhid: gen mage = hv105[hv112]
>> by hhid: gen fage = hv105[hv114]
>> doesn't give the same value in years as in the v730 and v447a. Normally this
>> should be the case, or is it?
>> Question 2. - Disposable household income variable
>> I would like to create a variable for disposable household income. This variable
>> doesn't exit in the DHS datasample but I would like to use a proxy variable that
>> is available in the data. The suggested proxy variables are;
>> .wealth index hv270 (indicates 1-5 level, where 1 is the poorest and 5 is
>> .respondens'currently working v714 (the respondent consist of only women which
>> then will not indicate a good proxy for household income)
>> Other potentially proxy variables are:
>> . partners occupation v704 v705
>> . respondents occupation v716 v717
>> I don't know which variable that can in an efficient and in a consistent way
>> show the disposable household income variable.
>> question 3. - Nr of siblings
>> I would like to create, if possible nr of siblings for children under 5 (my
>> dependent variable)
>> How can I create this variable. I have looked if there is variable for nr of
>> siblings. However, looking at the data sample closely the variables
>> . Nr of household member h009
>> . Nr of children under five years h014
>> My reasoing for creation of nr of siblings is the following: looking closely at
>> these two variables shows the following:
>> Ex: in row nr5. the h009 denotes 5 and h014 denotes 2. Thus, this particular
>> household, incorporates 5 household members with 2 children under five. The one
>> member that is left, who is this person? Is it sibling as I am assuming or
>> another family member such as a relative. Further more, I don't know if both
>> parents are alive in the household. In order to check if both parents are alive
>> I take on this method.
>> sort hhid hvidx
>> by hhid: gen mother = hv010[hv112]
>> by hhid: gen father = hv011[hv114]
>> The hv010 and hv011 represents nr of eligible women and nr of eligible men in
>> the household. The hv112 and hv114 denotes, mother and fathers line nr
>> respectively. Nevertheless, there are two other variables sh11 and sh13 which
>> also indicates mother and fathers line nr. Does it matter which one I use?
>> Somehow, it doesn't give me the desired results. Instead I try to combine with
>> the variables
>> . mothers' alive sh10
>> . fathers' alive sh12
>> This I just check by edit command. In the end how can I verfiy if the one member
>> is actually a sibling? Because this is the variable that I am looking for.
>> So, if someone can enlighten me in these three question. I would be happy.
>> Best regards
>> Tharshini
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Tharshini THANGAVELU
Forskarbacken 8 / 101
114 16 Stockholm
Phone +46 (0)735 53 43 90
E-mail thth4658@student.su.se
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-07/msg01086.html","timestamp":"2014-04-18T23:57:33Z","content_type":null,"content_length":"16253","record_id":"<urn:uuid:449d177e-8ea4-4ba7-819d-7bfbf8aabbb8>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding fertilizer math
There's a surprising amount of math involved in using chemical fertilizers. This section covers the following useful fertilizer math skills:
• Converting fertilizer recommendations from an N-P[2]O[5]-K[2]O basis to the actual kind and amount of fertilizer needed.
• Selecting the most economical fertilizer.
• Mixing fertilizers.
• Determining how much fertilizer is needed per area, per plant, and per length of row.
• Converting fertilizer dosages from weight to volume.
USE THE METRIC SYSTEM!: It greatly simplifies fertilizer math and most other calculations. Even if your country doesn't use metrics, it's well worth your while to use it for calculation purposes.
Here's how to quickly convert some common non-metric units into metric (see Appendix A also):
lbs./acre X 1.12 = kg/ha; 1 lb. = 0.454 kg = 454 g
1 acre = about 4000 sq. meters (actually 4048 m2) 1 manzana (Latin America) = 7000 sq. meters
4" = 10 cm
4" = 10 cm
8" = 20 cm
12" = 30 cm
16" = 40 cm
18" = 45 cm
24" = 60 cm
30" = 75 cm
32" = 80 cm
36" = 90 cm
40" = 100 cm
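The conversion factors above are easy to wrap in a couple of helpers. This is a minimal sketch; the function names are ours, not from the text:

```python
# Metric conversions for fertilizer work.

def lbs_per_acre_to_kg_per_ha(rate):
    return rate * 1.12               # 1 lb/acre = 1.12 kg/ha

def inches_to_cm(inches):
    return inches * 2.54             # 1 inch = 2.54 cm

print(round(lbs_per_acre_to_kg_per_ha(100)))  # 112 kg/ha
print(round(inches_to_cm(18)))                # 46 cm -- the table above rounds to 45
```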
CONVERTING RECOMMENDATIONS FROM AN N, P[2]O[5], K[2]O BASIS TO THE ACTUAL KIND AND AMOUNT OF FERTILIZER NEEDED
As explained in Chapter 9, fertilizer recommendations aren't always given in terms of actual kind and amount of fertilizer. Instead, technical brochures and soil testing labs often give
recommendations in terms of the amount of N, P[2]O[5], and K[2]O needed per hectare. In this case, it's up to you and the farmer to determine what kind and amount of actual fertilizer is needed per
hectare to match this recommendation. Let's run through a practice problem:
PROBLEM 1: A farmer's cooperative has just received the following fertilizer recommendation for a one hectare tomato field.
│ │ kg/hectare │
│ │ N │ P[2]O[5] │ K[2]O │
│ At transplanting │ 40 │ 80 │ 40 │
│ 1st sidedressing at 30 days │ 30 │ │ │
│ 2nd sidedressing at 60 days │ 30 │ │ │
│ 3rd sidedressing at 90 days │ 30 │ │ │
Suppose the local ag supply store has the following fertilizers available. What kind will be needed, how much of each, and what will the cost be?
│ Fertilizers Available │ Cost per 50 kg Sack │
│ 15-15-15 │ $18 │
│ 16-20-0 │ $15 │
│ 20-10-5 │ $14 │
│ 10-20-10 │ $16 │
│ 20-0-0 │ $12 │
STEP 1: Let's begin with the 40-80-40 transplanting recommendation. The first thing to do is to look at the ratio of N:P[2]O[5]:K[2]O and then look for a fertilizer with a similar ratio. The 40-80-40
figure has a ratio of 1:2:1. Look at the fertilizer list and you'll see that 10-20-10 is the only one with a 1:2:1 ratio, so it's the one that's needed.
STEP 2: There are 2 ways to find out how much 10-20-10 is needed to supply the 40 kg N, 80 kg P[2]O[5], and 40 kg K[2]O needed for the hectare:
a. You know that each 100 kg of 10-20-10 supplies 10 kg of N, 20 kg of P[2]O[5], and 10 kg of K[2]O. Therefore 400 kg would supply 40-80-40.
b. The second way is to divide the percentage of N, P[2]O[5] or K[2]O in the 10-20-10 into the respective amount of N, P[2]O[5], or K[2]O needed. Let's do this using N:
40 kg N needed = 40 kg = 400 kg 10-20-10 needed 10% N in the fertilizer 0.10
Note that you would get the same answer using P[2]O[5] or K[2]O so it's only necessary to do this division once.
STEP 3: Now what about the N sidedressings of 30 kg actual N each? In this case, choosing the right fertilizer is easy, since the 20-0-0 fertilizer is the only one containing just N. To find out how
much 20-0-0 will be needed to supply the 30 kg of N needed for a sidedressing, use one of the 2 methods in STEP 2 as follows:
a. You know that each 100 kg of 20-0-0 supplies 20 kg of N. 200 kg would supply 40 kg N. It would take 150 kg of 20-0-0 to supply 30 kg (i.e. 150 X 20% = 30)
b. Divide 20% into 30 kg which gives you 150 kg.
Therefore, 3 sidedressing of 150 kg 20-0-0 each will be needed for a total of 450 kg.
STEP 4: You've determined that 400 kg of 10-20-10 and 450 kg of 20-0-0 are needed for the hectare, so you can now calculate the cost:
│ 400 kg 10-20-10 at $16/100 kg = │ $64 450 kg │
│ 20-0-0 at $12/100 kg = │ $54 │
│ │ $118 TOTAL │
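The whole of Problem 1 reduces to the division method in Step 2. A minimal sketch (the function name is ours, not from the text):

```python
# Divide the kg of nutrient needed by the nutrient's fraction in the grade.

def fertilizer_needed(kg_nutrient, pct_in_fertilizer):
    """kg of fertilizer required to supply kg_nutrient of one nutrient."""
    return kg_nutrient / (pct_in_fertilizer / 100)

# Problem 1: 40 kg N from 10-20-10, then 30 kg N per sidedressing from 20-0-0
print(round(fertilizer_needed(40, 10)))  # 400 kg of 10-20-10
print(round(fertilizer_needed(30, 20)))  # 150 kg of 20-0-0 per sidedressing

# Cost check: 400 kg of 10-20-10 at $16/100 kg plus 450 kg of 20-0-0 at $12/100 kg
print(400 * 16 / 100 + 450 * 12 / 100)   # 118.0 dollars
```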
You won't always be able to fit a recommendation exactly, because the right type of fertilizer may not be available locally. At any rate, you don't have to be exact, since soil tests and
recommendations aren't 100% accurate anyway. But, do try to get within 10-25% of the amounts recommended. There's nothing wrong with having to apply more P than needed in order to supply enough K or
vice-versa; P won't leach, and K is fairly immobile. However, avoid putting too much N on at planting or leaching losses may be high.
Let's look at a situation where the fertilizers don't exactly fit the recommendation (Problem 2):
PROBLEM 2: Soil test results recommend that Fatou fertilize her maize field as follows:
│ │ kg per hectare │
│ │ N │ P[2]O[5] │ K[2]O │
│ At planting │ 30 │ 50 │ 40 │
│ At knee high stage │ 50 │ │ │
Given the following fertilizers, how much and what kind will be needed per hectare?
Fertilizers Available
STEP 1: Let's begin with the planting recommendation of 30 kg N, 50 kg P[2]O[5], and 40 kg K[2]O. That's a 3:5:4 ratio (or 1:1.7:1.3). None of the available fertilizers has this ratio, but 12-24-12
is the closest with a 1:2:1: ratio.
STEP 2: Let's figure out how much 12-24-12 is needed to supply the 30 kg of initial N:
30 kg N needed / 12% N in fertilizer = 250 kg 12-24-12
250 kg of 12-24-12 per hectare would supply 30 kg N, 60 kg P[2]O[5] and 30 kg K[2]O. This falls short on K[2]O. by about 25% but runs over on P[2]O[5] about 20% This is still satisfactory. Now what
would happen if we tried to supply the exact amount of P[2]O[5] (50 kg) using 12-24-12?:
50 kg P[2]O[5] needed / 24% P[2]O[5] in fertilizer = 208 kg 12-24-12
208 kg/hectare of 12-24-12 supplies 25-50-25 which is about 20% less N and 40% less K[2]O. than called for.
The 3rd option is to see how much 12-24-12 it would take to supply the exact amount of K[2]O (40 kg) called for:
40 kg K[2]O needed / 12% K[2]O in fertilizer = 333 kg 12-24-12
333 kg of 12-24-12/hectare supplies 40-80-40 which is about 30% more N and 60% more P[2]O[5] than called for at planting. You could adjust for the extra N by applying less in the sidedressing, but
there's no way to compensate for the 30 kg extra P[2]O[5] applied. True, some of this excess will be available to future crops, but at the expense of having to buy about 33% more 12-24-12 compared to
the 250 kg rate.
Thus, of the 3 options, the first one of applying 250 kg of 12-24-12 is best.
STEP 3: Now for the N sidedressing. The 33-0-0 fertilizer (ammonium nitrate) is the only straight N source, so it's the one to use. Calculate the amount needed to supply the 50 kg of N as follows:
50 kg N needed / 33% N in fertilizer = 150 kg 33-0-0 needed
SELECTING THE MOST ECONOMICAL FERTILIZER
You can't compare a 14-14-14 and a 10-30-10 fertilizer on the basis of cost per unit of nutrient for 2 reasons:
• A 1:1:1 ratio fertilizer may be better suited than a 1:3:1 ratio or vice-versa, depending on the soil and the crop.
• N, P[2]O[5] and K[2]O. don't necessarily cost the same per kg of nutrient.
However, you can compare straight fertilizers having just one of the "Big 3", such as urea (45-0-0) vs. ammonium sulfate (20-0-0), or single superphosphate (0-20-0) vs. triple superphosphate
(0-48-0). You can also compare NP or NPK fertilizers having the same ratio, such as 10-20-10 and 12-24-12.
When comparing several sources of the same nutrient as to cost, what counts is the cost per kg of nutrient, not the cost per sack. Let's run through a practice problem:
PROBLEM 3: Which of the 3 N fertilizers below is the most economical source of N, other considerations aside?
│ Fertilizer │ % N │ Cost per 50 kg sack │
│ Urea │ 45% │ $18.00 │
│ Ammonium nitrate │ 33% │ $15.84 │
│ Ammonium sulfate │ 21% │ $11.76 │
SOLUTION: Although ammonium sulfate has the lowest cost per sack, it's not necessarily the cheapest. The real test is the cost per kg of N. Here's how to calculate it:
UREA: A 50 kg sack contains 22.5 kg of N (50 kg x 45%)
$18.00 / 22.5 kg N = $0.80 per kg of N
AMMONIUM NITRATE: A 50 kg sack contains 16.5 kg of N.
$15.84 / 16.5 kg N = $0.96/kg of N
AMMONIUM SULFATE: A 50 kg sack contains 10.5 kg of N.
$11.76 / 10.5 kg N = $1.12/kg of N
This makes urea the cheapest source of N. Usually, the fertilizer with the highest content of the nutrient will be the most economical due to lower shipping costs per unit of actual nutrient.
However, this isn't always the case.
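Problem 3's comparison is a one-line calculation: price per sack divided by the kg of actual nutrient in the sack. A sketch using the table's prices and analyses (the function name is ours):

```python
# Cost per kg of actual N, not cost per sack.

def cost_per_kg_nutrient(price_per_sack, sack_kg, pct_nutrient):
    return price_per_sack / (sack_kg * pct_nutrient / 100)

for name, price, pct in [("urea", 18.00, 45),
                         ("ammonium nitrate", 15.84, 33),
                         ("ammonium sulfate", 11.76, 21)]:
    print(name, round(cost_per_kg_nutrient(price, 50, pct), 2))
# urea $0.80, ammonium nitrate $0.96, ammonium sulfate $1.12 per kg of N
```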
Other factors may be important aside from the cost per kg of nutrient. Although it's the most economical (in this case), urea is a very highly concentrated source of N; farmers unfamiliar with urea
may over-apply it and waste money or injure their crops. As for ammonium sulfate, it's often the most costly per kg of N, yet it might be the best choice for a sulfur-deficient soil, unless another
sulfur-bearing fertilizer were used at planting time. On the other hand, ammonium sulfate is considerably more acid forming in its longterm effect on soil pH than either urea or ammonium nitrate (see
Table 9-1). Ammonium nitrate is a quicker-acting N source than ammonium sulfate or urea, because half of its N is already in the mobile, nitrate form. It might be the best choice where a crop is
showing N deficiency symptoms (see Appendix E) or where sidedressing has been delayed.
MIXING FERTILIZERS
There are cases where it's necessary to mix 2 or 3 different fertilizers together in order to obtain the nutrient ratio needed to suit a recommendation. For example:
PROBLEM 4: Suppose that the extension office recommends the following fertilizer rates for cabbage at planting time:
│ kg per hectare │
│ N │ P[2]O[5] │ K[2]O │
│ 40 │ 80 │ 40 │
The local ag supply store has the following fertilizers on hand:
15-15-15
0-45-0 (triple superphosphate)
Is it possible to meet the 40-80-40 recommendation by mixing 2 or more of these together? If so, what proportions are needed, and what is the resulting fertilizer formula?
SOLUTION: The 40-80-40 recommendation has a 1:2:1 ratio. The 15-15-15 provides NPK in a 1:1:1 ratio. What's needed is to increase the amount of P by adding some 0-45-0 fertilizer. The easiest way to
calculate the amounts needed is to set up a worksheet as follows:
│ │ │ N │ P[2]O[5] │ K[2]O. │
│ 100 kg 15-15-15 │ = │ 15 kg │ 15 kg │ 15 kg │
│ X kg 0-45-0 │ = │ 0 kg │ 15 kg │ 0 kg │
│ 100 + X kg │ = │ 15 kg │ 30 kg │ 15 kg │
This worksheet helps visualize the problem. It shows that in order to end up with a 1:2:1 N:P[2]O[5]:K[2]O ratio, we need to combine 100 kg of 15-15-15 with enough 0-45-0 to add 15 extra kg of P[2]O[5].
To figure out how much 0-45-0 is needed, divide 15 by 45%:
15 kg P[2]O[5] needed / 45% P[2]O[5] = 33 kg 0-45-0
Now let's fill in the worksheet:
│ │ │ N │ P[2]O[5] │ K[2]O │
│ 100 kg 15-15-15 │ = │ 15 kg │ 15 kg │ 15 kg │
│ 33 kg 0-45-0 │ = │ 0 kg │ 15 kg │ 0 kg │
│ 133 kg │ = │ 15 kg │ 30 kg │ 15 kg │
This shows that mixing 100 kg of 15-15-15 with 33 kg of 0-45-0 will produce 133 kg of a fertilizer with a 1:2:1 ratio.
Determining the true formula of the mix: At first glance, it would seem that the formula of the mixture is now 15-30-15, but it isn't! What you've made is 133 kg of fertilizer containing 15 kg N, 30
kg P[2]O[5], and 15 kg K[2]O. But, fertilizer formulas are based on nutrient content in percent (i.e. kg of N, P[2]O[5], and K[2]O per 100 kg of fertilizer). Here's how to derive the true formula:
True formula = 15-30-15 / 1.33 = about 11.3-22.6-11.3
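The worksheet method can be sketched in a few lines: total the kg of each nutrient in the blend, then divide by the total weight to get the true formula in percent. Note that 33 kg of 0-45-0 actually carries 14.85 kg of P[2]O[5] rather than a full 15, so the computed formula lands slightly under the text's rounded figures (the function name is ours):

```python
# Mix two or more grades and derive the blend's true formula.

def mix_formula(parts):
    """parts: list of (kg, (pct_N, pct_P2O5, pct_K2O)) -> (total kg, formula in %)."""
    total_kg = sum(kg for kg, _ in parts)
    kg_nutrients = [sum(kg * grade[i] / 100 for kg, grade in parts)
                    for i in range(3)]
    return total_kg, tuple(round(100 * n / total_kg, 1) for n in kg_nutrients)

# Problem 4: 100 kg of 15-15-15 plus 33 kg of 0-45-0
total, formula = mix_formula([(100, (15, 15, 15)), (33, (0, 45, 0))])
print(total, formula)  # 133 (11.3, 22.4, 11.3)
```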
CAUTION!: Not all Fertilizers can be Mixed
• Lime in any form should not be mixed with ammonium N fertilizers or urea. It will cause loss of N as ammonia gas.
• Lime should also not be mixed with any chemical fertilizer containing P, because it may convert some of the P into an insoluble, unavailable form.
DETERMINING HOW MUCH FERTILIZER IS NEEDED PER AREA, PER PLANT, AND PER LENGTH OF ROW
Fertilizer recommendations are usually given on a per hectare (or per acre) basis. However, such figures are of little use unless you know how to determine the following:
• How much actual fertilizer is needed, given the size of the particular field?
• If the fertilizer will be applied using an LP (localized placement) method, how much fertilizer is needed per plant if the hole or half-circle method is used, or how much per length of row if it's
banded? (These 2 application methods were covered earlier in this chapter.)
TABLE 9-5 FERTILIZER MIXING GUIDE
BLANK BOXES = Fertilizers which can be mixed.
BOXES WITH AN "X" = Fertilizers which can be mixed only shortly before use.
BOXES WITH A "O" = Fertilizers which cannot be mixed for chemical reasons.
EXAMPLES: Ammonium sulfate should not be mixed with lime.
Urea can be mixed with single or triple superphosphate shortly before use.
PER AREA
For Large Fields: Measure the field's dimensions and calculate the area. If its shape is not rectangular, you may have to divide it up into triangles and rectangles and determine the area of each.
(The area of a triangle equals 1/2 the base X the height.)
(The area of a triangle equals 1/2 the base X the height.)
PROBLEM 5: Suppose soil tests recommended applying 250 kg/ha of 16-20-0 to grain sorghum at planting time. How much is needed for a field measuring 40 x 80 meters?
SOLUTION: One hectare = 10000 sq. meters
The field's size = 3200 sq. meters (40 x 80)
3200 sq. meters X 250 kg/ha / 10000 sq. meters = 80 kg of 16-20-0 needed
For Small Plots: The metric system has some very handy shortcuts. A very useful one is:
100 KG/HA = 10 GRAMS PER SQ. METER
In other words, to convert from kg/ha to g/sq. meter, just drop a zero and change kg to grams!! Here's why it works:
100 kg/ha = 100,000 grams/hectare
100,000 grams / 10,000 sq. meters = 10 grams/sq. meter
PROBLEM 6: If the extension service recommends broadcasting 10-30-10 at 600 kg/ha for nursery seedbeds when no compost or manure are available, how many grams of 10-30-10 would be needed for a nursery
seedbed measuring 1 X 5 meters?
SOLUTION: 600 kg/ha = 60 g/sq. meter
1 x 5 meters = 5 sq. meters
5 sq. meters x 60 g/sq. meter = 300 grams of 10-30-10 needed
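The "drop a zero" shortcut and Problems 5 and 6 can be sketched together (function names are ours, not from the text):

```python
# kg/ha -> g/sq. meter, then scale by the plot's area.

def g_per_sq_meter(kg_per_ha):
    return kg_per_ha / 10            # 100 kg/ha = 10 g/sq. meter

def grams_for_area(kg_per_ha, area_sq_m):
    return g_per_sq_meter(kg_per_ha) * area_sq_m

print(grams_for_area(250, 40 * 80) / 1000)  # Problem 5: 80.0 kg of 16-20-0
print(grams_for_area(600, 1 * 5))           # Problem 6: 300.0 g of 10-30-10
```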
PER PLANT
NOTE: The calculations below are based on open fields with evenly-spaced rows running across them. Where "intensive" gardening is used (beds with alleyways around them), another factor needs to be
considered. We'll cover this after explaining the open-field calculations.
If using the half-circle or hole method of placement, the farmer will need to know how much fertilizer is needed per plant (or group of plants if they're in "hills"). There are several ways of doing
this, but most people agree that the following method is the simplest:
PROBLEM 7: Angelita is planning to plant a field of maize with rows 90 cm apart. She'll plant 3 seeds per hole with 60 cm between holes, using a planting stick. The extension office recommends
applying 18-46-0 at 150 kg/ha. If she uses the hole method of fertilizer placement, how many grams of 18-46-0 should each seed group receive?
SOLUTION: 150 kg/ha = 15 g/sq. meter
0.9 m X 0.6 m = 0.54 sq. meters of space belonging to each plant group.
0.54 X 15 g/sq. meter = 8.1 grams of 18-46-0 per plant group
NOTE: As you see, it's not necessary to know the field's size in order to arrive at the above answer. All that's needed is the rate per hectare and the in-row and between-row spacings. Of course, you
need to know the field's area (or the total number of plants) to figure out how much fertilizer to buy.
PROBLEM 8: A communal garden project has run out of manure and is about to transplant cabbage on a field measuring 20 x 20 meters. The local extension office recommends applying 16-20-0 at 250 kg/ha
using the half circle method. How much fertilizer should each plant receive if the rows are 60 cm apart with 40 cm between plants in the row?
SOLUTION: 250 kg/ha = 25 g/sq. meter
0.6 m X 0.4 m = 0.24 sq. meters space occupied by each plant
0.24 X 25 g/sq. meter = 6 grams of 16-20-0 needed per plant.
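Problems 7 and 8 use the same per-plant rule: each plant "owns" a patch of field equal to row spacing times in-row spacing, so its dose is that area times the g/sq. meter rate. A sketch (the function name is ours):

```python
# Per-plant dose = (kg/ha converted to g/sq. meter) x each plant's area.

def grams_per_plant(kg_per_ha, row_spacing_m, in_row_spacing_m):
    return (kg_per_ha / 10) * row_spacing_m * in_row_spacing_m

print(round(grams_per_plant(150, 0.9, 0.6), 1))  # Problem 7: 8.1 g per seed group
print(round(grams_per_plant(250, 0.6, 0.4), 1))  # Problem 8: 6.0 g per cabbage plant
```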
PER METER OF ROW LENGTH
NOTE: The calculations below are based on open fields with evenly-spaced rows running across them. Where "intensive" gardening is used (beds with alleyways around them), another factor needs to be
considered. We'll cover this after explaining the open-field calculations.
When banding fertilizer, farmers need to know how much to apply per meter of row length (or per row). As with per-plant dosages, there are several ways of calculating this, but the simplest and
quickest method is shown below:
PROBLEM 9: Suheyla is about to apply an N sidedressing to her grain sorghum field. The recommendation is 200 kg/ha of 21-0-0 (ammonium sulfate). The rows are spaced 90 cm apart, and she plans to
apply the fertilizer in a band running down the middle of each row. How many grams of 21-0-0 should be applied per meter of row length?
STEP 1: 200 kg/ha = 20 g/sq. meter
STEP 2: All the fertilizer in a meter of row length will be confined in a band. If you can calculate the area belonging to that one meter of row length, you can figure out the dosage per meter:
Area belonging to 1 meter of row length = 1 meter of length X 0.9 m of width = 0.9 sq. meters
STEP 3: 0.9 sq. meters X 20 g/sq. meter = 18 g of 21-0-0 per meter of row length.
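Problem 9's banding rule fits in one line: each meter of row "owns" a strip 1 m long by one row-spacing wide, and all of that strip's fertilizer goes into the band. A sketch (the function name is ours):

```python
# Banded dose per meter of row = g/sq. meter x (1 m length x row spacing).

def grams_per_meter_of_row(kg_per_ha, row_spacing_m):
    return (kg_per_ha / 10) * row_spacing_m

print(round(grams_per_meter_of_row(200, 0.9), 1))  # 18.0 g of 21-0-0 per meter
```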
All the above dosage calculations were based on the open-field system of crop spacing where the rows are spaced equally across the field. (Both systems are explained and illustrated in Chapter 4.)
However, if you use the same assumption when calculating dosages for intensively-grown crops (bed-and-alley system) you'll end up significantly shortchanging the plants on fertilizer. Here's why:
• In the intensive system, vegetables are grown under very close spacings within beds (flat, raised, or sunken) that are separated by alleyways used for all foot and equipment traffic.
• Since virtually all root growth takes place in the soil within the beds, no fertilizer (or water) should be applied to the alleys.
• The fertilizer recommendation per hectare is the same for both systems, BUT this means that the dosage per plant, per meter of row length, and per planted area (i.e. beds only) will be higher in
the intensive system to make up for the fact that no fertilizer is applied to the alleyways.
• Another way of explaining this is that plants grown under the intensive system are spaced much more closely than when grown on an open-field basis. Because of this, more fertilizer is needed per
sq. meter of actual bed. Since alleyways aren't fertilized, the amount of fertilizer per hectare ends up the same for both systems.
NOTE: You may think that water rates need to be similarly increased per sq. meter, but not so. That's because the high plant densities under bed-and-alley cropping shade more of the soil surface and
thus lower evaporation losses of water; this helps compensate for the increased usage caused by the higher plant density. Also, the plant leaves shade each other more, which lowers transpiration
(actual plant usage).
Let's go over how to calculate fertilizer dosages for bed-and-alley cropping.
PER AREA (Bed-and-Alley System)
In the open-field system, 100 kg per hectare equals 10 grams per sq. meter. Now, in the intensive system you have 2 types of area: bed area (planted area) and alley area. This means that 100 kg/ha of
fertilizer doesn't work out to 10 g/sq. meter of actual bed area. If you applied 10 g per sq. meter to all the bed area, you'd end up applying much less than 100 kg/ha, because of not fertilizing the
alley area which can equal about 30-40% of the total area.
Let's run through a practice problem on how to adjust for this:
PROBLEM 10: Akbar has 10 beds each measuring 1 x 10 meters; they're separated from each other by alleyways 60 cm wide on all 4 sides. He is told to apply 12-24-12 at 300 kg/ha at planting and wants
to know how much fertilizer to buy.
STEP 1: 300 kg/ha = 30 g/sq. meter of total area (beds + alleys)
STEP 2: Determine the total area (beds + alleys) occupied by Akbar's plots:
It's accurate enough to assume that each 1 x 10 meter bed is separated from another by a 60 cm alleyway on each of its 4 sides.
Therefore each bed along with its associated portion of alley (half the width of each alley) measures 1.6 x 10.6 meters, which equals about 17 sq. meters.
10 beds x 17 sq. meters (bed + alley) = 170 sq. m
STEP 3: Calculate the amount of 12-24-12 needed for all the beds, based on the total area involved.
170 sq. m x 30 g/sq. m = 5100 g = 5.1 kg of 12-24-12 needed
STEP 4: Calculate the amount needed per bed:
5.1 kg are needed but will be applied only to the beds themselves, not alleys.
5100 grams ÷ 10 beds = 510 grams of 12-24-12 needed per bed
Now you can see how much difference there is in dosages. If you had based the dosage on 30 g/sq. m and used bed area alone, each bed would receive 300 grams of fertilizer (10 x 30) instead of the 510
grams it really deserves!
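The bed-and-alley arithmetic can be scripted the same way. Note that the manual rounds each bed-plus-alley area from 16.96 up to 17 sq. meters, so the exact calculation comes out slightly under 510 g per bed. This is a sketch only, and the names are ours:

```python
def bed_dose_grams(rate_kg_per_ha, bed_len_m, bed_wid_m, alley_wid_m):
    """Grams of fertilizer per bed. Each bed is charged for half of each
    surrounding alley, which adds one full alley width to each dimension."""
    g_per_sq_m = rate_kg_per_ha / 10.0
    area = (bed_len_m + alley_wid_m) * (bed_wid_m + alley_wid_m)
    return g_per_sq_m * area

# Akbar's plot: 300 kg/ha of 12-24-12, 1 x 10 m beds, 60 cm alleys
print(bed_dose_grams(300, 10, 1, 0.6))   # about 508.8 g per bed (manual rounds to 510 g)
```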
PER PLANT (Bed-and-Alley System)
In this case, the easiest way to calculate the upwardly-adjusted rate is to count the number of plants on a bed and then divide that into the amount of fertilizer needed per bed as we did for Akbar's
plot above.
PROBLEM 11: Suppose Akbar is planning to transplant cabbage on the beds above. He'll run 3 rows down the length of each bed with 40 cm between rows and 40 cm between plants in the rows. How much
12-24-12 should each cabbage transplant receive if he plans to use the half-circle method and the same rate per hectare (300 kg)?
STEP 1: Find out how many cabbage plants will fit on each bed:
25 plants will fit in each row (24 spaces with 40 cm between them, with the first and last plant being 20 cm from the bed's end). 75 total plants/bed. (See Figure 9-3.)
FIGURE 9-3: A 1 x 10 m bed can accommodate 75 cabbage plants spaced 40 cm x 40 cm.
STEP 2: Find the dosage per plant:
In the above problem, we determined that Akbar needs 510 g of 12-24-12 per bed.
510 g ÷ 75 cabbage plants = 6.8 g of 12-24-12 per plant
Now, let's compare this dosage to that obtained from using open-field system math calculations as in Problems 7 and 8 a few pages back:
300 kg/ha = 30 g/sq. m
0.4 m (40 cm) X 0.4 m = 0.16 sq. m of space belongs to each plant
0.16 x 30 g/sq. m = 4.8 grams (too little)
If Akbar applied 4.8 g per plant, each bed would receive only 360 g (instead of 510 g), which would work out to about 212 kg/ha instead of the recommended 300 kg/ha. (If each bed occupies 17 m2
(including alleyway area), there would be about 588 beds in a hectare; 588 x 360 g = 212 kg.)
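Dividing the per-bed amount by the plant count is easy to script as well. This sketch reuses the manual's rounded 510 g per-bed figure from Problem 10:

```python
rows_per_bed = 3
plants_per_row = 25
grams_per_bed = 510                                # from Problem 10 (rounded value)

plants_per_bed = rows_per_bed * plants_per_row     # 75 cabbage plants (Figure 9-3)
dose_per_plant = grams_per_bed / plants_per_bed
print(round(dose_per_plant, 1))                    # 6.8 g of 12-24-12 per plant
```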
AMOUNT PER METER OF ROW LENGTH (Bed-and-Alley System)
In this case, the simplest method is to find the amount of fertilizer needed per bed as in Problem 10 and divide this by the number of rows per bed.
PROBLEM 12: Suppose Akbar decides to plant leaf lettuce on the 10 beds in Problem 10, in rows 20 cm apart running the short way (i.e. 1 meter long rows). Using the same fertilizer rate (300 kg/ha of 12-24-12), how much should be applied per meter of row if the band method is used?
STEP 1: Find out how many rows will fit on each 1 x 10 m bed:
50 rows with 49 row spaces each of 20 cm will fit on a 1 x 10 m bed. Each of the 2 end rows will be 10 cm in from the bed's edge.
STEP 2: Determine the dosage per meter of row (i.e. one row in this case):
From Problem 10, we know that 510 grams are needed per bed, so:
510 grams ÷ 50 rows = 10.2 g of 12-24-12 per one-meter row
Again, let's compare this intensive system dosage with that obtained by using open-field calculations as in Problem 9:
300 kg/ha of 12-24-12 = 30 g/sq. meter
0.2 m (20 cm) X 1 meter = 0.2 sq. meters space belonging to each meter-long row
0.2 sq. meters x 30 g/sq. m = 6 g of 12-24-12 per meter of row (too low)
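Scripting the intensive and open-field figures side by side makes the shortfall plain. This sketch simply restates the numbers from Problems 10 and 12:

```python
grams_per_bed = 510          # from Problem 10
rows_per_bed = 50            # 1 m rows spaced 20 cm apart on a 1 x 10 m bed

intensive = grams_per_bed / rows_per_bed    # correct bed-and-alley dose per row
open_field = 0.2 * 1.0 * 30                 # 0.2 sq. m per row x 30 g/sq. m

print(intensive)     # 10.2 g of 12-24-12 per meter-long row
print(open_field)    # 6.0 g, too low for intensive beds
```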
TABLE 9-6
│ │ Grams of Fertilizer Equal to: │ │
│ Fertilizer │ 100 cc(ml) │ 1 Level Tablespoonful (15 cc) │
│ Ammonium sulfate │ 108-120 g │ 16-18 g │
│ Ammonium nitrate │ 85 g │ 13 g │
│ (prilled) │ │ │
│ Ammonium nitrate │ 100 g │ 15 g │
│ (granulated) │ │ │
│ Urea │ 75-79 g │ 11-12 g │
│ 16-20-0 │ 98-104 g │ 15 g │
│ 18-46-0 │ 93-108 g │ 14-16 g │
│ Potassium chloride │ 100-120 g │ 15-18 g │
│ (0-0-60) │ │ │
│ Single superphosphate │ 109-11, g │ 16-18 g │
│ Triple superphosphate │ 100-112 g │ 15-17 g │
│ Most other NP and │ 93-110 g │ 14-16.5 g │
│ NPK fertilizers │ │ │
As shown by the above dosage problems, the amount of chemical fertilizer needed per plant or per meter of row is surprisingly small, usually ranging from 5-30 grams. To assure accuracy and
cost-effectiveness, farmers should not attempt to estimate such small amounts. However, since few farmers or gardeners have easy access to accurate scales, it's very helpful to convert the
fertilizer dosage from weight to volume. This doesn't mean simply converting grams to cubic centimeters, either. The dosage should be given in terms of a commonly available volume measure such as a:
• Juice can
• Tuna fish can
• Bottle cap lid
• Match box
• Spoon size commonly used in the area
This can be done by using a gram scale (check the post office or a pharmacy) to measure the densities of the common fertilizers available and comparing them to water (1 gram = 1 cc or 1 ml). Then you
can measure the volume of commonly available containers like those above and calculate how many grams of fertilizer they hold.
Fertilizers vary a lot in their density, depending on type, brand, and moisture content. If no scales are available, use Table 9-6.
Here's a practice problem for converting fertilizer weight to volume:
PROBLEM 13: How many grams of urea would a 120 cc juice can hold?
SOLUTION: From Table 9-6, 100 cc of urea weighs 75-79 grams.
(120 cc / 100 cc) X 75-79 g = 90-95 g of urea in one juice can
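Keeping the Table 9-6 densities in a small lookup table turns Problem 13 into a one-line conversion. This is a sketch; only a few fertilizers are shown, with the low and high ends of each density range:

```python
# grams per 100 cc, low and high ends of the ranges in Table 9-6
DENSITY_G_PER_100CC = {
    "urea": (75, 79),
    "ammonium sulfate": (108, 120),
    "potassium chloride": (100, 120),
}

def grams_in_container(fertilizer, container_cc):
    """Return the (low, high) grams of fertilizer a container of the given
    volume will hold, based on the Table 9-6 density range."""
    lo, hi = DENSITY_G_PER_100CC[fertilizer]
    return (container_cc / 100.0) * lo, (container_cc / 100.0) * hi

print(grams_in_container("urea", 120))   # about 90 to 95 g in a 120 cc juice can
```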
Solution: Math Puzzle: The Seven Bridges of Königsberg
The Swiss mathematician Leonhard Euler became interested in the Königsberg bridge problem. In 1736 he published a paper showing that it was not possible to find a path that would cross each bridge
just once without missing any of them. Here’s why:
Let’s label each of the landmasses with a red uppercase letter and each of the bridges with a blue lowercase letter.
We can regard each of the landmasses as a point and each of the bridges as a line. In this way the map is equivalent to the diagram below:
We call a diagram like this a network. Each of the points A, B, C, and D is called a vertex. (The plural of vertex is vertices.) Each of the lines, a, b, c, d, e, f, and g is called an arc. We refer
to the number of arcs that meet at a vertex as the degree of that vertex. The degree of vertex A is 5. The degree of each of the other three vertices, B, C, and D, is 3.
Solving the Königsberg bridge problem is equivalent to being able to draw the network pictured without lifting your pencil off the paper and without retracing any arc. We call this traveling the network.
Euler showed that a network could not be traveled if it has more than two vertices with degrees that are odd numbers. The network representing the Königsberg bridge problem has four odd vertices.
To help understand how this works, look at these two networks:
The one on the left has four vertices, each with a degree of 2. The pentagonal network has five vertices of degree 2. In both cases all the vertices are of even degree. Each of these networks can be
traveled starting at any vertex and ending at that same vertex.
Now look at this network pictured to the left:
Vertices A and B are each of degree 3, which is odd. The other four vertices are of degree 2, which is even. This network can be traveled via several different paths, but the starting point must be
A or B and the ending point must be the other odd vertex. Try it.
Here’s another network with two odd vertices.
It can also be traveled as long as the end points of the trip are C and D, the odd vertices.
Here’s a network that cannot be traveled because it has four odd vertices.
E and H are of degree 1 each. F and G are of degree 3 each.
Here’s another network that cannot be traveled.
Why does this rule hold true? One way to think about it is to realize that a vertex of even degree can be entered and left during the journey. A vertex of degree 2 can be entered and left once. If
it is of degree 4, there will be two trips through it. But if a vertex is of odd degree, the last trip in must be the end, since there’s no arc left to depart on. For example, if a vertex is of
degree 7, there will be three complete trips through it, accounting for six of the arcs. But the last trip in on the seventh arc leaves no exit.
One could also start on a vertex of odd degree. Then all subsequent trips through the vertex would use a pair of arcs. In general, when a network has two odd vertices, they must be the beginning and
end points if the network is to be successfully traveled.
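Euler's rule is easy to check with a short program. Here is a sketch in Python, with the arcs entered using the letters from the diagrams above:

```python
from collections import Counter

def can_be_traveled(arcs):
    """Euler's rule: a connected network can be traveled (each arc crossed
    exactly once) if and only if it has zero or two vertices of odd degree."""
    degree = Counter()
    for u, v in arcs:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven Königsberg bridges a through g, joining landmasses A, B, C, D
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
print(can_be_traveled(konigsberg))   # False: all four vertices have odd degree
```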
Now Try This
Königsberg is now known as Kaliningrad. There are still seven bridges. Some of the original bridges remain, but others are gone and new bridges have been built. Here’s the current arrangement:
Can this new network be traveled?
The Curious Science of Counting a Crowd
Digital Design & Imaging Service, Inc.
On June 4, a huge crowd gathered in Hong Kong
for a vigil to commemorate the 22nd anniversary of the Tiananmen Square massacre in Beijing. But just how huge? In some stories 77,000 people showed up. Another story, though, listed the attendance
as nearly double that: 150,000.
There's a reason for the disparity. The first figure 77,000 is a police estimate. The second is from the event's coordinators, who probably had some motivation to pad their numbers. To find out which
crowd size was correct, two professors Paul Yip at the University of Hong Kong and Ray Watson at Melbourne University ran the numbers. To fit 150,000 people into that space, they'd have to cram
together at about one person per 2.7 square feet (four per square meter), so that estimate is unrealistic. That would be "mosh-pit density," the researchers write in
a new journal paper on crowd estimation techniques.
This story of competing head counts is not uncommon. Estimating large numbers is difficult even with the best of intention. If you count the number of jellybeans in a jar three times, you'll probably
have three different numbers, because people simply cannot count very large numbers without some error. Now, imagine trying to count a shifting mass of heads, some stooping to tie shoes, some sharing
the same umbrella, some arriving late or leaving early. Plus, this is one field in which good intentions are rare. Crowd-size estimation is a murky science, positioned at the intersection of
statistical precision and political sleight-of-hand, and plenty of people are motivated to either exaggerate or low-ball an event's attendance.
"Almost everyone who has tried to make a crowd estimate has a vested interest in what the outcome of the estimate is," Charles Seife says. Seife is a journalism professor at New York University who
writes about math and physics. [Disclosure: I had a class with Seife at NYU.] His newest book
tackles the ways that people try to fool others (and sometimes fool themselves) with numbers. "Whenever you see a crowd estimate," he says, "you have to wonder where it's coming from." Nevertheless,
Seife says, if you do your math carefully, it is possible to count a large crowd to within a couple of tens of thousands. And researchers like Yip and Watson are now applying new strategies to find
out whether it is indeed possible to get a more accurate count of a teeming mass of humanity.
Crowd-Counting 101
Herbert Jacobs, a journalism professor at the University of California, Berkeley, in the 1960s, is credited with modernizing crowd-counting techniques. From his office window, Jacobs could see
students gathered on a plaza below protesting the Vietnam War. The plaza's concrete was poured in a grid, so Jacobs counted students in a few squares to get an average of students per square, then
multiplied by the total squares. He derived a basic density rule that says a light crowd has one person per 10 square feet, a dense crowd has one person per 4.5 square feet, and Yip and Watson's
mosh-pit density would have one person per 2.5 square feet.
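Jacobs's area-times-density method fits in a few lines of code. The sample numbers below are made up purely for illustration:

```python
def jacobs_estimate(total_area_sqft, sample_counts, sample_area_sqft):
    """Crowd size = total area x average density from sampled grid squares."""
    density = sum(sample_counts) / (len(sample_counts) * sample_area_sqft)
    return total_area_sqft * density

# e.g. a 90,000 sq. ft plaza; heads counted in five sampled 400 sq. ft squares
print(round(jacobs_estimate(90_000, [95, 80, 110, 70, 100], 400)))
```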
Fifty years after Jacobs, the tools for counting crowds have improved but the principle is the same: area times density. Steve Doig, a journalism professor at Arizona State University, used a photo
from a GeoEye-1 military satellite to count people at President Barack Obama's inauguration speech in 2009 (he estimated 800,000 people). The New York Police Department counts the people in the
fenced crowd-control barricades that it places, then multiplies by the number of barricades. Yip and Watson applied the basic formula to the candlelight vigil in Hong Kong.
But a simple area times density calculation has its limits. Crowds are not uniform they clump in some places and spread out in others. To account for this, estimation methods are becoming more
sophisticated. Companies such as Digital Design and Imaging Service are now adapting the formula for multiple densities. The firm has counted attendance at major events on the National Mall in recent
years and claims it can count the crowd to within 10 percent. So CBS hired DDIS to count heads at Glenn Beck's rally at the Lincoln Memorial in August 2010.
To get its figure for the Beck rally, the design firm first cased the venue, created 3D maps marked with probable high-density spots and cross-referenced those with historical photos of similar
events. The result was a prediction of how people would congregate. "Our goal is to find out where we anticipate the crowds will gather. If it's in the winter, we look for the wind breaks, and if
it's in the heat of summer, we look for the shade," Curt Westergard, the company's president, says. Crowds press toward the stage, but also toward the Jumbotron screens, and they shy from
loudspeakers, he says.
Knowing what to expect, Westergard chose his observation point and launched a tethered balloon at the height of the rally. The balloon lifted a suite of remote-control cameras that, within seconds,
had captured 360-degree crowd shots at various heights: 200 feet, 400 feet and 800 feet. The different heights allowed for shots of people under trees and in hard-to-see places. He laid a composite
of the images over the 3D model and counted heads. His team counted heads in grid squares that represented different densities. Then, for each density (such as lightly populated or very heavily
populated) they multiplied the number of people per square by the number of squares of that category, finally arriving at an estimate of 87,000 people for the Beck rally.
"Unquestionably, that's what it was. As a benefit of the doubt, we gave it a 10 percent rate of error," Westergard says. "We go in pixel by pixel and put a dot on every head that we see. If a lady is
there holding a baby, we put two dots there. We counted this thing three times and got an outside guy [Steve Doig] to count it, and he got back to us with a number that was similar to ours, which was
80,000." But, unsurprisingly, Westergard's certainty didn't satisfy everyone: News reports about the rally reported a smattering of different numbers. Rep. Michele Bachmann announced from the stage that there must be a million people present, while NBC counted 300,000, and Glenn Beck himself estimated the crowd at 300,000 to 650,000. In a summer of competing rallies, such as Jon Stewart's, there was no shortage of swipes at DDIS's methodology and its relatively low number.
and its relatively low number. (For its part, the National Park Service tries to stay above the fray by not estimating crowd sizes.
It stopped providing head counts
after the organizers of the 1995 Million Man March accused the service of underestimating their crowd.)
The Future of Head Counts
Photos are good proxies for static crowds, but to count a crowd on the move, Yip and Watson say you've got to get down into the mass. That's the focus of their newest research, and they've developed two methods so far: a strategy that uses one crowd inspection point, and another that uses two.
In the one-point method, counters positioned near the focal point of a march or a parade tally the number of people who pass their station in a given time interval. But it's not exactly ideal: Some
people may have ducked out before reaching the station, and others may leave soon after passing it. To correct for that, you'd have to do a phone survey of the marchers to find out how long they stayed with the
march, and that introduces a whole new set of survey-bias problems. A faster, more accurate method is to set up two counting stations, suitably spread out. The counters would tally people passing and
also randomly survey them to ask if they also passed the other station (or planned to). Any more than two inspection points would increase the cost, but not appreciably increase the accuracy, the
researchers write in their study.
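The two-station idea is closely related to classical capture-recapture estimation. The sketch below illustrates that general principle with a Lincoln-Petersen style formula; it is our own illustration, not the exact estimator from Yip and Watson's paper, and the numbers are invented:

```python
def two_station_estimate(n1, n2, overlap_fraction):
    """Capture-recapture style crowd estimate from two counting stations.

    n1, n2:            head counts at stations 1 and 2
    overlap_fraction:  surveyed share of station-2 passers who also passed
                       station 1 (0 < overlap_fraction <= 1)
    """
    m = n2 * overlap_fraction    # estimated number counted at both stations
    return n1 * n2 / m           # Lincoln-Petersen: N is roughly n1 * n2 / m

# e.g. 40,000 at station 1, 35,000 at station 2, 70% overlap in the survey
print(round(two_station_estimate(40_000, 35_000, 0.7)))
```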
At this point, it seems like getting an estimation so exact might be more trouble than it's worth. But technological help may be on the way, this time from the Web. Westergard has plans to
crowd-source head-counting aerial photos to Amazon's Mechanical Turk. The Turk is a network of people around the world who do tasks online for a fee. Westergard can send a photo to 20 people, quickly
receive 20 different head counts, throw out the outliers and average the rest.
If this all sounds like an academic exercise, remember that accurate crowd counting can have practical applications such as preparing emergency responders. If a fire, terrorist attack, stage collapse
or other calamity happened at a large event, Westergard figures that within 20 minutes he could provide first responders with the location of the threat and rough estimates of the number of people
who might need treatment.
And, as Yip said in a statement about his study, a good way to count crowds could cut through the politically motivated stats we put up with now. "In the absence of any accurate estimation methods,
the public are left with a view of the truth colored by the beliefs of the people making the estimates. The public would be better served by estimates less open to political bias."
how to choose predictive variables in my time series regression model
I have to forecast daily sales for my company, and I have a list of 300+ potential variables that could be predictive of daily sales. How do I decide which ones to include in my time series regression model? Do I have to go through steps like prewhitening and the cross-correlation function for each of them? How do I check multicollinearity among these 300+ explanatory variables? Thanks.
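For the multicollinearity part of the question, one common screen is the variance inflation factor (VIF); predictors with a VIF above roughly 10 are often flagged as collinear. A sketch with numpy, where the synthetic data and threshold are just for illustration:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_samples x n_features).
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j on
    all the other columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / max(1.0 - r2, 1e-12)   # guard against a perfect fit
    return out

# two nearly identical predictors and one independent one
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = rng.normal(size=200)
c = a + 0.01 * rng.normal(size=200)     # almost a copy of `a`
print(np.round(vif(np.column_stack([a, b, c])), 1))
```

For narrowing 300+ candidates against the sales series itself, prewhitening followed by cross-correlation (as the question mentions) or a regularized regression such as the lasso are common screening routes.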
Coordinating transmit power and carrier phase for wireless networks with multi-packet reception capability
Driven by advances in signal processing and multiuser detection (MUD) technologies, it has become possible for a wireless node to simultaneously receive multiple signals from other transmitters. In
order to take full advantage of MUD in multi-packet reception (MPR) capable wireless networks, it is highly desirable to make the compound signals from multiple transmitters more separable on its
constellation at the receiver by coordinating both the transmit power level and carrier phase offsets of the transmitters. In this article, we propose a feedback-based transmit power and carrier
phase adjustment scheme that estimates the symbol energy and the carrier phase offset for each transmitter’s received signal, computes the optimal received power level and carrier phase shift to
maximize the minimum Euclidean distance between the constellation points, and finally feeds the optimal transmit power level and phase shift information back to the transmitters. We then evaluate the
performance of the proposed transmit power and carrier phase adjustment scheme and subsequently show that the proposed scheme significantly reduces the error probability in a multiuser communication
system having MPR capability.
In conventional wireless networks, each receiver is only capable of decoding signals from one transmitter at a time, which is referred to as single-user detection (SUD). In SUD, when a mixed signal from
multiple transmitters is sensed, the receiver typically discards the signal and treats it as a collision. However, signal processing technology has rapidly evolved, and compound signals from multiple
transmitters have become decodable at the receiver side [1,2]. To effectively decode multiple signals in a multiple access environment, multiuser detection (MUD) can be used. In [2], the optimum
multiuser detector has a computational complexity that increases exponentially with the number of active users. Therefore, several suboptimum detectors have been proposed in order to achieve a
performance comparable with that of the optimum detector while maintaining a low complexity. The decorrelating detector [3], the decision feedback detector [4], the minimum mean squared error (MMSE)
[5], and multistage detectors [6] are examples of suboptimum multiuser detectors. Some of these multiuser detectors are also suitable for blind adaptive implementations, in which information about
the interfering users (such as their powers and signature sequences) is not needed for the construction of the receiver filter of a desired user. A blind adaptive implementation of an MMSE multiuser
detector is given in [7], and blind adaptive decorrelating detector implementations are shown in [8,9].
Since MUD technology permits simultaneous packet reception from multiple sources, compound signals, which were previously treated as a collision event in conventional wireless networks, are now
preferred for their ability to enhance the achievable throughput performance [10-16]. However, how to take advantage of the MUD technique and how to adjust its tunable parameters in designing the
medium access control (MAC) for multi-packet reception (MPR) capable wireless networks and maximize the achievable throughput have yet to be sufficiently studied.
Considering the error-prone nature of the wireless medium, the symbol separation and decoding of a mixed signal are primarily influenced by channel conditions and characteristics. To this end,
several studies have attempted to overcome channel effects by means of carrier phase error correction [17-20]. Steendam et al. [17] investigated the effects of carrier phase offsets on a low-density
parity-check (LDPC) coded system, and then proposed a maximum likelihood (ML)-based carrier phase synchronization algorithm that exploits the posterior probabilities of the data symbols. Similarly,
Zhang et al. [18] proposed an a priori probability aided carrier phase estimation for turbo decoding. They showed that the physical (PHY) layer technique provides a reliable carrier phase estimation
that approaches the Cramer-Rao bounds at a very low signal-to-noise ratio (SNR). Harshan et al. [19,20] then identified the problem of maximizing the capacity region between two users for a Gaussian
multiple access channel (GMAC). By performing a rotation on one of the sets in such a way that the error probability is minimized, the capacity gain can be maximized. Compared to Harshan’s study, our
study is applicable to more general and complex conditions.
Even though we consider a feedback-based adjustment using a centralized control to coordinate both the transmit power and carrier phase of the transmitters, a distributed method for achieving the
phase coherence of transmitters was also proposed. In [21], the phase alignment for distributed transmit beamforming was independently performed at each transmitter using minimal feedback from the
receiver. Through feedback based on the SNR from the receiver, each transmitter decides whether their applied random phase is kept or not, and this iterative process is repeated until all
transmitters converge to phase coherence. In [10], a carrier phase adjustment scheme that attempts to maximize the minimum Euclidean distance among the constellation points was proposed for multiple
access networks with multi-packet reception capability. Because this scheme simply assumes that the received power levels from multiple transmitters are the same at the receiver, it adjusts the
carrier phase offsets of transmitters, but does not coordinate the transmit power levels of transmitters.
In this article, we propose a MAC/PHY cross-layer approach for enhancing the separation and decoding performance of compound signals on an additive white Gaussian noise (AWGN) channel with phase
noise effects. Specifically, this article focuses on more complex and realistic scenarios than our previous study in [10]. In this article, we consider the coordination of not only the transmitters’
carrier phase offsets but also their transmit power levels. In this system, a receiver with MPR capability performs multiuser detection and then estimates the symbol energy and the carrier phase
offset for each transmitter’s signal from the compound signals. Next, the receiver piggybacks the optimal transmit power level and carrier phase shift, which is the difference between the estimated
carrier phase offset and an optimal carrier phase offset, to the corresponding transmitters so that they can adjust their transmit power and carrier phase offset to the optimal value when
transmitting signals. To determine the optimal transmit power level and carrier phase shift, we formulate an optimization problem in order to maximize the minimum Euclidean distance between the
constellation coordinates of the compound signals. We subsequently evaluate the performance of the proposed transmit power level and carrier phase adjustment scheme and compare it to that of the no
adjustment case for QPSK and 8PSK with 2–4 transmitters. The simulation results show that the proposed scheme significantly reduces the error probability for all cases investigated in our simulation
The remainder of this article is organized as follows. Section 2. describes the system model on which the proposed scheme is based, and the motivation that initiated this study. Section 3. then
explains the mathematical basis and detailed procedures of the proposed scheme. The performance evaluation is carried out in Section 4., and we finally conclude this article in Section 5.
System model and motivation
We consider a simple MAC protocol for an uplink single-cell system that is coordinated by a base station (BS) having MPR capability. Because all transmitters in a cell are associated with and
continuously communicate with the corresponding BS in the cell, symbol level synchronization at the BS is assumed to be possible in this study. In this multiple access communication system, all
transmitters that want to send a data frame are required to transmit a request-to-send (RTS) frame to their intended receiver, which is then responsible for coordinating the packet transmissions
among the competing transmitters. On receiving multiple RTS frames, the receiver broadcasts a clear-to-send (CTS) frame, which includes the set of transmitters that are permitted to transmit. We will
use this CTS frame to inform the transmitters of the feedback information (optimal transmit power levels and carrier phase shifts), which are calculated by the proposed transmit power and carrier
phase adjustment scheme^a.
Figure 1 shows the block diagram for the proposed transmit power and carrier phase adjustment scheme. Each transmitter sends a signal to its receiver, and due to the AWGN, the compound signals y from
multiple transmitters at the receiver is given by

y = Σ_{i=1}^{N} g[i] √P[i] e^{jθ[i]} x[i] + n,    (1)

where N is the number of transmitters, g[i] is the channel gain of the ith transmitter, P[i] is the transmit power level of the ith transmitter, θ[i] is the carrier phase of the ith transmitter, x[i] is the sequence of independent and identically distributed (i.i.d.) equiprobable data symbols of the ith transmitter, and n is the complex-valued AWGN channel noise.
Figure 1. Structure of the proposed scheme. Optimal transmit power level and carrier phase adjustment scheme for a multiple access communication system with multi-packet reception capability.
In the figure, the signals transmitted from multiple transmitters pass through the channel with AWGN and fading, in which the signal waveforms are changed by the channel gain g[i] and AWGN noise n.
When the compound signals are received, the receiver can estimate the received signal power p[i] and carrier phase θ̂[i] of each transmitter. Using the values of received power and estimated carrier phase, the proposed scheme then solves the optimization problem to obtain the optimal received power p*[i] and carrier phase θ*[i]. To reduce the feedback burden, the receiver broadcasts the optimal transmit power (the optimal received power divided by the channel gain, p*[i]/g[i]) and the carrier phase shift (the difference between the optimal and estimated carrier phases, θ*[i] - θ̂[i]) to the multiple transmitters. In this way, each transmitter can apply the fed-back values to its next transmission, and the receiver can achieve higher performance in the communication system.
The receiver in this communication system should be able to decode the compound signals from multiple transmitters in order to realize the MPR capability. Many receivers based on MUD techniques
exist, which are currently capable of decoding multiple signals and maximizing the signal-to-interference-plus-noise ratio (SINR) of each signal. These MUD techniques make it possible to decode the
compound signals from multiple transmitters at the receiver side [1,22].
For the received signal in (1), the constellation of the received signal contains many densely distributed points corresponding to the signals received from the multiple transmitters. If these constellation points lie very close together or overlap, it is difficult for the receiver to correctly separate and identify each signal within the compound signal. For example, Figure 2 shows the two-user constellation for 8PSK modulation when p[i]=1 and θ[i]=0 for i=1,2. In this figure, only 33 of the 64 constellation points are visible, since the other 31 points overlap and cancel out. As a result, the receiver cannot correctly separate the individual signals from the compound signal, and the overlapping constellation points lead to decoding errors.
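The overlap count quoted above (33 visible out of 64) can be checked directly. This short sketch (illustrative; a unit-energy 8PSK alphabet is assumed) forms all 64 pairwise sums for p[1]=p[2]=1 and θ[1]=θ[2]=0 and counts the distinct points:

```python
import numpy as np

M = 8
s = np.exp(2j * np.pi * np.arange(M) / M)      # unit-energy 8PSK alphabet (assumed)
joint = (s[:, None] + s[None, :]).ravel()      # all 64 two-user superpositions
# Round away floating-point noise before deduplicating coincident points.
n_distinct = np.unique(np.round(joint, 9)).size
n_overlapped = joint.size - n_distinct
```

This reproduces the count in the text: 33 distinct points and 31 hidden by overlap (swapped symbol pairs land on the same point, and all four antipodal pairs sum to the origin).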
Figure 2. Motivation of carrier phase adjustment. Two-user constellations for 8PSK modulation signal sets (31 constellation points are overlapped with other points).
In the communication environment described above, the constellation points produced by the signal sets of the multiple transmitters fall in the same signal space; consequently, the minimum Euclidean distance between constellation points decreases, or many points cancel out by overlapping. The overlapping constellation points are incorrectly demapped in the demapper, and the transmitted signals are therefore decoded in error. In other words, the network capacity of the multiuser communication system is not maximized, since the error probability of the multiple signals increases as the minimum Euclidean distance shrinks. To overcome this problem, the Euclidean distances between constellation points at the receiver side should be kept as large as possible by adjusting the carrier phase offsets at the transmitter side. Specifically, the minimum Euclidean distance between the constellation points should be maximized in order to decrease the error probability of multiple-signal decoding in MPR communication systems.
In addition, the RF signals of the transmitters propagate through the wireless medium with different channel gains. Thus, even if all transmitters use the same transmit power level, the received signal power levels differ because of the variations in channel gain. As an example, suppose there are two transmitters, s[1] and s[2], with the highest and lowest received signal power levels (p[1] and p[2]), respectively. When the received signal power level of s[2] (p[2]=0.3) is much smaller than that of s[1] (p[1]=1), the minimum Euclidean distance is determined by the distance between s[2]'s own constellation points, as shown in Figure 3a. In contrast, when p[2] is set to 0.7, as depicted in Figure 3b, the minimum Euclidean distance is determined by p[1] of s[1] as well as p[2] of s[2]. This implies that the transmit power levels of simultaneously transmitting users must be properly coordinated to ensure performance improvements in MPR communication systems.
Figure 3. Motivation of power control. Two-user QPSK signal constellations with (a)p[2]=0.3 and (b)p[2]=0.7 (p[1]=1 and θ[i]=0, for i=1,2).
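The effect illustrated in Figure 3 can be quantified by computing the minimum Euclidean distance of the two-user QPSK joint constellation for several values of p[2]. In the sketch below (illustrative only), the levels are treated as amplitude scale factors applied to unit-energy QPSK symbols, which is an assumption about the paper's normalization:

```python
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy QPSK

def joint_dmin(a1, a2, theta2):
    """Minimum pairwise distance of the 16-point two-user joint constellation."""
    pts = (a1 * qpsk[:, None] + a2 * np.exp(1j * theta2) * qpsk[None, :]).ravel()
    d = np.abs(pts[:, None] - pts[None, :])
    return d[d > 1e-9].min()               # ignore zero self-distances

dmins = {p2: joint_dmin(1.0, p2, 0.0) for p2 in (0.3, 0.5, 0.7)}
```

With θ[2]=0, both p[2]=0.3 (limited by the second user's own points) and p[2]=0.7 (limited by the inter-group spacing) give the same d[min], while p[2]=0.5 balances the two constraints and does better; this is consistent with the need to coordinate the power levels.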
Proposed transmit power and carrier phase adjustment scheme
In this section, we propose a feedback-based transmit power and carrier phase adjustment scheme that controls the transmit power level and carrier phase offset in order to fully exploit the MPR
channel capacity. The proposed transmit power and carrier phase adjustment scheme has two steps. The first is the carrier phase estimation, which estimates the carrier phase offset incurred by
channel noises, such as AWGN and phase noise. The second is the optimal transmit power and carrier phase adjustment, which computes the optimal transmit power level and carrier phase offset, and
feeds the information—which includes the optimal transmit power levels and carrier phase shifts—back to the transmitters as described in Figure 1. The optimal transmit power levels and carrier phase
offsets are obtained for a given modulation scheme based on the placement of constellation points that maximize the minimum Euclidean distance between the points.
Feedback scheme
As briefly addressed in Section 2. and depicted in Figure 1, transmitters waiting to send packets are required to transmit an RTS frame to the receiver. When the receiver detects and decodes the RTS frame signals received from the multiple transmitters, it estimates the received signal power levels and the carrier phase offsets incurred while the compound signals propagate over the wireless medium. Under the assumption that the training sequences of the RTS frames are mutually orthogonal pseudo-noise (PN) sequences, the receiver can estimate the received signal power levels and the carrier phase offsets without interference among the multiple transmitters' signals. The receiver then computes the optimal received power levels and carrier phase shifts that reduce the error probability, and returns the feedback information in a CTS frame containing the optimal transmit power levels and carrier phase shifts. The optimal transmit power can be calculated from the optimal received power using the known channel gain, and the carrier phase shift is the difference between the optimal carrier phase offset and the estimated carrier phase offset of the received signal. This feedback-based transmit power and carrier phase adjustment makes it possible for the receiver to separate the constellations of the multiple signals, significantly reducing the error probability at the receiver side.
In other words, the receiver estimates the received signal power level and carrier phase distortion through the RTS frame, and then broadcasts a CTS frame that includes the set of transmitters that
are permitted to transmit and the transmit power level and phase shift information, thereby achieving low transmission error probability and high reliability.
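The two feedback quantities described above reduce to simple per-transmitter arithmetic. The helper below is hypothetical (the names and the power-gain convention for g are assumptions, not from the paper): the transmit power is the optimal received power divided by the channel gain, and the phase shift is the wrapped difference between the optimal and estimated carrier phases.

```python
import math

def cts_feedback(p_opt, theta_opt, g_est, theta_est):
    """Per-transmitter feedback values (illustrative helper, not from the paper).

    p_opt, theta_opt: optimal received power and carrier phase for this transmitter
    g_est:            estimated channel gain (assumed to be a power gain)
    theta_est:        estimated carrier phase of the received signal
    """
    p_tx = p_opt / g_est                                                 # transmit power level
    shift = (theta_opt - theta_est + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return p_tx, shift

# Example with arbitrary numbers: optimal phase pi/12, estimated phase pi/3.
p_tx, shift = cts_feedback(p_opt=0.5176, theta_opt=math.pi / 12,
                           g_est=0.8, theta_est=math.pi / 3)
```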
Transmit power level and carrier phase offset for multiple transmitters
We propose an optimization-based approach for deriving the optimal transmit power levels and carrier phase offsets for multiple transmitters. As noted in Section 2., the minimum Euclidean distance between the multiple constellation points has a critical effect on the decoding performance of a multiuser wireless communication system. To minimize the bit error rate in the iterative decoding process, the minimum Euclidean distance between the received constellation points should be maximized so that the receiver can successfully separate each transmitter's signal from the superimposed signal. In this section, we derive the optimal transmit power levels and carrier phase offsets for multiple transmitters so that the resulting information can be used for transmit power and carrier phase adjustment at each transmitter.
Two-user Case for QPSK Signal Set
In order to determine the optimal transmit power levels and carrier phase offsets, we first derive the two-user QPSK case analytically in closed form. Figure 4 shows the two-user constellation for the QPSK modulation signal set. Here, we assume that the received power (p[1]) and the carrier phase (θ[1]) of the first transmitter are 1 and 0, respectively. Let p[2] and θ[2] denote the received power and carrier phase of the second transmitter, respectively. Note that the range of θ[2] can be limited to between 0 and π/2 in the QPSK modulation case.
Figure 4. Analytic derivation for two-user case. Two-user constellation for the QPSK signal set.
In Figure 4, the adjacent constellation points (A, B, C, and D) are determined and depicted, and the coordinates of these constellation points are given in (2).
Next, let l, m, and n denote the distances from D to A, B, and C, respectively; these distances are expressed in (3).
Then, the minimum Euclidean distance (d[min]) of the two-user constellation for the QPSK modulation signal set is d[min] = min{l, m, n}.
It can be shown that the minimum Euclidean distance d[min] is maximized when l, m, and n are equal. That is, the received power (p[2]) and carrier phase (θ[2]) of the second transmitter that maximize the minimum Euclidean distance are obtained when l = m = n. Substituting the distances from (2) and (3) into l = m = n yields a pair of simultaneous equations in p[2] and θ[2].
To find p[2] and θ[2], we solve these simultaneous equations. Finally, we obtain the optimal received power level and carrier phase offset as follows:

p[2] = 0.5176, θ[2] = π/12 = 15°. (5)
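This closed-form result can be sanity-checked numerically: at the balance point l = m = n, the joint constellation's minimum distance should equal √2·p[2], the spacing of the second user's own sub-constellation. The sketch below treats the reported level as an amplitude scale factor on unit-energy QPSK symbols, which is an assumption about the normalization:

```python
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy QPSK
p2, th2 = 0.5176, np.pi / 12                                # values from (5)

pts = (qpsk[:, None] + p2 * np.exp(1j * th2) * qpsk[None, :]).ravel()
d = np.abs(pts[:, None] - pts[None, :])
dmin_joint = d[d > 1e-9].min()                              # ignore self-distances
# At the optimum, the minimum distance equals sqrt(2)*p2 (l = m = n).
balanced = bool(np.isclose(dmin_joint, np.sqrt(2) * p2, atol=1e-3))
```

At (0.5176, 15°) the intra-group and inter-group spacings agree to within rounding, giving a minimum distance of about 0.732.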
In the case of more than two transmitters, it becomes more complicated to derive the optimal received power and carrier phase. Therefore, we solve an optimization problem numerically in order to
determine the optimal values for multiple transmitters.
Optimization of transmit power and carrier phase for multiple transmitters
As stated in Section 2., y is the received compound signal from the multiple transmitters for M-PSK modulation and is represented in (1). Let S denote the set of constellation points of the compound signal with N transmitters for M-PSK modulation. Since each of the N transmitters sends one of M possible symbols, S contains M^N constellation points, as briefly explained in Section 2. Illustrative examples of such joint constellations are shown in Figure 5.
Figure 5. Optimized result of two-user 8PSK. Two-user constellation for the 8PSK modulation signal set under the proposed transmit power and carrier phase adjustment scheme.
The optimal received power levels and carrier phase offsets are obtained from the placement of constellation points that maximizes the minimum Euclidean distance between the points. Therefore, we formulate the following optimization problem to determine these values:

maximize (over p, θ)   min { d(r,s) : r, s ∈ S, r ≠ s }
subject to             0 < p[i] ≤ p[max],   i = 1, …, N,

where S is the set of constellation points of the compound signal, d(r,s) is the Euclidean distance between the constellation points r and s, i.e., d(r,s) = ∥r−s∥, p and θ are the sets of received power levels and carrier phase offsets, respectively, N is the number of transmitters that transmitted a signal, and p[max] is the maximum received power level. Note that the above optimization maximizes the minimum Euclidean distance over all pairs of constellation points.
We then numerically solve the optimization for 2–4 transmitters for QPSK and 8PSK modulations. Finding the optimal powers and phases of multiple users is a nonlinear, NP-hard problem. Here, we use the sequential quadratic programming (SQP) method, which is well suited to numerically solving constrained nonlinear optimization problems. The SQP method iteratively solves a quadratic programming (QP) subproblem and updates the estimate using the solution of the QP subproblem at each iteration. The results give the optimal received power level and carrier phase offset of each transmitter (rounded to five decimal places) and are listed in Table 1. The optimal values of all transmitters are normalized by those of the first transmitter, with the received power level and carrier phase offset of the first transmitter set to 1 and 0, respectively. Note that the optimal received power level and carrier phase offset for the two-user QPSK case obtained by optimization are equal to those obtained by the derivation in (5). As an illustrative example, the two-user constellation for the 8PSK modulations is shown in Figure 5. With the obtained values, all 64 constellation points for 8PSK are non-overlapping and the minimum Euclidean distance is maximized, unlike the constellation shown in Figure 2.
Table 1. Optimal received power levels and carrier phase offsets
Numerical evaluation of minimum Euclidean distance
We then analyzed the effects of the received power level and carrier phase offset on the minimum Euclidean distance. Figure 6 shows d[min] for the two-user QPSK and 8PSK signal cases with respect to the received power level and carrier phase offset of the second transmitter. In this case, the signal power level and carrier phase offset of the first transmitter are fixed at 1 and 0, respectively. The second transmitter's signal power level (p[2]) varies from 0 to 1, and its carrier phase offset (θ[2]) varies from 0° to 90°.
Figure 6. Numerical evaluation of d[min]. Minimum Euclidean distance (d[min]) with respect to the received power level and carrier phase offset of the second user for (a) QPSK and (b) 8PSK
Figure 6a shows the value of d[min] for the two-user QPSK signal case, which is symmetric with respect to the point at which the second transmitter's carrier phase offset is 45° (π/4). The maximum value of d[min] is obtained when p[2]=0.5176 and θ[2]=15° (0.2618 rad). As the signal power of the second transmitter varies from 0 to 0.5, d[min] increases with p[2], because the minimum is determined by the Euclidean distance between the constellation points belonging to the second transmitter's own group. Beyond 0.5, d[min] decreases, because constellation points belonging to different groups move closer to each other. Figure 6b shows d[min] for the two-user 8PSK signal case. Similar to the QPSK case, the value of d[min] is symmetric with respect to the point at which the second user's carrier phase offset is 22.5° (π/8). The maximum d[min] in this case is obtained when p[2]=0.5668 and θ[2]≈1.83° (0.0319 rad).
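The symmetry of d[min] about θ[2]=45° in the QPSK case follows from the reflection and 90°-rotation symmetries of the QPSK alphabet, and can be verified numerically (illustrative check; levels treated as amplitudes, an assumption):

```python
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

def joint_dmin(p2, th2):
    pts = (qpsk[:, None] + p2 * np.exp(1j * th2) * qpsk[None, :]).ravel()
    d = np.abs(pts[:, None] - pts[None, :])
    return d[d > 1e-9].min()

# d_min(p2, theta) should equal d_min(p2, 90 deg - theta) for two-user QPSK.
mirror_ok = all(
    np.isclose(joint_dmin(p2, np.deg2rad(t)),
               joint_dmin(p2, np.deg2rad(90.0 - t)), atol=1e-9)
    for p2 in (0.3, 0.5176, 0.8)
    for t in (5.0, 15.0, 30.0)
)
```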
We compare the performance of the proposed scheme in the two-user QPSK case with that of the single-user 16PSK case. Since the number of joint constellation points in the two-user QPSK case equals the number of constellation points in the single-user 16PSK case, the two cases have the same sum rate. Therefore, we analyze the performance of the two cases in terms of the minimum Euclidean distance. In general, d[min] of single-user M-PSK is given by

d[min] = 2 √(E[s]) sin(π/M),

where E[s] is the symbol energy. To simplify the analysis, we assume that the symbol energy is 1. Under this assumption, d[min] of single-user 16PSK is 2 sin(π/16) ≈ 0.3902. The probability of bit error for single-user 16PSK (P[e,16PSK]) can then be approximated by the nearest-neighbor bound

P[e,16PSK] ≈ Q(d[min]/(2σ)) = Q(sin(π/16)/σ),

where σ is the per-dimension noise standard deviation and Q(·) denotes the Gaussian tail function.
In the proposed scheme, we obtain the optimal power level and phase offset (p[1]=1, p[2]=0.5176, θ[1]=0°, and θ[2]=15°) for the two-user QPSK case. For a comparison under the same conditions as the single-user 16PSK case, we scale the sum of the two users' power levels to 1; the power levels of the first and second users then become 0.6589 and 0.3411, respectively. In this case, d[min] of the joint constellation points for the two-user QPSK signal set is about 0.4823. The probability of bit error for two-user QPSK (P[e,2-QPSK]) is then approximately

P[e,2-QPSK] ≈ Q(0.4823/(2σ)).

Under the same noise conditions, P[e,16PSK] is higher than P[e,2-QPSK], since 0.3902 < 0.4823. That is, the proposed scheme in the two-user QPSK case is more efficient than the single-user 16PSK case, even though the two cases have the same number of constellation points and the same sum rate.
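The numbers in this comparison can be reproduced directly (illustrative check; the scaled levels 0.6589 and 0.3411 are treated as amplitude factors, an assumption about the normalization):

```python
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

d16 = 2 * np.sin(np.pi / 16)                      # single-user 16PSK, E_s = 1

a1, a2, th2 = 0.6589, 0.3411, np.pi / 12          # scaled two-user QPSK setup
pts = (a1 * qpsk[:, None] + a2 * np.exp(1j * th2) * qpsk[None, :]).ravel()
d = np.abs(pts[:, None] - pts[None, :])
d2qpsk = d[d > 1e-9].min()                        # joint-constellation minimum distance
```

This gives d16 ≈ 0.3902 and d2qpsk ≈ 0.4823, so the two-user QPSK placement keeps a larger minimum distance at the same sum rate.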
Performance evaluation
We conducted a performance evaluation for the proposed transmit power and carrier phase adjustment scheme through a comparison with an unmodified PSK modulation scheme. We implemented the optimal
transmit power and carrier phase adjustment system for MPR depicted in Figure 1, which included the successive interference cancelation, demapper, decoder, signal power and carrier phase estimator,
and the feedback scheme for the optimal transmit power levels and carrier phase offsets. Two modulation schemes (QPSK and 8PSK) were evaluated, and the number of transmitters was varied from 2 to 4.
Note that the carrier phase was uniformly distributed from 0 to 2π, and the bit error rate (BER) performance was evaluated with respect to the SNR on the AWGN channel.
Figure 7a shows the BERs of the proposed transmit power and carrier phase adjustment scheme in comparison with the no-adjustment case for QPSK modulation, with respect to the SNR and the number of transmitters (i.e., the number of distinct signals compounded into the received signal at the receiver side). The proposed scheme gives lower BER values than unmodified QPSK over the entire SNR range, with a gain of about 5 dB at a BER of 10^−4. Figure 7b then shows the BER performance of the proposed scheme and unmodified 8PSK modulation. As with the QPSK results, the BER of the proposed scheme is much lower than that of unmodified 8PSK in all cases; we obtained an SNR gain of almost 5 dB with two transmitters at a BER of 10^−4.
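A toy Monte Carlo experiment conveys the same trend at the symbol level. The sketch below is illustrative and not the paper's simulation: it measures the joint symbol error rate under ideal joint-ML detection, treats the levels as amplitudes, and uses an arbitrary noise level. It compares the unadjusted case, whose joint constellation contains overlapping points, with the adjusted placement:

```python
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

def joint_ser(a2, th2, noise_std, n_trials=20000):
    """Symbol error rate of the joint pair (x1, x2) under joint-ML detection."""
    grid = (qpsk[:, None] + a2 * np.exp(1j * th2) * qpsk[None, :]).ravel()
    idx = rng.integers(0, grid.size, size=n_trials)          # true joint symbols
    n = noise_std * (rng.standard_normal(n_trials)
                     + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
    y = grid[idx] + n
    det = np.argmin(np.abs(y[:, None] - grid[None, :]), axis=1)
    return float(np.mean(det != idx))

ser_plain = joint_ser(a2=1.0, th2=0.0, noise_std=0.2)        # overlapping points
ser_adjusted = joint_ser(a2=0.5176, th2=np.pi / 12, noise_std=0.2)
```

The unadjusted case has an error floor because coincident joint-constellation points cannot be told apart, while the adjusted placement resolves all 16 points and its error rate stays small.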
Figure 7. BER performance. BER performance for (a) QPSK and (b) 8PSK modulation.
Figure 8 presents a comparison of the BER performance of the proposed scheme and the carrier phase adjustment scheme in [10]. Note that the scheme in [10] adjusts only the carrier phase offsets of the transmitters, without modifying their transmit powers, under the assumption that the received power levels from the multiple transmitters are equal. As shown in Figure 8a,b, the proposed scheme gives lower BER values than the previous scheme over the entire SNR range. These results imply that coordinating both the transmit powers and the carrier phases of multiple transmitters achieves better performance than adjusting the carrier phases alone.
Figure 8. BER performance comparison. BER performance comparison of the proposed scheme with the previous study (Phase Adj.) [10] for (a) QPSK and (b) 8PSK modulation.
These results imply that the proposed scheme effectively adjusts the transmit power levels and carrier phase offsets of transmitters so that the signals from multiple transmitters are well separated
over a wide range of carrier phase error variations, and that the MPR capability is fully utilized.
Conclusion
In this article, we proposed a feedback-based optimal transmit power and carrier phase adjustment scheme that coordinates the transmit power and carrier phase of each transmitter in order to fully exploit the MUD technique in MPR-capable wireless networks. To determine the optimal placement of the compound-signal constellation at the receiver, we formulated an optimization problem and numerically obtained the optimal transmit power levels and carrier phase offsets for 2–4 transmitters with M-PSK modulation. Under the proposed scheme, the compound signals from the multiple transmitters become more separable in the receiver constellation; as a result, the BER performance improves significantly in comparison with the no-adjustment cases.
As future work, we plan to implement MPR-capable wireless communication based on this transmit power and carrier phase adjustment scheme on a software-defined radio (SDR) platform to obtain an empirical evaluation.
^aThe information containing the optimal transmit power levels and carrier phase shifts requires only a small number of bits; thus, the overhead for the feedback information in the CTS frame is negligible.
This study was supported by Leading Foreign Research Institute Recruitment Program through the NRF funded by the MEST (K20901002277-12E0100-06010), by the WCU program by the MEST of Korea
(R31-10026), and by the GIST basic research project.
Sign up to receive new article alerts from EURASIP Journal on Wireless Communications and Networking
Posts by
Posts by Kim
Total # Posts: 1,974
Write 75.42 in fractional notation.
For a given quantum number, n, which electrons will experience the greatest Z*, s or p??????
what is the effect of electrons in a penetrating orbital on Z* (the effective nuclear charge) felt by electrons in a second orbital?
1. A 4.0 N force acts for 3 seconds on an object. The force suddenly increases to 15 N and acts for one more second. What impulse was imparted by these forces to the object? 2. A railroad freight
car, mass 15,000 kg, is allowed to coast along a level track at a speed of 2 m/s....
could you expand on 3 and 4 please?
1. i got 27 2. i got .46 3.still confused 4. "
1. A 4.0 N force acts for 3 seconds on an object. The force suddenly increases to 15 N and acts for one more second. What impulse was imparted by these forces to the object? 2. A railroad freight
car, mass 15,000 kg, is allowed to coast along a level track at a speed of 2 m/s....
physics- PLEASE HELP
2. A .12 kg ball is moving at 6 m/s when it is hit by a bat, causing it to reverse direction and have a speed of 14 m/s. What is the change in the magnitude of the momentum of the ball? 4. Lonnie
pitches a baseball of mass .2 kg. The ball arrives at home plate with a speed of ...
Calculate the future value of the following: o $5,000 compounded annually at 6% for 5 years o $5,000 compounded semiannually at 6% for 5 years o $5,000 compounded quarterly at 6% for 5 years o $5,000
compounded annually at 6% for 6 years Answer the following: What concl...
the cay takes place during???
Suppose that you let teh masses m1 and m2 slide (without adding the third mass). a. what is the acceleration of the mass 25g? Hint: Identify the force that produces the motion and the one that
opposes and write the equation for the net force. b. what is the change of momentum ...
looks right to me, but i only had 3 yrs french, so im not fluent
What are the checks on the power of the judiciary? Are they potent and easily invoked or weak and difficult to invoke?
Why has the Commerce Clause (Article 1, section 8, Clause 3) been used to target agriculture, transportation, finance, labor relations and the workplace?
Why do those who bear the costs (higher milk prices or taxes to support the maritime industry) not oppose these sorts of policies? Would it be rational to oppose them? Would it be rational to know
about them?
English 11th grade level
Whaat was on santiago's small line in old man and the seaa?
9th grade
at 30degrees celcius, 80 grams of potassium bromide is dissolved in 100 grams of water. is this solution unsaturated, saturated, or supersaturated.
I forgot to put this in the question i asked. Here it is again. 5/a=?/a 2power Write an equivenlant expression to this.
How to solve this: 5/a=?/a to the second power
AP World History
In a DBQ the should the thesis statement include the group part of the essay in it?
Math - gr.11
Hi, can someone help me figure out how to do this question? A 5m stepladder propped against a classroom wall forms an angel of 30 degrees with the wall. Exactly how far is the top of the ladder from
the floor? Express your answer in radical form. This chapter right now that we...
solve for y (x-3)(x^2+3x+9)=4y^2
solve for y (y^2-2y+1)-x=0
a 50 kg person is standing on mercury. mercury's mass is .0553 times the earths mass and has a radius of 2439 kilometers a. what is the persons weight on saturn b. what is the persons mass on saturn
how do i solve this, i dont need the answers, just an explanation
Explain how you would create the users for the sales organization unit and how to set up work groups in this particular situation. Keep in mind that you may have to name certain applications and
allowable tasks for each individual or job role. First MI Last Logon ID Title Dept...
A solid sphere of weight 36.0 N rolls up an incline at an angle of 30.0o. At the bottom of the incline the center of mass of the sphere has a translational speed of 4.90 m s-1. Isphere = 2/5mr^2.
What is the kinetic energy of the sphere at the bottom of the incline? How far do...
pre cal HELP
lim (x+x^2)/3x x->0 i need to factor it first then find the limit and the answer is 1/3. I just dont know how to work it out.
1) A 15.0 g sample of nickel metal is heated to 100.0 C and dropped into 55.0 g of water, initially at 23.0 C. Calculate the final temperature of the nickel and the water, if the specific heat
capacity of nickel is 0.444 j / g x C. 2) In a coffee-cup calorimeter, 1.60 g of NH4...
1. A cart of weight 20 N is accelerated across a level surface at .15 m/s^2. What net force acts on the wagon? (g=9.8m/s^2) 2. An automobile of mass 2000 kg moving at 30 m/s is braked suddenly with a
constant braking force of 10000 N. How far does the car travel before stoppin...
nevermind, someone helped me earlier
A block slides down a 10 angled incline at a constant speed. what is the coefficient of sliding friction between the blocks surface and the incline?
A 20 kg sled rests on a horizontal surface. if it takes 75 N to start the sled in motion and 60 N to keep it moving at a constant speed, what are the coefficients of static and kinetic friction
respectively? explain please
a block slides down an angle 10 incline at a constant speed. what is the coefficient of sliding friction between the block's surface and the incline?
7th grade pre algebra
What is The Answer To This Question Below? Evaluate (m-3.2)(m+4.1)when m=-4.1
Organic Chemistry
When benzyl chloride is treated with sodium iodide in acetone, it reacts much faster than 1-chlorobutane, even though both compounds are primarily alkyl chlorides. Explain this rate difference.
Organic Chemistry
Why is benzyl chloride reactive in both tests, whereas bromobenzene is unreactive?
Organic Chemistry
I think the answer to this question might be because bromide ions are better leaving groups than chloride ions. Someone please help clarify for me! Thanks in advance.
Organic Chemistry
In the tests with sodium iodide in acetone and silver nitrate in ethanol, why should 2-bromobutane react faster than 2-chlorobutane?
Need to solve. The length of a rectangle is fixed at 30cm. What width s will make the perimeter greater than 98cm? The width must be greater than _____ cm.
need to solve this. The function H described by H(x)=2.75x+71.48 can be used to perdict the hieght, in centimeters, of a woman whose humerus is x cm long. Perdict the hieght of a woman whose humerus
is 37cm long. 37cm long is ____ cm.
An automobile in surer has found that repair claims have a mean of $1520 and a stardard deviation of $770. Suppose that the next 100 claims can be ragarded as a random sample from the long-run claims
process 1. What is the mean and standrd deviation of the average x(bar) of th...
Solve. Ten graph. 4 x > 12 -
Graph the equation using the slope and the y-intercept y-4/3 x +4
Algebra College
Ok I will get this typed out. 1. Suppose you are in the market for a new home and are interested in a new housing community under construction in a different city. a) The sales representative informs
you that there are two floor plans still available, and there are a total of ...
Algebra College
Thank you for the advice. I will do that.
Algebra College
1. Suppose you are in the market for a new home and are interested in a new housing community under construction in a different city. a) The sales representative informs you that there are two floor
plans still available, and that there are a total of 56 houses available. Use ...
hi i want to see if this works
Find all the numbers between 1 and 1000 that have 2 their only prime factor? I can't do it alone some help me!!!!!!!!!!!
Social Studies
what is meaning of motto,or short saying ,on the great seal
Language Arts
swa-swa is an example of a word from ?
A pasture has four straight sides. Three sides are 3/8 miles long. It took 1 3/8 miles of fencing to enclose the pasture. What is the length of the fourth side? Using a verbal model write an equation
for this situation, where x is the length of the fourth side. Solve the equat...
graphs are so hard so I really need help
Expand the binomial: y = (x-3)^4
Factor the given equation: p(x) = x^3 - 25x
given the functions: f(x) = x^2 - 1, g(x) = x/x + 2 Why is x^2 - 1/x^2 + 1 the equation of (g o f)(x)?
no, I is just like I wrote it
Please help me solve this problem 2/5(2x-3)> please show me how this is done
I have to seperate salt, sand, steel shot, copper shot, and plastic pellets(one type is heavier than water. Any help? thank you
what does it mean that the united states has a mixed economy
what large land feature lies between the mississippi river and the rocky mountains?
How do you know if a value is a solution for an inequality? How is this different from determining if a value is a solution to an equation? If replacing the equal sign of an equation and put an
inequality sign in its polace, is there ever a time when the same value will be a s...
Find f(x+ (delta)x)- f(x) --------------------- (delta)x if f(x)=8x(squared)+ 1 This is so confusing! What is all the delta x stuff?
a wooden pancake looking at milk wizard of gauze
a wooden pancake looking at milk wizard of gauze
Algebra 2
Jessica's bank contains 18 quarters & dimes, of which q are quarters. Find the total value of the coins in dollars.
Solve for x and y 40 / (x+y) = 2.70 40 / (x-y) = 4.70
science can someone please check
how does one cite this in APA form.
writing skils
frequently? /
When equations have infinitely many solutions, How would you know this? also, when working on equations that has no solutions, How would I know this? If posible can you give me examples of each.
Thank you
where di i found this information about 2 award winners?
Suppose a Midwest Telephone and Telegraph (MTT) Company bond, maturing in 1 year, can be purchased today for $975. Assuming that the bond is held until maturity, the investor will receive $1,000
(principal) plus 6 percent interest (that is, 0.06 3 $1,000 5 $60). Determine the ...
Please help explain the following: Estimate the solution of the equation f(x)= -1
shut up
adult education
I am having a difficult time accessing information for a paper I need to reasearch on The American Medical Website. The topic is ethical opinions on computer confidentiality. They give you a hint
that the articles should start with an E, but I have searched for hours and just ...
i need help becaus i dont know how to do it
What is synthetic division? (rational expressions)
social studies
what is the meanining of great seal
social studies
That does fit...thank you very much....
social studies
thank you for you help.this is a crossword puzzle...9 letters..2nd letter i and 5th letter is a t....any suggestions
social studies
What is a group of people that has reactionary views called??
what type of unwanted reaction you might encounter form business committee
can u help with accounts
algebra 2
how to find accerlation
Write the ratio in simplest form. The ratio of 75 seconds to 3 minutes.
a simple process for finding rates and unite price solutions.
Algerbra II
Classify the system , and determine the number of solutions. The system is consistent and dependent and has infinitely many solutions. The system is inconsistent and independent and has no solutions.
The system is inconsistent and dependent and has no solutions. The system is...
algebra 2
Classify the system , and determine the number of solutions. This system is inconsistent. It has infinitely many solutions. This system is consistent. It has infinitely many solutions. This system is
inconsistent. It has no solutions. This system is consistent. It has one sol...
Law and Ethics
How did you find this information? I am taking the same class and have looked for hours without finding anything of value.
i need help so i may locate nouns in sentences. please help me.
what the noun in the sentence an extreme climate may affect your communication effectiveness.
what is the noun in the sentence poor gramer may result in miscommunication.
Sternbergs Love Theory ... I know the 8 different types but I need the characteristics of them and I cannot find them anywhere.
(11xy)^(-5/7) = 1/(11xy)^(-5/7). Is this answer correct? Can someone tell me if I did this expression right? The instructions are to rewrite the following expression with positive exponents.
Rewrite the following expression with positive exponents: (11xy)^(-5/7) = 1/(11xy)^(-5/7). Is this answer correct? Can someone help me with this equation? I am trying to learn.
i need help on finding the missing finger
When managing a business, it is important to take inventory of where your money is spent. You have a monthly budget of $5,000. Refer to the table below and answer the questions that follow. Round
your answers to the nearest tenth of a percent. Category Cost Percentage Labor $1...
Hermes can travel 2 mi up Mt Olympus at a rate that is 18 mi/hr slower than he rides Pegasus down the same distance. The entire trip takes 14 min. How fast can he go up? How fast can he ride down?
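The Hermes problem reduces to a rational equation. Below is one way to set it up and solve it in Python: let d be the downhill speed in mi/h, so the uphill speed is d - 18, and the two 2-mile legs must total 14 minutes:

```python
import math

# 2/(d-18) + 2/d = 14/60 hours; clearing denominators gives 7d^2 - 246d + 1080 = 0
a, b, c = 7.0, -246.0, 1080.0
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1.0, -1.0)]
down = max(roots)        # the smaller root is below 18, which would make the uphill speed negative
up = down - 18
print(f"down {down:g} mi/h, up {up:g} mi/h")
```

So Hermes climbs at 12 mi/h and rides down at 30 mi/h, and indeed 2/12 + 2/30 hours is 14 minutes.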
Jackie can shell a quart of pecans twice as fast as John. Together they can shell a quart of pecans in 20 min. How long would it take John to shell a quart of pecans by himself?
Go to the state you live in and then .gov back slash mde. Check this site out it may help you.
Vertical Distribution of Seismic Force - Structural
I am studying for the SE Lateral portion of the test and started going through Kaplan's "Seismic Design and Review for the PE Exam" by Alan Williams. The question I have is: when the book does example 2.9, explaining the vertical distribution of seismic forces, it goes through the steps of calculating the vertical distribution factor (Cv) for each floor the same way I would, but when it calculates the total design shear to be multiplied by this factor, it sums the effective seismic weights for each floor.
The problem I see is that by summing the seismic weight for each floor you are forgetting to account for the wall weights below the midheight of the first floor. Am I wrong for thinking this needs to be accounted for? ASCE 7 states that it should be the "total design lateral shear at the base of the structure".
Thank you for your help.
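For reference, the vertical distribution has the ASCE 7 form F_x = C_vx * V with C_vx = w_x h_x^k / sum(w_i h_i^k). The sketch below uses made-up floor weights and heights (not the Kaplan example); the point of the question is that the base shear V comes from the total effective seismic weight W, which is where any wall weight tributary to the base would enter:

```python
# Vertical distribution of a base shear V over the floors (ASCE 7 form).
# All numbers below are illustrative, not from the Kaplan example.
k = 1.0                                   # exponent (1.0 for short-period structures)
Cs = 0.12                                 # assumed seismic response coefficient
floors = [(300.0, 4.0), (300.0, 8.0), (250.0, 12.0)]   # (w_i in kips, h_i in ft)
W = sum(w for w, h in floors)             # effective seismic weight from the stories
V = Cs * W                                # base shear
denom = sum(w * h**k for w, h in floors)
F = [V * (w * h**k) / denom for w, h in floors]
print([round(f, 1) for f in F], "sum =", round(sum(F), 1), "V =", V)
```

The story forces always sum back to V, so whatever weight is (or is not) included in W scales every story force proportionally.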
Non parametric and F test!!!
October 17th 2008, 12:15 AM
Col carter
Non parametric and F test!!!
Morning mathematicians, I have this question and I have an answer, but I'm not sure that my method is completely correct!! If anyone could have a quick check I'll be happy.
The question is as follows---
18 of the subjects are selected for a particular study. The value recorded for each is the % improvement over the 4 week period in their time for running 400 metres. 8 of these subjects (group A) were in the group which the doctor did not expect to benefit from the training. Results were as follows:

Group A (n = 8): 11, 7, 13, 8, 32, 13, 22, 6 (sample mean 14, sample variance 78.29)
Group B (n = 10): 15, 11, 19, 21, 16, 9, 14, 13, 17, 20 (sample mean 15.5, sample variance 15.17)
i) Use an F test to determine whether it is reasonable to assume that the underlying populations have the same variances.
For this one I did as follows:
Null hypothesis: variance A = variance B
Alternative hypothesis: variance A ≠ variance B
F = variance of A / variance of B = 78.29/15.17 = 5.1608 (4 dp)
Critical value at 5% (two-tail) = 4.197
5.1608 > 4.197, therefore we reject the null hypothesis.
ii) Carry out a suitable non-parametric test procedure to test whether the subjects in group A showed less improvement than those in group B.
Group A rank sum = 62.5
Group B rank sum = 108.5
Null hypothesis: A = B
Alternative hypothesis: A < B
m is less than or equal to n, where m = 8 and n = 10, hence
mean = 0.5 * 8 * (8+10+1), mean = 116
variance = (1/12) * 8 * 10 * (8+10+1) = 126.6666...
Using the normal approximation X ~ (116, 126.6666...):
z = (62.5 - 116)/SQRT(126.666...)
There was no significance level given, so I tested at 5%, so the critical z = 1.6449,
so we cannot reject at the 5% level, so we accept the null hypothesis.
thanks guys | {"url":"http://mathhelpforum.com/advanced-statistics/54180-non-parametric-f-test-print.html","timestamp":"2014-04-20T12:11:05Z","content_type":null,"content_length":"5123","record_id":"<urn:uuid:c4191ae7-5766-4353-ba9e-fc1ca1f02036>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
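These numbers can be checked with a few lines of standard-library Python. One thing the check turns up: with m = 8 and n = 10, the rank-sum mean m(m+n+1)/2 works out to 0.5 × 8 × 19 = 76, not 116, though with z ≈ -1.20 the conclusion of not rejecting at the 5% level is unchanged:

```python
from statistics import variance
from math import sqrt

A = [11, 7, 13, 8, 32, 13, 22, 6]
B = [15, 11, 19, 21, 16, 9, 14, 13, 17, 20]

# (i) F test for equal variances
F = variance(A) / variance(B)
print(f"F = {F:.4f}  (critical value 4.197 at 5%, two-tail)")

# (ii) Wilcoxon rank-sum, using mid-ranks for tied values
pooled = sorted(A + B)
rank = {}
i = 0
while i < len(pooled):
    j = i
    while j < len(pooled) and pooled[j] == pooled[i]:
        j += 1
    rank[pooled[i]] = (i + 1 + j) / 2      # average of ranks i+1 .. j
    i = j
W_A = sum(rank[x] for x in A)
m, n = len(A), len(B)
mean_W = m * (m + n + 1) / 2               # = 76
var_W = m * n * (m + n + 1) / 12           # = 126.67
z = (W_A - mean_W) / sqrt(var_W)
print(f"W_A = {W_A}, mean = {mean_W}, z = {z:.2f}")
```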
Work, Energy and Power
This page supports the multimedia tutorial Energy and Power
This page gives a background to some of the clips shown in the multimedia tutorial.
We all know about physical work, so we started the tutorial with this example, which also gives an idea of the size of the quantities involved. We begin with the calculations behind the histograms we
showed. These are 20 kg bags so the weight of each is about 200 newtons down, which is the grey arrow. Normally we don't say 'down' in this context, because weight is always in a direction close to
down. I did so here to remind you that weight is a vector. So let's write, for one bag,
W = − mgj = − (200 N)j.
Check that notation: weight, W, is a vector, whereas work, W, is a scalar. (Occasionally we shall also need W for the magnitude of W, but you will know from context which is which.) The motion of the
bags is slow, their accelerations are small compared to g, so the force required to accelerate them is small compared to their weight. So when I lift them, I'm applying a force F ≅ − W = (200 N)j,
which is the black arrow. If you remember the scalar product, you'll know that i.j = 0 but j.j = 1. So, if I apply this constant force over a displacement Δs = Δxi + Δyj, the work (W) I do is
W = F.Δs = (mgj).(Δxi + Δyj) = mg Δy
(which, as we show in the multimedia tutorial, is the increase in potential energy of a mass m in a gravitational field of magnitude g when raised a height Δy). So, for the first bag, the force is
200 N, I lift it about 0.7 m (the red arrow), so I do 140 J of work. The joule (symbol J) is the SI unit of work: one newton times one metre. The joule is not very big on a human scale: lift a small
apple (weight about 1 N) through a height of 1 m and you've done 1 joule of work on the apple – but rather more than that in moving your arm! Similarly, although I only do 140 J on the bag, I do more
work on moving my arms and torso. It's possible for a fit person to do a megajoule of work in an hour.
The pile is now shorter so I must lift the second bag through a larger increase in height: which reminds me that work is proportional to displacement!
Work is also proportional to the force, so lifting two bags requires twice the force and I do twice as much work as on the first bag.
Hmm, I've not planned this well and must lift the last two bags further: 400 N times 1.5 m is 600 J.
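The histogram values quoted above follow directly from W = F Δy. A quick check in Python, using the forces and heights given in the text (the unquantified second, single-bag lift is omitted):

```python
# Work done in each lift: W = F * dy, with F the weight being lifted.
lifts = [(200.0, 0.7),   # one bag, 0.7 m -> 140 J
         (400.0, 0.7),   # two bags, same height -> 280 J (twice the work)
         (400.0, 1.5)]   # two bags, 1.5 m -> 600 J
works = [F * dy for F, dy in lifts]
print(works, "total:", sum(works), "J")
```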
Look at the big displacement of the trolley and how easy it is. The trolley is supporting six bags, so the force it applies upwards has magnitude 1.2 kN (black arrow), and it moves several metres
(red arrow). But the force is in the j (or y) direction and the displacement in the i (or x) direction: they are at right angles. Remember
i.j = 0, or W = F Δs cos θ = 0, since here θ = 90°.
So no work is done.
Lifting 20 kg bags (weight = 200 N) is not so hard. Lifting my own 70 kg mass (weight W = 700 N)requires more force. But not if we use pulleys (which are discussed in more detail in the physics
of blocks and pulleys).
Here, a single rope goes from the support, down to my harness, round the pulley, back to the support, round another pulley and back to my hands. The pulleys turn easily, so the tension T in each
section of the rope is the same. There are three sections pulling me upwards. From Newton's second law, the total force acting on me equals my mass times my acceleration. Compared with g, my
acceleration here is negligible. So
3T + W = ma ≅ 0
So, if we neglect my (modest) acceleration, the force of the three sections pulling upwards on me equals my weight: the magnitude of the tension is T = |W|/3 = (700 N)/3.
So the force I need supply with my arms is reduced. However, to lift my body through say 2 m in altitude, I must still do (700 N)(2 m) = 1.4 kJ of work. How is this possible?
Well, each of the three sections of rope shortens by 2 m. So my hands pull the rope 6 m. I do work (6 m)(700 N)/3 = 1.4 kJ of work (plus a bit extra to overcome friction in the pulleys). Like
levers, blocks and pulleys don't save you work, but they can reduce (or increase) the force, which can make a task more convenient and comfortable.
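The trade-off in the pulley example is easy to tabulate: the force drops by the number of supporting rope sections, but the rope pulled grows by the same factor, so the work is unchanged. A sketch:

```python
W_weight = 700.0      # N, my weight
n = 3                 # rope sections supporting the harness
dh = 2.0              # m of altitude gained

T = W_weight / n              # tension my hands must supply, about 233 N
rope_pulled = n * dh          # each section shortens by dh, so 6 m of rope
work_by_hand = T * rope_pulled
print(f"T = {T:.0f} N, rope pulled = {rope_pulled} m, work = {work_by_hand:.0f} J")
```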
Kinetic energy and the work energy theorem
The multimedia tutorial presents this theorem, but perhaps you'd like to see it again here. Let's apply a constant force F to a mass m as it moves, in one dimension, a distance x. (It might, for
instance, be the magnetic force that we used in our section on Newton's laws.) The force is constant over x, so the work done ∫ F.dx increases linearly with x.
Once we relate this to time and velocity, we shall have to do the integration. (Remember that there is help with calculus.) So let's consider the case – still one dimensional – in which the force is
applied over a short distance dx, and the mass m increases in velocity from v to v+dv.
The total work done on the mass is
dW = Fdx
where F is the total force acting on the mass. Substituting from Newton's second law, F = ma = m(dv/dt), gives:
dW = m(dv/dt)dx = m*dx*dv/dt
where we have written the multiplication and division explicitly. dx, dv and dt are all small quantities, but there is no reason why we cannot change the order of multiplication. So let's write:
dW = m*dv*dx/dt = m*v*dv
The advantage of this rearrangment is that we can now do the integral easily:
W = ∫ dW = ∫ mv.dv
Suppose we start from v = 0, then the total work done to accelerate mass m from rest to a speed v is:
W = ∫ mv.dv = ½mv^2
This quantity is so useful that we give it a name, the kinetic energy and write
K = ½mv^2
So you don't like calculus? Let's use the equations from one-dimensional kinematics (for which there is a multimedia tutorial). Let's suppose that a body starts from rest and that we apply a
constant (total) force F for a certain time T in one example, and for twice that time (2T) in a later example (the black graphs at right).
The final velocity will be v = aT = (F/m)T in the first example, and twice that value in the second example (the red graphs at right).
The distance travelled while the force is acting, i.e. the distance travelled during the acceleration, is now four times as great, as shown in the purple graphs at right. So the constant force has
been applied over four times the distance, and has done four times the work. So, even though the velocity has only doubled, we have done four times as much work (blue graph at right).
That is an important consequence: at twice the speed, a mass has four times the kinetic energy. This has important implications for road safety, as we see next.
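We can verify this numerically: apply a constant force from rest for a time T and then for 2T, and compare the work done with the kinetic energy gained. The numbers below are arbitrary:

```python
# Constant force F on mass m from rest: v = (F/m) t and x = (F/m) t^2 / 2.
F, m, T = 2.0, 1.0, 3.0

def work_and_ke(t):
    a = F / m
    v = a * t
    x = 0.5 * a * t * t
    return F * x, 0.5 * m * v * v      # (work done, kinetic energy)

W1, K1 = work_and_ke(T)
W2, K2 = work_and_ke(2 * T)
print(W1, K1, W2, K2)     # work equals KE each time, and doubling T quadruples both
```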
Stopping distances and the work energy theorem
If I travel twice as fast on my bicycle, how much further does it take to stop? (I include only the distance after I apply the brakes, not the time it takes me to react to danger and to apply the
At twice the speed, my kinetic energy K = ½mv^2 is four times as great. So, to do four times as much (negative) work, the braking force (assumed constant) must be applied over four times the
distance. Please remember this on the road.
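In symbols: the braking force f does work −fd = −½mv², so d = v²/(2f/m), and the stopping distance scales with the square of the speed. A sketch with an assumed constant deceleration:

```python
# Stopping distance from the work-energy theorem: d = v^2 / (2 * decel).
decel = 6.0                      # m/s^2, assumed constant braking deceleration

def stop_distance(v):
    return v * v / (2 * decel)

d1 = stop_distance(10.0)         # about 8.3 m
d2 = stop_distance(20.0)         # about 33.3 m: twice the speed, four times the distance
print(d1, d2, d2 / d1)
```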
Suppose I slowly lift a mass in a gravitational field. In this clip from the multimedia tutorial, the rope, with a little assistance from me, is slowly lifting a container of water. The tension
force F is doing work W on the container, but it is not increasing its kinetic energy. The reason, of course, is that the weight mg of the container is pulling the other way: it is doing negative
work on the container. However, the work W is not lost: we can recover it: we can slowly lower the container, and thus lift the brick on the other end of the rope.
So where is the work done by F going when we lift the container? It is, in a sense, stored in the gravitational interaction between the container and the earth. This 'stored work' has the potential
to do work for us. This is an example of potential energy – in this case gravitational potential energy. So, how much potential energy do we store in this case?
We use a force F to move an object of mass m a displacement ds in a gravitational field, so we do work
dW = F.ds ,
(where you might wish to revise vectors). Suppose that we are moving it in such a way that we do not change its velocity (and so don't change its kinetic energy). Then the total force on it is
zero, so F + mg = 0, so
dW = −mg.ds .
However, g is in the negative vertical direction, say the minus y direction, so
dW = mgdy.
For displacements on the planetary scale, we'd have to consider the variation of the gravitational field with height, which we do in the section on gravity. For more modest displacements, g is
uniform and integration gives us
∫ dU[grav] = ∫ dW = ∫ mg dy = mgΔy = mgΔh
where h is commonly used for the vertical coordinate. U is defined by an integral, and integrals require a constant of integration. For potential energy, this constant is the reference for the zero
of potential energy. If we define U[grav] to be zero at h = 0, then we can write
U[grav] = mgh.
As we shall see, not all forces allow one to define a potential energy. However, another example is the
Potential energy of a spring.
If we slowly compress or extend a spring from its resting position, again we do work without creating kinetic energy. But again it is 'stored' – we can get it back. From Hooke's law, the force
exerted by a spring is F[spring] = − kx, where x is the displacement from its unstretched length, and k is the spring constant for that particular spring. Because we are not accelerating anything,
we have to apply a force F = − F[spring], so
∫ dU[spring] = ∫ dW = ∫ F dx = −∫ F[spring]dx = ∫ kx dx = Δ(½kx^2)
Again, we have a constant of integration and a zero of potential energy to define. Usually, we set U = 0 at x = 0, so
U[spring] = ½kx^2.
Note that, with this reference value, U[spring] is always positive: with respect to the unstressed state, both stretching (x > 0) and compressing (x < 0) require work, so the potential energy is
positive in each case.
In the film clip, I do work to store potential energy in the spring, the spring then does work on the mass, giving it kinetic energy. Biochemical energy in my arm was converted into potential
energy in the spring and then to kinetic energy.
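Equating the stored spring energy to the kinetic energy of the launched mass gives v = x√(k/m). A sketch with made-up values of k, m and x:

```python
import math

k, m, x = 50.0, 0.2, 0.1        # N/m, kg, m (illustrative values)
U = 0.5 * k * x * x             # energy stored in the compressed spring
v = x * math.sqrt(k / m)        # launch speed if all of U becomes kinetic energy
K = 0.5 * m * v * v
print(f"U = {U} J, v = {v:.2f} m/s, K = {K:.3f} J")
```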
Conservative and non-conservative forces
Let's look at the work that I do in moving a mass in a gravitational field. We'll pretend that I do this with accelerations so slow that the mass is always in mechanical equilibrium, i.e. that the
force exerted by my hand plus the weight of the mass add to zero, so
F[hand] = −mg
The work I do against gravity is ∫ F[hand].ds, which is shown as the brown coloured histogram.
As I lift the mass, F[hand] is upwards (positive) and s is also positive, so the work done by me is positive:
∫ F[hand].ds > 0.
As I lower the mass, F[hand] is still upwards (positive) but now s is negative, so the work done by me is negative:
∫ F[hand].ds < 0.
Consequently, round a complete cycle that returns the mass to its starting point, ∫ F[hand].ds =0. Similarly, the work done by gravity around the cycle is zero (because F[grav] = −F[hand]). This
makes gravity a conservative force:
Definition: A conservative force is one that does zero work around a closed loop in space. It follows that, for a conservative force F, we may define a potential energy as a function of position r:
U = U(r) ≡ −∫ F.dr , i.e. minus the work done by the conservative force in moving from a reference point to r.
If the work done around a closed loop is not zero, then we cannot define such a function: its value would have to change with time if we went around such a loop. Forces with this property are
called, obviously, nonconservative forces.
So, what sort of force is that exerted by an ideal spring?
Again, let's imagine that I do this so slowly that the spring is in mechanical equilibrium: F[hand] = −F[spring]. I move my hand to the right, stretching the spring. F[hand] is positive and ds is
positive. I do positive work (shown in the histogram) and the spring does negative work. Then I move my hand to the left, but still pulling to maintain the stretch in the spring. As the spring
shortens, F[hand] is still positive but now ds is negative. I do negative work (shown in the histogram) and the spring does positive work. For the spring, ∫ F.dr around a closed path is zero. The
force exerted by an ideal spring is a conservative force.
What sort of a force is friction?
Again, let's imagine that I do this so slowly that the mass is in mechanical equilibrium: F[hand] = −F[friction]. Moving to the right, I apply a force to the right and the object moves to the right:
F[hand] and ds are both positive: I do positive work (shown in the histogram) and friction does negative work. Moving to the left, I apply a force to the left and the object moves to the left: F
[hand] and ds are both positive: I do positive work (shown in the histogram) and friction does negative work. So, around a closed loop, the work done against friction is greater than zero, so
friction is a nonconservative force.
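The sign bookkeeping for an out-and-back trip makes the distinction concrete: gravity's two contributions cancel, friction's add. A sketch:

```python
mg = 2.0        # N, weight of the object
f = 0.5         # N, magnitude of the kinetic friction force
d = 1.0         # m travelled each way

# Work done BY gravity: force -mg, displacement +d going up, -d coming down.
W_gravity = (-mg) * d + (-mg) * (-d)         # the two legs cancel: 0
# Work done BY friction: it always opposes the motion, so it flips sign with direction.
W_friction = (-f) * d + (+f) * (-d)          # the two legs add: -2 f d
print(W_gravity, W_friction)
```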
Conservation of mechanical energy
We saw above in the work energy theorem that the total work ΔW done on an object equals the increase ΔK in its kinetic energy. But consider the case where all of the forces that do the work ΔW are
conservative forces: here, the work done by those forces is minus one times the work done against them, in other words it is −ΔU. So, if the only forces that act are conservative forces, then ΔU + ΔK
= 0.
Define the mechanical energy: E ≡ U + K.
So , if the only forces that act are conservative forces, mechanical energy is conserved. We shall make this stronger below but, before we do let's look at an example in which mechanical energy is
(nearly) conserved.
Kinetic and potential energy in the pendulum
This video clip shows an example of the exchange of kinetic and potential energy in a pendulum. A warning, however: for the sake of keeping the download time small, this film clip is a single
cycle repeated. In the original film, the pendulum gradually loses energy: in each cycle, a small fraction of the energy is lost, partly in pushing the air out of the way.
The kinetic energy K is shown in red: as a function of x on the graph, and as a histogram that varies with time. Note that K goes to zero at the extremes of the motion. The potential energy U
is shown in purple. It has maxima at the extremes of the motion, when the mass is highest. Because the zero of potential energy is arbitrary, so is the zero of the total mechanical energy E = U +
K. Here, E (shown in white) is constant.
The Waves and Sound section of Physclips has a chapter on the mechanics of Oscillations and a section on Mechanical energy in Simple Harmonic Motion.
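A short simulation shows the same exchange. Semi-implicit (symplectic) Euler is used below because it keeps the total mechanical energy nearly constant; the pendulum parameters are made up:

```python
import math

g, L, m = 9.8, 1.0, 0.5
theta, omega = 0.4, 0.0            # released from rest at 0.4 rad
dt = 1.0e-4

E0 = m * g * L * (1 - math.cos(theta))     # all potential energy at release
for _ in range(20000):                     # about one full period (~2 s)
    omega += -(g / L) * math.sin(theta) * dt
    theta += omega * dt
U = m * g * L * (1 - math.cos(theta))
K = 0.5 * m * (L * omega) ** 2
print(f"E0 = {E0:.5f} J, E = {U + K:.5f} J")   # E = U + K stays close to E0
```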
Conservation of mechanical energy: statement
We have seen that, if the only forces present are conservative, then mechanical energy is conserved. However, we can go further. Provided that nonconservative forces do no work, then the increase ΔK in the kinetic energy of a body is still the work done by the conservative forces, which is −ΔU. So we can conclude that
If nonconservative forces do no work then mechanical energy (E ≡ U + K) is conserved.
This statement can be written in several ways, of which here are two:
If nonconservative forces do no work, ΔU + ΔK = 0 or U[i] + K[i] = U[f] + K[f] ,
where i and f mean initial and final. I strongly advise that you always write the qualifying clause because, in general, mechanical energy is not conserved. (And never, ever, write "kinetic energy
equals potential energy". That is not true, and you shouldn't tell lies.)
On a rolling wheel, friction does no work. Here, I'm travelling slowly so let's neglect air resistance and rolling resistance. There is a substantial friction force: it is friction between tires and
paving that accelerates me in a circle. In this case, the frictional force is at right angles to the displacement, so friction does no work. So, while I'm not pedalling, (approximately) no work
is done and my mechanical energy is (approximately) constant.
Work and power
Power is defined as the rate of doing work or the rate of transforming or transferring energy: P = dW/dt. In this example, my kinetic energy is approximately constant. However, my potential energy is
increasing. Because I'm climbing, I'm not going very fast, so the rate at which I'm doing work against nonconservative forces such as air resistance is small. The equations below allow us to
calculate the rate at which I'm doing work against gravity (which is an underestimate of the rate at which I'm doing work). My altitude is increasing at 1 m.s^−1, and my weight is 700 N, so
P = dW/dt ≅ dU/dt = mg(dh/dt) = 700 W.
The sliding problem
Here is the problem from the tutorial: doing work against a nonconservative force. Here I apply a force F via the tension in a string. The work dW that I do is
dW = F.ds = F ds cos θ
Now v = ds/dt, so the power I am applying, i.e. the rate at which I am doing work is:
P = dW/dt = F v cos θ
I'll leave it to the reader to draw a free body diagram. Then use Newton's second law, then relate P to m, g, v and μ[k].
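For readers who want to check their answer: one way the free-body analysis could come out (at constant speed the net force is zero, so N = mg − F sin θ and F cos θ = μ[k]N) is sketched below with illustrative numbers:

```python
import math

m, g, v, mu_k, theta = 10.0, 9.8, 0.5, 0.3, math.radians(30)   # illustrative values

# Constant velocity: F cos(theta) = mu_k * N and N = m g - F sin(theta),
# which gives F = mu_k * m * g / (cos(theta) + mu_k * sin(theta)).
F = mu_k * m * g / (math.cos(theta) + mu_k * math.sin(theta))
N = m * g - F * math.sin(theta)
P = F * v * math.cos(theta)       # rate of doing work, P = F v cos(theta)
print(f"F = {F:.1f} N, N = {N:.1f} N, P = {P:.1f} W")
```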
The loop-the-loop problem
This is a classic problem. A small toy car runs on wheels that are assumed to turn freely and whose mass is negligible, so we can treat it as a particle. From how high must I release it so that it will loop the loop, remaining in contact with the track all the way around?
If the car retains contact with the track then, at the top of the loop, which is circular, the centripetal acceleration will be downwards and its magnitude will be v^2/r. The forces providing this acceleration are its weight mg (acting down) and the normal force N from the track, also acting down at this point. So, if N > 0, we require v^2/r > g, or, for the critical condition at which it just loses contact, we require
v[crit]^2/r = g or v[crit]^2 = rg
We can do this problem using the conservation of mechanical energy.
U[initial] + K[initial] = U[final] + K[final]
Choosing the bottom of the track as the zero for U, we could write,
mgh[initial] + 0 = mg.(2r) + ½mv[final]^2
and, if v[final] = v[crit] = √(rg)
mgh[initial] = 2mgr + ½mgr
So the critical height is 5r/2 above the bottom of the track.
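We can check the critical height numerically: releasing from rest at h = 5r/2 should leave exactly v² = rg at the top of the loop (height 2r). A sketch with an arbitrary loop radius:

```python
g = 9.8
r = 0.3                      # m, loop radius (arbitrary)
h = 2.5 * r                  # proposed critical release height

# Conservation of mechanical energy: m g h = m g (2r) + 0.5 m v^2
v_top_sq = 2 * g * (h - 2 * r)
print(v_top_sq, r * g)       # both should equal r g
```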
The hydroelectric dam problem
The water level in a hydroelectric dam is 100 m above the height at which water comes out of the pipes. Assuming that the turbines and generators are 100% efficient, and neglecting viscosity and
turbulence, calculate the flow of water required to produce 10 MW of power. The output pipes have a cross section of 5 m^2. This problem uses the work-energy theorem and power, and requires a bit of thought. Let's do it.
Let's consider what is happening in steady state for this system. Over a time dt, some water of mass dm exits the lower pipe at speed v. This water is delivered to the top of the dam at negligible
speed. So the nett effect is to take dm of stationary water at height h and deliver it at the bottom of the dam at height zero and speed v. Looks straightforward. Let's go.
Let the flow be dm/dt. The work done by the water, dW, is minus the energy increase of the water, so
dW = − dE = − dK − dU
= − (½dm.v^2 − 0) − (0 − dm.gh) = dm(gh − ½v^2)
The power delivered is just P = dW/dt. so
P = (gh − ½v^2)dm/dt
Of course the flow dm/dt depends on v. Let's see how: In time dt, the water flows a distance vdt along the pipe. The cross section of the pipe is A, so the volume of water that has passed a given
point is dV = A(vdt). Using the definition of density, ρ = dm/dV, we have
dm/dt = ρdV/dt = ρA.(vdt)/dt = ρAv. Substituting in the equation above gives us
P = ρAv(gh − ½v^2) or
½v^3 − ghv + P/ρA = 0.
However you look at it, it's a cubic equation, which sounds like a messy solution. However, let's think of what the terms mean. The first one came from the kinetic energy term. The second is the work
done by gravity. The third is the work done on the turbines. Now, if I had designed this dam, I'd have wanted to convert as much gravitational potential energy as possible into work done on the
turbines, so I'd make the pipes wide enough so that the kinetic energy lost by the water outflow would be negligible. Let's see if my guess is correct.
If the first term is negligible, then we simply have hgv = P/ρA. So v = P/ρghA = 2 m.s^−1. So the first term would be 4 m^3.s^−3, the second would be − 2000 m^3.s^−3 and the third would be 2000 m^3.s
^−3. So yes, the guess was correct and, to the precision required of this problem, the answer is v = 2 m.s^−1.
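The cubic can also be solved directly, e.g. by bisection on the small root. With g = 9.8 m.s^−2 the root is about 2.05 m.s^−1, consistent with the estimate above to the stated precision:

```python
rho, A, h, g, P = 1000.0, 5.0, 100.0, 9.8, 10.0e6

def f(v):
    return 0.5 * v**3 - g * h * v + P / (rho * A)

lo, hi = 0.0, 10.0           # f(0) = +2000 and f(10) < 0, so the small root is bracketed
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
v = 0.5 * (lo + hi)
print(f"v = {v:.3f} m/s")    # the cubic's larger positive root is unphysical here
```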
Bernoulli's equation
Bernoulli's equation is an example of the work-energy theorem. In the animation, a fluid flows at a steady rate into a pipe with cross section A[1] and height h[1], where it has velocity v[1] and
pressure P[1]. The fluid leaves the pipe with cross section A[2] and height h[2], where it has velocity v[2] and pressure P[2]. The fluid has constant density ρ and we assume that its viscosity is
negligible, and that there is no turbulence, so that nonconservative forces do no work. What is the relation among the velocity, height and pressure?
Before doing this quantitatively, we can ask how pressure and velocity are related. At the same height, and if we have no turbulence or viscosity, then the only thing that accelerates the fluid is
the difference in pressure. The fluid will accelerate from high to low pressure so, where P is high, v should be low and vice versa. Let's see:
In a short time dt, a mass dm enters the pipe at left and, because the flow is steady, an equal mass dm flows out at right. Because the flow is steady, the total energy of the water in the pipe is
unchanged. So the total work done on dm, by the work-energy theorem, is
dW[total] = ½ dm.v[2]^2 − ½ dm.v[1]^2.
The work is done by two forces: gravity, which does work − dU[grav], and the pressure. dU[grav] is dm.gΔh, so
dW[pressure] − dU[grav] = ½ dm.v[2]^2 − ½ dm.v[1]^2 , so
dW[pressure] = ½ dm.v[2]^2 + dm.gh[2] − ½ dm.v[1]^2 − dm.gh[1] .
So, how much work is done by the difference in pressure acting across the pipe? By definition, the pressure is the force per unit area, so the force exerted by P on cross sectional area A is PA. If
this force is applied over a distance ds at right angles to A, it does work PAds. But the volume moved is dV = Ads, so the work done by pressure is PdV. The work done by P[1] is positive and that
done by P[2] is negative, so
P[1]dV − P[2]dV = ½ dm.v[2]^2 + dm.gh[2] − ½ dm.v[1]^2 − dm.gh[1] .
Now we use the definition of density: ρ = dm/dV. So, if we divide both sides of the equation by dV and rearrange the terms,
P[1] + ½ρv[1]^2 + ρgh[1] = P[2] + ½ρv[2]^2 + ρgh[2].
Of course, we could apply this analysis to any two points in the pipe so, provided that the flow is steady, incompressible, non-viscous and non turbulent, we have Bernoulli's equation
P + ½ρv^2 + ρgh = constant.
Remembering that ρ = dm/dV, we can see the significance of each of these terms: P is the work done by pressure, per unit volume, ½ρv^2 is the kinetic energy per unit volume, and ρgh is the
gravitational potential energy per unit volume. Bernoulli's equation is just the work-energy theorem, written per unit volume. In the absence of flow, this just gives the variation of pressure with height:
ΔP = − ρgΔh , if there is no flow.
If the height is constant, we have
ΔP = − Δ(½ρv^2) , if there is no change in height.
This last observation tells us that (at equal height), pressure will be high when velocity is low and vice versa. This makes sense: if the velocity has increased at constant h, then pressure must
have acted to accelerate the fluid. The fluid of course flows from high pressure to low, so it must be slower in high pressure and faster in low pressure. (With the reminder that we are neglecting
viscosity and turbulence: there are no non-conservative forces acting.)
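A constriction in a horizontal pipe illustrates the last point. Continuity (A[1]v[1] = A[2]v[2] for incompressible flow) sets the speeds, and Bernoulli's equation at constant height gives the pressure change; the pipe areas below are made up:

```python
rho = 1000.0                  # kg/m^3, water
A1, A2 = 0.02, 0.005          # m^2: pipe, then a constriction (illustrative)
v1 = 1.0                      # m/s in the wide section

v2 = v1 * A1 / A2             # continuity: the flow speeds up in the narrow section
dP = 0.5 * rho * (v1**2 - v2**2)   # P2 - P1 from Bernoulli at constant height
print(f"v2 = {v2:.1f} m/s, P2 - P1 = {dP:.0f} Pa")   # pressure is lower where flow is faster
```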
This is a nice demonstration: the hose delivers a high speed jet of air. What is holding the ball up in the air?
The drag of the air jet as it passes the ball makes it rotate, so we can deduce from the direction of rotation that most of the jet passes above the ball.
The ball has weight, and the only forces acting on it are those due to the pressure of the air around it. So we can conclude that the pressure above the ball is substantially less than that below
the ball.
This is not, however, a simple demonstration of the effect described by Bernoulli's equation. It is certainly true that the fast moving air coming out of the hose has a pressure somewhat less than
the pressure in the stationary air. Because the jet of air coming out of the hose is mainly deflected above the ball, this makes the pressure above the ball less than atmospheric. However, in this
case the jet itself is deflected by the presence of the ball, so there is also a contribution from the change in momentum of the jet. (Further, the drag that causes the rotation tells us that there
is a nonconservative force present and so Bernoulli's equation would not apply accurately here.)
Centre of mass work
When we write W = ∫F.ds, for an extended object, what is F and what is ds?
F is the total external force acting on the object which, because of Newton's third law, equals the total force on the object. ds in this case is the displacement of the centre of mass, ds[CoM]. In
this simple demonstration, the force that accelerates me is the force that the wall exerts on my hand. The wall, however, doesn't move. What does move during my acceleration is my centre of mass,
so the kinetic energy associated with the motion of my centre of mass is increased by ∫F[external].ds[CoM].
We'll leave the derivation of this to the section on centre of mass. | {"url":"http://www.animations.physics.unsw.edu.au/jw/work.htm","timestamp":"2014-04-18T15:38:30Z","content_type":null,"content_length":"60670","record_id":"<urn:uuid:7d0f455c-f022-4f9d-8af5-ad0b382f6a8d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Factor 15x^2 - 37x + 20
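One factorization that checks out is 15x^2 - 37x + 20 = (3x - 5)(5x - 4); the expansion can be verified mechanically:

```python
def expand(a, b, c, d):
    """Coefficients of (a x + b)(c x + d), as (x^2, x, constant)."""
    return (a * c, a * d + b * c, b * d)

print(expand(3, -5, 5, -4))   # matches the coefficients 15, -37, 20
```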
Next: Acknowledgments Up: Gödel Machines: Self-Referential Universal Previous: Frequently Asked Questions
In 1931, Kurt Gödel laid the foundations of theoretical computer science, using elementary arithmetics to build a universal programming language for encoding arbitrary proofs, given an arbitrary
enumerable set of axioms. He went on to construct self-referential formal statements that claim their own unprovability, using Cantor's diagonalization trick [5] to demonstrate that formal systems
such as traditional mathematics are either flawed in a certain sense or contain unprovable but true statements [11]. Since Gödel's exhibition of the fundamental limits of proof and computation, and
Konrad Zuse's subsequent construction of the first working programmable computer (1935-1941), there has been a lot of work on specialized algorithms solving problems taken from more or less general
problem classes. Apparently, however, one remarkable fact has so far escaped the attention of computer scientists: it is possible to use self-referential proof systems to build optimally efficient
yet conceptually very simple universal problem solvers.
The initial software runs an initial problem solver, e.g., one of the approaches of [16,17], which have at least an optimal order of complexity, or some less general method [20]. Simultaneously, it runs a Universal Search to test proof techniques — programs able to compute proofs concerning the system's own future performance — based on an axiomatic system that encodes a formal utility function.
After the theoretical discussion in Sections 1-5, one practical question remains: to build a particular, especially practical Gödel machine with small initial constant overhead, which generally useful theorems should one add as axioms to the initial axiomatic system?
Next: Acknowledgments Up: Gödel Machines: Self-Referential Universal Previous: Frequently Asked Questions Juergen Schmidhuber 2005-01-03 | {"url":"http://www.idsia.ch/~juergen/gmweb4/node26.html","timestamp":"2014-04-18T13:06:57Z","content_type":null,"content_length":"6427","record_id":"<urn:uuid:301ee27f-4b96-4c8a-a9be-934165e99eb6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sorting algorithms
Quicksort is one of the fastest and simplest sorting algorithms [Hoa 62]. It works recursively by a divide-and-conquer strategy.
First, the sequence to be sorted a is partitioned into two parts, such that all elements of the first part b are less than or equal to all elements of the second part c (divide). Then the two parts
are sorted separately by recursive application of the same procedure (conquer). Recombination of the two parts yields the sorted sequence (combine). Figure 1 illustrates this approach.
The first step of the partition procedure is choosing a comparison element x. All elements of the sequence that are less than x are placed in the first part, all elements greater than x are placed in
the second part. For elements equal to x it does not matter into which part they come. In the following algorithm it may also happen that an element equal to x remains between the two parts.
Algorithm Partition
Input: sequence a[0], ..., a[n-1] with n elements
Output: permutation of the sequence such that all elements a[0], ..., a[j] are less than or equal to all elements a[i], ..., a[n-1] (i > j)
Method: 1. choose the element in the middle of the sequence as comparison element x
2. let i = 0 and j = n-1
3. while i ≤ j
    1. search for the first element a[i] which is greater than or equal to x
    2. search for the last element a[j] which is less than or equal to x
    3. if i ≤ j
        1. exchange a[i] and a[j]
        2. let i = i+1 and j = j-1
After partitioning the sequence, quicksort treats the two parts recursively by the same procedure. The recursion ends whenever a part consists of one element only.
The following simulation illustrates the partitioning procedure.
The following Java program implements quicksort.
public class QuickSorter
{
    private int[] a;
    private int n;

    public void sort(int[] a)
    {
        this.a=a;
        n=a.length;
        quicksort(0, n-1);
    }

    // lo is the lower index, hi is the upper index
    // of the region of array a that is to be sorted
    private void quicksort (int lo, int hi)
    {
        if (lo>=hi) return;    // less than two elements

        // comparison element x
        int x=a[(lo+hi)/2];

        // partition
        int i=lo, j=hi;
        while (i<=j)
        {
            while (a[i]<x) i++;
            while (a[j]>x) j--;
            if (i<=j)
                exchange(i++, j--);
        }

        // recursion
        quicksort(lo, j);
        quicksort(i, hi);
    }

    private void exchange(int i, int j)
    {
        int t=a[i];
        a[i]=a[j];
        a[j]=t;
    }
} // end class QuickSorter
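To try the algorithm out, here is a compact, self-contained variant — the same partitioning logic as the QuickSorter class above, folded into static methods; the class name QuickDemo and the sample input are just for this sketch:

```java
// Self-contained demo of the quicksort scheme described above.
public class QuickDemo
{
    // sorts a[lo..hi] in place, using the middle element as comparison element
    public static void quicksort(int[] a, int lo, int hi)
    {
        if (lo >= hi) return;              // fewer than two elements
        int x = a[(lo + hi) / 2];          // comparison element
        int i = lo, j = hi;
        while (i <= j)                     // partition step
        {
            while (a[i] < x) i++;          // first element >= x from the left
            while (a[j] > x) j--;          // last element <= x from the right
            if (i <= j)
            {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++; j--;
            }
        }
        quicksort(a, lo, j);               // sort both parts recursively
        quicksort(a, i, hi);
    }

    public static void main(String[] args)
    {
        int[] a = {5, 3, 8, 1, 9, 2};
        quicksort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a));   // [1, 2, 3, 5, 8, 9]
    }
}
```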
The best-case behavior of the quicksort algorithm occurs when in each recursion step the partitioning produces two parts of equal length. In order to sort n elements, in this case the running time is
in Θ(n log(n)). This is because the recursion depth is log(n) and on each level there are n elements to be treated (Figure 2 a).
The worst case occurs when in each recursion step an unbalanced partitioning is produced, namely that one part consists of only one element and the other part consists of the rest of the elements
(Figure 2 c). Then the recursion depth is n-1 and quicksort runs in time Θ(n^2).
In the average case a partitioning as shown in Figure 2 b is to be expected.
Figure 2: Recursion depth of quicksort: a) best case, b) average case, c) worst case
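The best and worst cases sketched in Figure 2 correspond to simple recurrences for the running time T(n); writing c·n for the linear cost of partitioning a region of n elements (the constant c is an assumption of this sketch):

```latex
\text{best case:}\quad  T(n) = 2\,T(n/2) + c\,n \;\Rightarrow\; T(n) \in \Theta(n \log n)\\
\text{worst case:}\quad T(n) = T(n-1) + c\,n \;\Rightarrow\; T(n) \in \Theta(n^2)
```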
The choice of the comparison element x determines which partition is achieved. Suppose that the first element of the sequence is chosen as comparison element. This would lead to the worst case
behavior of the algorithm when the sequence is initially sorted. Therefore, it is better to choose the element in the middle of the sequence as comparison element.
Even better would it be to take the n/2-th greatest element of the sequence (the median). Then the optimal partition is achieved. Actually, it is possible to compute the median in linear time [AHU
74]. This variant of quicksort would run in time O(n log(n)) even in the worst case.
However, the beauty of quicksort lies in its simplicity. And it turns out that even in its simple form quicksort runs in O(n log(n)) on the average. Moreover, the constant hidden in the O-notation is
small. Therefore, we trade this for the (rare) worst case behavior of Θ(n^2).
Proposition: The time complexity of quicksort is in Θ(n log(n)) in the average case and in Θ(n^2) in the worst case.
Quicksort turns out to be the fastest sorting algorithm in practice. It has a time complexity of Θ(n log(n)) on the average. However, in the (very rare) worst case quicksort is as slow as Bubblesort,
namely in Θ(n^2). There are sorting algorithms with a time complexity of O(n log(n)) even in the worst case, e.g. Heapsort and Mergesort. But on the average, these algorithms are by a constant factor
slower than quicksort.
It is possible to obtain a worst case complexity of O(n log(n)) with a variant of quicksort (by choosing the median as comparison element). But this algorithm is on the average and in the worst case
by a constant factor slower than Heapsort or Mergesort; therefore, it is not interesting in practice.
Quicksort was originally published by C.A.R. Hoare [Hoa 62].
[AHU 74] A.V. Aho, J.E. Hopcroft, J.D. Ullman: The Design and Analysis of Computer Algorithms. Addison-Wesley (1974)
[Hoa 62] C.A.R. Hoare: Quicksort. Computer Journal, Vol. 5, 1, 10-15 (1962)
[CLRS 01] T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein: Introduction to Algorithms. 2nd edition, The MIT Press (2001)
[Sed 03] R. Sedgewick: Algorithms in Java, Parts 1-4. 3rd edition, Addison-Wesley (2003)
[Web 1] http://www.bluffton.edu/~nesterd/java/SortingDemo.html Simulation of quicksort and other sorting algorithms
FH Flensburg lang@fh-flensburg.de Impressum © Created: 12.09.1997 Updated: 20.10.2013 | {"url":"http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/quick/quicken.htm","timestamp":"2014-04-18T18:11:13Z","content_type":null,"content_length":"18791","record_id":"<urn:uuid:2973182c-6102-402e-a668-f2056a5252b1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: dummy variable
Re: st: dummy variable
From "Jing Zhou" <jing.zhou@rmit.edu.au>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: dummy variable
Date Fri, 16 Sep 2011 20:06:22 +1000
Thanks a lot, Maarten. your suggestions are very appreciated. Can you
please provide the reference of "The consequence of the fact that there
are so few 1s is that the variance of the variable will be low, and thus
the precision with which the effect of that variable is measured will
also be low, i.e. large standard errors and confidence intervals."?
>>> Maarten Buis <maartenlbuis@gmail.com> 16/09/11 7:30 PM >>>
If you want to get results that control for that than you should add
that variable, otherwise you should not do so. If you want to add that
variable I would first try to find out why they are missing. For
example, is there only one shareholder, and is it thus impossible for
anyone to be the second largest shareholder or is the second largest
shareholder unknown. In the former case you can set those missing
values at zero (and possible add a dummy variable for single
shareholder), while in the latter case you can think of multiple
imputation (see: -help mi-).
The consequence of the fact that there are so few 1s is that the
variance of the variable will be low, and thus the precision with
which the effect of that variable is measured will also be low, i.e.
large standard errors and confidence intervals.
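
In symbols — sketching the bivariate case, with s the residual standard deviation, n the sample size, and p the sample share of ones in the dummy D:

```latex
\operatorname{Var}(D) = p(1-p), \qquad
\widehat{\operatorname{se}}(\hat\beta)
  = \frac{s}{\sqrt{\sum_i (D_i-\bar D)^2}}
  = \frac{s}{\sqrt{n\,p(1-p)}}
```

So as p approaches 0 (very few 1s), p(1-p) shrinks and the standard error grows.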
Hope this helps,
On Fri, Sep 16, 2011 at 10:51 AM, Jing Zhou <jing.zhou@rmit.edu.au>
> Thanks Maarten, it measues whether the second largest shareholder of a
> listed company is a state shareholder.
> Jing
>>>> Maarten Buis <maartenlbuis@gmail.com> 16/09/11 6:02 PM >>>
> On Fri, Sep 16, 2011 at 2:09 AM, Jing Zhou wrote:
>> I have a sample period from 2000-2009. now I am considering to add a
> dummy variable into the regression. for the year of 2009, only 24 out of
> 550 observations (<5%) valued at 1 of this dummy. for 2008, 31 out of
> 502 observations (<5%) valued at 1. further, for the year of
> the missing value rate is 61.19%, 31.91%, and 13.33%, respectively.
> other variables of the regression, i can get relatively complete data
> value of each observation. therefore, is it still necessary to include
> this dummy in my regression? thanks!
> What is this dummy variable supposed to measure?
> -- Maarten
> --------------------------
> Maarten L. Buis
> Institut fuer Soziologie
> Universitaet Tuebingen
> Wilhelmstrasse 36
> 72074 Tuebingen
> Germany
> http://www.maartenbuis.nl
> --------------------------
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Golf, FL Algebra 2 Tutor
Find a Golf, FL Algebra 2 Tutor
...This school targets 15-22 year old students that have had behavioral, academic, and/or social problems at regular public schools. Students focus on gaining skills that help them to become
employable in the community and find success outside of the traditional school setting. Last year, I served...
15 Subjects: including algebra 2, reading, literature, algebra 1
...I have gained extensive experience tutoring Spanish as a second language. Spanish is my native language and I am fluent in English as well. My Spanish tutoring experience includes helping both
adults and children.
8 Subjects: including algebra 2, Spanish, geometry, algebra 1
I have been tutoring in Boca Raton for the last 10 years and references would be available on request. I basically tutor Math, mostly junior high and high school subjects and also tutor college
prep, both ACT and SAT. I have also tutored SSAT.
10 Subjects: including algebra 2, geometry, algebra 1, SAT math
...By the end of this course, students will have all the knowledge necessary to solve and graph equations and inequalities. They will also be able to apply this knowledge to other areas of math,
such as word problems, ratios and proportions. The course starts off with a quick review of basic algebraic concepts, such as variables, order of operations, exponents and problem solving skills.
9 Subjects: including algebra 2, geometry, ASVAB, algebra 1
...Once you got it, I may try to take you even further with more difficult problems. At Score at the Top, I would assist an Advanced Placement Student teaching Calculus II going over the text
book chapter by chapter. This was different from the usual tutoring where you guide what the student had learned previously.
16 Subjects: including algebra 2, Spanish, physics, calculus